A modified JPEG-LS lossless compression method for remote sensing images
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua
2015-12-01
Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to each image, and the compression efficiency is close to that of the conventional JPEG-LS.
The effect of JPEG compression on automated detection of microaneurysms in retinal images
NASA Astrophysics Data System (ADS)
Cree, M. J.; Jelinek, H. F.
2008-02-01
As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and the ability to predict the presence of diabetic retinopathy from the detected microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes; this may have important clinical implications for deciding on acceptable levels of compression in a fully automated eye-screening programme.
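The ROC methodology used to evaluate the detector can be sketched compactly: the area under the ROC curve (AUC) equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney statistic). The scores and labels below are illustrative, not data from the study.

```python
# ROC AUC computed as the Mann-Whitney U statistic:
# AUC = P(score of a positive > score of a negative), ties count 1/2.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]   # detector outputs per image
labels = [1, 1, 0, 1, 0, 0]               # 1 = retinopathy present
print(roc_auc(scores, labels))            # 8/9 = 0.888...
```

Comparing the AUC of the detector on uncompressed versus compressed images quantifies the degradation caused by compression.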
Oblivious image watermarking combined with JPEG compression
NASA Astrophysics Data System (ADS)
Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice
2003-06-01
For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
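One common way to make an embedded bit survive JPEG quantization, in the spirit of the joint design described above, is quantization index modulation (QIM): the host DCT coefficient is snapped to an even or odd multiple of the anticipated JPEG quantization step. This is an illustrative stand-in, not the authors' exact algorithm; the step value is an assumption.

```python
# QIM sketch: embed one bit in a DCT coefficient so that it survives
# requantization errors up to q/2, where q is the assumed JPEG step.

def qim_embed(coeff, bit, q):
    """Quantize coeff to an even (bit=0) or odd (bit=1) multiple of q."""
    k = round(coeff / q)
    if k % 2 != bit:
        k += 1 if coeff >= k * q else -1
    return k * q

def qim_extract(coeff, q):
    return round(coeff / q) % 2

q = 16                        # assumed quantization step for the target QF
c = qim_embed(37.2, 1, q)     # embed bit 1
assert qim_extract(c, q) == 1
assert qim_extract(c + 7.9, q) == 1   # robust to requantization noise < q/2
```

Choosing q from the JPEG quality factor the image is expected to undergo is what ties the embedding strength to the compression, trading transparency against robustness.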
Clunie, David A; Gebow, Dan
2015-01-01
Deidentification of medical images requires attention both to header information and to the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified are stored in a compressed form, traditionally they are decompressed, identifying text is redacted, and, if necessary, the pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently; hence, redaction can be selectively confined to only those blocks containing identifying text, leaving all other blocks unchanged. An open-source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images are described. The process can be applied either to standalone JPEG images or to JPEG bit streams encapsulated in other formats, which in the case of medical images is usually DICOM.
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2014 CFR
2014-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
JPEG vs. JPEG 2000: an objective comparison of image encoding quality
NASA Astrophysics Data System (ADS)
Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan
2004-11-01
This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
A generalized Benford's law for JPEG coefficients and its applications in image forensics
NASA Astrophysics Data System (ADS)
Fu, Dongdong; Shi, Yun Q.; Su, Wei
2007-02-01
In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed, including the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG-compressed bitmap images, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
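A minimal sketch of the first-digit statistic behind such a model: the leading-digit distribution of (magnitudes of) block-DCT coefficients is compared against a logarithmic law. The parametric form p(d) = N*log10(1 + 1/(s + d^q)) reduces to classical Benford's law for N=1, s=0, q=1, which is what this sketch evaluates; the coefficient values are illustrative, not from the paper.

```python
import math

def first_digit(x):
    """Leading decimal digit of a nonzero value."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while 0 < x < 1:
        x *= 10
    return int(x)

def benford(d, N=1.0, s=0.0, q=1.0):
    """Generalized Benford form; defaults give the classical law."""
    return N * math.log10(1 + 1 / (s + d ** q))

# illustrative coefficient magnitudes (real use: block-DCT of an image)
coeffs = [1.3, 2.7, 19.0, 104.0, 1.1, 3.9, 12.5, 1.8, 2.2, 1.05]
hist = {d: 0 for d in range(1, 10)}
for c in coeffs:
    hist[first_digit(c)] += 1
probs = {d: n / len(coeffs) for d, n in hist.items()}
print(probs[1], benford(1))   # observed vs. Benford P(d=1) ~ 0.301
```

Deviations of the observed histogram from the fitted law are what signal prior JPEG compression or double compression.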
NASA Astrophysics Data System (ADS)
Clunie, David A.
2000-05-01
Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1) and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD 15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand, six hundred and seventy-nine (3,679) single-frame grayscale images from multiple anatomical regions, modalities and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both out-performed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous-pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform-based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality, for which JPEG-LS did better (MG digital vendor A: JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.
Reversible Watermarking Surviving JPEG Compression.
Zain, J; Clarke, M
2005-01-01
This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the region of non-interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the extracted watermark will match the SHA-256 hash of the original image. The results show that the embedded watermark is robust to JPEG compression down to image quality 60 (~91% compression).
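The integrity mechanism described above can be sketched with the standard library: the SHA-256 digest of the image content is embedded bit-by-bit into pixel LSBs inside a region of non-interest. The pixel values and region layout here are illustrative, not the paper's exact block placement.

```python
import hashlib

def embed(pixels, roni_start, digest):
    """Write the digest's bits into the LSBs of pixels from roni_start."""
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    out = list(pixels)
    for k, bit in enumerate(bits):
        out[roni_start + k] = (out[roni_start + k] & ~1) | bit
    return out

def extract(pixels, roni_start, nbytes=32):
    """Read nbytes*8 LSBs back into a byte string."""
    bits = [pixels[roni_start + k] & 1 for k in range(nbytes * 8)]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(nbytes))

image = list(range(256)) * 4                          # toy 8-bit "image"
digest = hashlib.sha256(bytes(image[:512])).digest()  # hash the ROI part
marked = embed(image, 512, digest)                    # RONI = second half
assert extract(marked, 512) == digest                 # integrity verified
```

In the reversible scheme, the overwritten LSBs would additionally be saved so the original image can be restored exactly after verification.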
The JPEG XT suite of standards: status and future plans
NASA Astrophysics Data System (ADS)
Richter, Thomas; Bruylants, Tim; Schelkens, Peter; Ebrahimi, Touradj
2015-09-01
The JPEG standard has seen enormous market adoption. Daily, billions of pictures are created, stored and exchanged in this format. The JPEG committee acknowledges this success and continues its efforts in maintaining and expanding the standard specifications. JPEG XT is a standardization effort targeting the extension of JPEG by enabling support for high-dynamic-range imaging, lossless and near-lossless coding, and alpha channel coding, while also guaranteeing backward and forward compatibility with the JPEG legacy format. This paper gives an overview of the current status of the JPEG XT standards suite. It discusses the JPEG legacy specification and details how higher-dynamic-range support is facilitated for both integer and floating-point color representations. The paper shows how JPEG XT's support for lossless and near-lossless coding of low and high dynamic range images is achieved in combination with backward compatibility to legacy JPEG. In addition, the extensible box-based JPEG XT file format, on which all following and future extensions of JPEG will be based, is introduced. This paper also details how lossy and lossless representations of alpha channels are supported, allowing coding of transparency information and arbitrarily shaped images. Finally, we conclude by giving prospects on the upcoming JPEG standardization initiative JPEG Privacy & Security, and a number of other possible extensions of JPEG XT.
Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography
NASA Astrophysics Data System (ADS)
Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung
2006-03-01
Medical imaging examinations form the basis for physicians' diagnoses of diseases, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with enlarged medical image databases and rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for Digital Imaging and Communications in Medicine (DICOM). High compression ratios are considered useful for medical imagery. Therefore, this study evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured using receiver operating characteristic (ROC) analysis; that is, ROC curves are used to compare the diagnostic performance on two or more sets of reconstructed images. The analysis enables a comparison of the compression ratios achieved by JPEG and JPEG2000 for 3-D US images, and the results indicate the bit rates at which JPEG and JPEG2000 can be used for 3-D breast US images.
Workflow opportunities using JPEG 2000
NASA Astrophysics Data System (ADS)
Foshee, Scott
2002-11-01
JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling of JPEG than a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.
Request redirection paradigm in medical image archive implementation.
Dragan, Dinu; Ivetić, Dragan
2012-08-01
It is widely recognized that JPEG2000 addresses issues in medical imaging: storage, communication, sharing, remote access, interoperability, and presentation scalability. Therefore, JPEG2000 support was added to the DICOM standard in Supplement 61. Two approaches to supporting JPEG2000 medical images are explicitly defined by the DICOM standard: replacing the DICOM image format with the corresponding JPEG2000 codestream, or using the Pixel Data Provider service of DICOM Supplement 106. The latter involves a two-step retrieval of the medical image: a DICOM request and response from a DICOM server, followed by a JPIP request and response from a JPEG2000 server. We propose a novel strategy for transmission of scalable JPEG2000 images extracted from a single codestream over a DICOM network using the DICOM Private Data Element, without sacrificing system interoperability. It employs the request redirection paradigm: a DICOM request and response from the JPEG2000 server through the DICOM server. The paper presents a programming solution for implementing the request redirection paradigm in a DICOM-transparent manner.
Unequal power allocation for JPEG transmission over MIMO systems.
Sabir, Muhammad Farooq; Bovik, Alan Conrad; Heath, Robert W
2010-02-01
With the introduction of multiple transmit and receive antennas in next-generation wireless systems, real-time image and video communication is expected to become quite common, since very high data rates will become available along with improved data reliability. New joint transmission and coding schemes that exploit the advantages of multiple-antenna systems matched to source statistics are expected to be developed. Based on this idea, we present an unequal power allocation scheme for transmission of JPEG-compressed images over multiple-input multiple-output (MIMO) systems employing spatial multiplexing. The JPEG-compressed image is divided into different quality layers, and different layers are transmitted simultaneously from different transmit antennas using unequal transmit power, with a constraint on the total transmit power during any symbol period. Results show that our unequal power allocation scheme provides significant image quality improvement compared to equal power allocation schemes, with peak signal-to-noise ratio (PSNR) gains as high as 14 dB at low signal-to-noise ratios.
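The allocation idea can be sketched simply: more important JPEG quality layers (carrying DC and low-frequency content) receive proportionally more transmit power, subject to the total-power constraint. The importance weights below are assumptions for illustration, not the paper's optimized solution.

```python
# Unequal power allocation under a total-power constraint: power is
# split across layers in proportion to assumed importance weights.

def allocate_power(weights, total_power):
    s = sum(weights)
    return [total_power * w / s for w in weights]

layer_importance = [8.0, 4.0, 2.0, 1.0]   # 4 layers, one per Tx antenna
powers = allocate_power(layer_importance, total_power=1.0)
print(powers)            # [0.533..., 0.266..., 0.133..., 0.066...]
assert abs(sum(powers) - 1.0) < 1e-12     # constraint satisfied
```

In the actual scheme the weights would come from each layer's contribution to end-to-end image distortion rather than fixed constants.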
Image steganalysis using Artificial Bee Colony algorithm
NASA Astrophysics Data System (ADS)
Sajedi, Hedieh
2017-09-01
Steganography is the science of secure communication where the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of the secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources. As a result, a preprocessing phase is needed to moderate the execution time and computational resources required. In this paper, we propose a new feature-based blind steganalysis method for distinguishing stego images from cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm. The ABC algorithm is inspired by honeybees' social behaviour in their search for food sources. In the proposed method, classifier performance and the dimension of the selected feature vector are evaluated using wrapper-based methods. The experiments are performed using two large datasets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.
Estimating JPEG2000 compression for image forensics using Benford's Law
NASA Astrophysics Data System (ADS)
Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.
2010-05-01
With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG-compressed images. In our previous work, we proposed a framework incorporating the generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality than JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the first-digit probability of the DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which measures the deviation between the observed probabilities and Benford's Law. Based on the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, lower than that for DCT coefficients at 0.0034. However, the mean divergence for a JPEG2000 compression rate of 0.1 is 0.0108, much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the first-digit probabilities and divergences among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the differences among them could be used for further analysis to estimate unknown JPEG2000 compression rates.
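The abstract does not give the exact definition of the divergence factor; a chi-square-style deviation between observed first-digit probabilities and Benford's law is sketched here as an assumption, since it behaves as described (zero for Benford-distributed data, growing with deviation).

```python
import math

def benford_p(d):
    """Classical Benford probability of leading digit d."""
    return math.log10(1 + 1 / d)

def divergence(observed):
    """Chi-square-style deviation; observed: {digit: probability}, 1..9."""
    return sum((observed[d] - benford_p(d)) ** 2 / benford_p(d)
               for d in range(1, 10))

ideal = {d: benford_p(d) for d in range(1, 10)}
assert divergence(ideal) == 0.0           # Benford data: no deviation
uniform = {d: 1 / 9 for d in range(1, 10)}
print(divergence(uniform))                # > 0: uniform digits deviate
```

Under this reading, the reported means (0.0016 for DWT coefficients vs. 0.0108 at compression rate 0.1) separate uncompressed from compressed images by thresholding the divergence.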
JPIC-Rad-Hard JPEG2000 Image Compression ASIC
NASA Astrophysics Data System (ADS)
Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov
2010-08-01
JPIC is a rad-hard, high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources in optical, panchromatic and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces, and the JPEG2K-E IP core from Alma implements the compression algorithm [2]. Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.
JPEG and wavelet compression of ophthalmic images
NASA Astrophysics Data System (ADS)
Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.
1999-05-01
This study was designed to determine the degree and methods of digital image compression required to produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.
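The objective criterion in (i) is straightforward to state in code; a minimal sketch over flat pixel lists (the pixel values are illustrative):

```python
import math

def rms_error(original, compressed):
    """Root-mean-square error between two equal-length pixel sequences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(original, compressed))
                     / len(original))

orig = [100, 120, 130, 90]
comp = [101, 118, 131, 92]
print(rms_error(orig, comp))   # sqrt(2.5) ~ 1.58
```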
Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2016-01-01
Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low-bit-rate JPEG-formatted color images may allow for more highly compressed images of equivalent quality at a smaller file size or bit rate. For RGB, an image is decomposed into three color bands: red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue), and the outputs are recombined into a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3 while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.
Estimation of color filter array data from JPEG images for improved demosaicking
NASA Astrophysics Data System (ADS)
Feng, Wei; Reeves, Stanley J.
2006-02-01
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
Steganalysis based on JPEG compatibility
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2001-11-01
In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression under a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms, with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend against using images that were originally stored in the JPEG format as cover images for spatial-domain steganography.
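The compatibility test can be sketched directly: a block that came out of JPEG decompression, when forward-DCT-transformed and divided by the quantization matrix it was compressed with, yields (near-)integer values; flipping even one pixel breaks this. The flat quantization step and the coefficient pattern below are toy assumptions (real JPEG also rounds pixels to integers, which this sketch omits).

```python
import math

def dct2(block):
    """8x8 orthonormal 2-D DCT-II, as used in JPEG."""
    def c(k): return math.sqrt(0.5) if k == 0 else 1.0
    return [[0.25 * c(u) * c(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / 16)
                * math.cos((2 * y + 1) * v * math.pi / 16)
                for x in range(8) for y in range(8))
             for v in range(8)] for u in range(8)]

def idct_quantized(indices, q):
    """Build a pixel block from quantized DCT indices (inverse DCT)."""
    def c(k): return math.sqrt(0.5) if k == 0 else 1.0
    return [[sum(0.25 * c(u) * c(v) * indices[u][v] * q
                 * math.cos((2 * x + 1) * u * math.pi / 16)
                 * math.cos((2 * y + 1) * v * math.pi / 16)
                 for u in range(8) for v in range(8))
             for y in range(8)] for x in range(8)]

def compatible(block, q, tol=1e-6):
    """True if every DCT coefficient is a near-integer multiple of q."""
    return all(abs(v / q - round(v / q)) < tol
               for row in dct2(block) for v in row)

q = 10                                      # toy flat quantization step
idx = [[1 if (u, v) in {(0, 0), (0, 1), (1, 0)} else 0 for v in range(8)]
       for u in range(8)]                   # quantized DCT indices
block = idct_quantized(idx, q)              # exactly JPEG-compatible
assert compatible(block, q)
block[0][0] += 1                            # simulate an LSB-style change
assert not compatible(block, q)             # incompatibility exposes it
```

The forensic method searches over plausible quantization matrices and flags blocks that are compatible with none of them as modified.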
JPEG2000 and dissemination of cultural heritage over the Internet.
Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos
2004-03-01
By applying the latest technologies in image compression to manage the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine JPEG2000 image compression with client-server socket connections and a client browser plug-in to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.
Non-parametric adaptive JPEG fragments carving
NASA Astrophysics Data System (ADS)
Amrouche, Sabrina Cherifa; Salamani, Dalila
2018-04-01
The most challenging JPEG recovery tasks arise when the file header is missing. In this paper we propose to use a two layer machine learning model to restore headerless JPEG images. We first build a classifier able to identify the structural properties of the images/fragments and then use an AutoEncoder (AE) to learn the fragment features for the header prediction. We define a JPEG universal header and the remaining free image parameters (Height, Width) are predicted with a Gradient Boosting Classifier. Our approach resulted in 90% accuracy using the manually defined features and 78% accuracy using the AE features.
Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.
Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen
2018-07-01
Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry between upload and download: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored to specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
Toward privacy-preserving JPEG image retrieval
NASA Astrophysics Data System (ADS)
Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping
2017-07-01
This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
A block-based JPEG-LS compression technique with lossless region of interest
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, owing to its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion because of the context dependence of the algorithm, and it has a low compression ratio compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. Then we adopt a block-based image compression technique to limit the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks that include the whole or part of the region of interest (ROI), and JPEG-LS near-lossless compression to the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
A comparison of the fractal and JPEG algorithms
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Shahshahani, M.
1991-01-01
A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry-standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root-mean-square criterion and a peak signal-to-noise criterion.
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
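The sigmoidal boosting and per-pixel fusion steps can be illustrated with a small sketch. The gain, midpoint, and Gaussian exposure weighting below are assumed values for this illustration, not those of the paper:

```python
import math

def sigmoid_boost(p, gain=8.0, mid=0.5):
    """Boost a normalized pixel in [0,1] of the short exposure with an S-curve."""
    return 1.0 / (1.0 + math.exp(-gain * (p - mid)))

def exposure_weight(p):
    """Favor well-exposed pixels: weight peaks at mid-gray, falls at extremes."""
    return math.exp(-((p - 0.5) ** 2) / (2 * 0.2 ** 2))

def fuse(short_px, long_px):
    """Weighted per-pixel fusion of a boosted short exposure and a long exposure."""
    s = sigmoid_boost(short_px)
    ws, wl = exposure_weight(s), exposure_weight(long_px)
    return (ws * s + wl * long_px) / (ws + wl)
```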
The impact of skull bone intensity on the quality of compressed CT neuro images
NASA Astrophysics Data System (ADS)
Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw
2012-02-01
The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure, and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
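The segmentation step described above (simple thresholding plus a morphological operator) might look like the following minimal sketch; the threshold value and 3x3 structuring element are illustrative assumptions:

```python
def skull_mask(img, thresh=200):
    """Threshold the bright skull bone into a binary mask (value is illustrative)."""
    return [[1 if p > thresh else 0 for p in row] for row in img]

def dilate(mask):
    """3x3 binary dilation to grow the mask over the sharp bone edges."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(mask[yy][xx]
                   for yy in range(max(0, y - 1), min(h, y + 2))
                   for xx in range(max(0, x - 1), min(w, x + 2))):
                out[y][x] = 1
    return out
```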
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
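A minimal sketch of the LLR feedback idea follows. The weighting schedule is a hypothetical stand-in for the paper's function of channel condition; it only preserves the stated property that lower SNR yields a larger factor:

```python
def weight_factor(snr_db, base=1.5, slope=0.1):
    """Assumed schedule: lower channel SNR -> larger correction factor."""
    return max(1.0, base - slope * snr_db)

def update_llrs(llrs, known_correct, known_error, snr_db):
    """Scale LLRs of bits the source decoder flagged before the next iteration."""
    w = weight_factor(snr_db)
    out = list(llrs)
    for i in known_correct:
        out[i] *= w            # reinforce bits the JPEG2000 decoder validated
    for i in known_error:
        out[i] /= w            # weaken bits flagged as erroneous
    return out
```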
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
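The feature construction can be sketched for a single (horizontal) direction as follows. The threshold T=4 and the plain row-wise difference are assumptions for this illustration, not the paper's exact parameters:

```python
def difference_array(block):
    """Horizontal difference array of a 2-D block."""
    return [[row[x] - row[x + 1] for x in range(len(row) - 1)] for row in block]

def transition_matrix(diff, T=4):
    """Estimate the Markov transition probability matrix of a difference array
    after clipping values to [-T, T]."""
    size = 2 * T + 1
    counts = [[0] * size for _ in range(size)]
    for row in diff:
        clipped = [max(-T, min(T, v)) for v in row]
        for a, b in zip(clipped, clipped[1:]):
            counts[a + T][b + T] += 1
    totals = [sum(r) for r in counts]
    return [[c / t if t else 0.0 for c in r] for r, t in zip(counts, totals)]
```

The flattened matrix entries would then serve as the SVM feature vector.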
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. This phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, current SDJPEG detection methods do not provide satisfactory results, especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed, and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach achieves much better results than some existing approaches in SDJPEG patch detection, especially when the patch size is small.
Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding
NASA Astrophysics Data System (ADS)
Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin
We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is also less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images. The results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.
Image Size Variation Influence on Corrupted and Non-viewable BMP Image
NASA Astrophysics Data System (ADS)
Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah
2017-08-01
Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG files compress well, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Bitmap (BMP) images are often preferred in image processing over other formats because a BMP file contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file is converted into BMP format. Nevertheless, many factors can corrupt a BMP image, such as changes to the recorded image size that make the file non-viewable. In this paper, the experiment indicates that the size of a BMP file influences the image itself under three conditions: deletion, replacement, and insertion. From the experiment, we learned that correcting the file size can produce a viewable, if partial, file, which can then be investigated further to identify the corruption point.
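The "correcting the file size" repair can be sketched as a header patch. This relies only on the standard BMP layout, in which the total file size is stored as a little-endian uint32 at byte offset 2; the function name is illustrative:

```python
import struct

def fix_bmp_size(data: bytes) -> bytes:
    """Rewrite the BMP header's file-size field to match the actual data length,
    which can make a truncated or edited file viewable again."""
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    return data[:2] + struct.pack("<I", len(data)) + data[6:]
```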
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply the DCT to each block; (2) apply a high-frequency minimization method to the AC coefficients, reducing each block by 2/3 and resulting in a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC coefficients, while the DC components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG, with quality equivalent to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
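Step (4), the delta or differential operator on the list of DC components, and its inverse used at decompression, can be sketched directly (function names are illustrative):

```python
def delta_encode(dc):
    """Keep the first DC component; store successive differences for the rest."""
    return [dc[0]] + [b - a for a, b in zip(dc, dc[1:])]

def delta_decode(deltas):
    """Reverse the differential operator by a running sum."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

The small differences that result are what make the subsequent arithmetic coding of step (5) effective.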
High-quality JPEG compression history detection for fake uncompressed images
NASA Astrophysics Data System (ADS)
Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan
2017-05-01
Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
Kim, J H; Kang, S W; Kim, J-r; Chang, Y S
2014-01-01
Purpose To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. Horizontal OCT scans centered at the center of the fovea were conducted using spectral-domain OCT. The images were exported to Tag Image File Format (TIFF) and to 100, 75, 50, 25 and 10% quality Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio-scleral interface was not clearly identified in many images with a high degree of compression. In images with 25 and 10% JPEG quality, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions In OCT images of eyes with AMD, 50% JPEG quality would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012
Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate, utilizing the so-called ρ-domain analysis. The experimental results indicate that compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2004-01-01
The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.
Embedding intensity image into a binary hologram with strong noise resistant capability
NASA Astrophysics Data System (ADS)
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by an error diffusion method or a bit-truncation coding method. However, the fidelity of the watermark image retrieved from a binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to that of the state-of-the-art work reported.
Visualization of JPEG Metadata
NASA Astrophysics Data System (ADS)
Malik Mohamad, Kamaruddin; Deris, Mustafa Mat
There is much more information embedded in a JPEG image than just the graphics. Visualization of its metadata would benefit digital forensic investigators, allowing them to view embedded data, including from corrupted images where no graphics can be displayed, in order to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors and extraction tools are already available, but they mostly focus on visualizing the attribute information of JPEG Exif. However, none visualize metadata by consolidating the markers summary, header structure, Huffman table and quantization table in a single program. In this paper, metadata visualization is done by developing a program able to summarize all existing markers, the header structure, the Huffman table and the quantization table in a JPEG. The result shows that visualization of metadata makes it easier to view the hidden information within a JPEG.
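The "markers summary" part of such a visualization can be sketched as a simple walk over the JPEG byte stream. Segment lengths are big-endian and include the two length bytes, per the JPEG specification; the function name is illustrative:

```python
import struct

def list_markers(data: bytes):
    """List (offset, marker byte, segment length) for each marker segment,
    stopping at SOS where entropy-coded data begins."""
    markers, i = [], 0
    while i + 1 < len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker in (0xD8, 0xD9):            # SOI / EOI carry no payload
            markers.append((i, marker, 0))
            i += 2
        else:
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            markers.append((i, marker, length))
            if marker == 0xDA:                # SOS: entropy-coded data follows
                break
            i += 2 + length
    return markers
```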
77 FR 59692 - 2014 Diversity Immigrant Visa Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...
NASA Astrophysics Data System (ADS)
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
NASA Astrophysics Data System (ADS)
Sablik, Thomas; Velten, Jörg; Kummert, Anton
2015-03-01
A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected; for this purpose, a detection method is presented. The implemented method uses Haar cascades to detect faces. Integral images are used to speed up the calculations and the detection. Multiple detections of one face are combined. Succeeding steps embed the data into the image as part of JPEG compression using spectral-domain methods and protect the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread-spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared regarding their robustness. Moreover, the performance of the method on tampered images is presented.
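Integral images, mentioned above as the speed-up for Haar-cascade detection, can be sketched in a few lines: each entry stores the sum of all pixels above and to the left, so any rectangular sum costs only four lookups regardless of its size.

```python
def integral_image(img):
    """Summed-area table with a zero-padded first row and column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] via four table lookups."""
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]
```

This constant-time rectangle sum is what makes evaluating many Haar features per window affordable.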
Applications of the JPEG standard in a medical environment
NASA Astrophysics Data System (ADS)
Wittenberg, Ulrich
1993-10-01
JPEG is a very versatile image coding and compression standard for single images. Medical images make higher demands on image quality and precision than the usual 'pretty pictures'. In this paper the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12-bit precision allowed for the DCT modes. The performance of the spatial predictors is investigated. From the user's point of view, the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective time one has to wait for it, and so reduce the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care, because repeated lossy coding and decoding leads to a degradation of the image quality. The amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as the creation date or details of the imaging modality. Therefore it will be an embedded coding format in standards like TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to match the requirements of the medical community.
Improved JPEG anti-forensics with better image visual quality and forensic undetectability.
Singh, Gurinder; Singh, Kulbir
2017-08-01
There is an immediate need to validate the authenticity of digital images due to the availability of powerful image processing tools that can easily manipulate digital image information without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression. Therefore, to evaluate the competency of JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain can be reduced significantly by applying the proposed de-noising operation. Two types of de-noising algorithms are proposed: one based on a constrained minimization problem of the total variation of energy, and the other on a normalized weighting function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform the existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, but with high computational cost. Copyright © 2017 Elsevier B.V. All rights reserved.
1995-02-01
modification of existing JPEG compression and decompression software available from the Independent JPEG Group to process CIELAB color images and to use...externally specified Huffman tables. In addition a conversion program was written to convert CIELAB color space images to red, green, blue color space
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...
Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.
Punys, Vytenis; Maknickas, Ramunas
2011-01-01
Large virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification based on image analysis might be faster if performed on the compressed data (approximately 20 times less than the original amount), which represent the coefficients of the wavelet transform. An analysis of possible edge detection without the reverse wavelet transform is presented in the paper. Two edge detection methods suitable for JPEG2000 bi-orthogonal wavelets are proposed. The methods are adjusted according to calculated parameters of a sigmoid edge model. The results of the model analysis indicate the more suitable method for a given bi-orthogonal wavelet.
McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R
2007-05-01
The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative assessment of image quality and to compare this with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). When we compared subjective indexes, JPEG 2000 compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R2 > 0.92) between the qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.
Switching theory-based steganographic system for JPEG images
NASA Astrophysics Data System (ADS)
Cherukuri, Ravindranath C.; Agaian, Sos S.
2007-04-01
Cellular communications constitute a significant portion of the global telecommunications market; therefore, the need for secure communication over a mobile platform has increased exponentially. Steganography, the art of hiding critical data in an innocuous signal, provides an answer to this need. JPEG is one of the most commonly used formats for storing and transmitting images on the web, and pictures captured with mobile cameras are mostly in JPEG format. In this article, we introduce a switching-theory-based steganographic system for JPEG images that is applicable to mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a subset of these coefficients, but when employed over all coefficients they show their ineffectiveness. Therefore, we propose an approach that treats each set of AC coefficients with a different framework, thus enhancing the performance of the approach. The proposed system offers high capacity and embedding efficiency simultaneously while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
Helioviewer.org: Browsing Very Large Image Archives Online Using JPEG 2000
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Schmidt, L.; Wamsler, B.; Beck, J.; Alexanderian, A.; Fleck, B.
2009-12-01
As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Recent efforts have resulted in increased performance, dynamic movie generation, and improved support for mobile web browsers. Future functionality will include: support for additional data-sources including RHESSI, SDO, STEREO, and TRACE, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.
IIPImage: Large-image visualization
NASA Astrophysics Data System (ADS)
Pillay, Ruven
2014-08-01
IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, representing the low and high frequencies, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes the probabilities of all compressed data using a table of data, and then uses a binary search to find decompressed data inside the table. Thereafter, all decoded DC values are combined with the decoded AC coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, it is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
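As an illustration of the transform chain in step (1), the sketch below applies a two-level Haar DWT (the abstract does not name the wavelet; Haar is an assumption) followed by an orthonormal DCT on the coarsest low band, and then verifies that the whole chain is perfectly invertible:

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar DWT: sum (low) and difference (high) bands.
    x = x.astype(float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return low, high

def haar_idwt(low, high):
    # Exact inverse of haar_dwt.
    x = np.empty(low.size * 2)
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

def dct_matrix(n):
    # Orthonormal DCT-II basis, applied here to the coarsest low band.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

signal = np.array([10, 12, 14, 20, 21, 22, 5, 4], dtype=float)
low1, high1 = haar_dwt(signal)      # DWT level 1
low2, high2 = haar_dwt(low1)        # DWT level 2
C = dct_matrix(low2.size)
dc_coeffs = C @ low2                # DCT of the coarsest low band

# Inverse DCT, then two inverse DWT levels, as in the decompression path.
rec = haar_idwt(haar_idwt(C.T @ dc_coeffs, high2), high1)
```

Because the DCT basis is orthonormal and Haar lifting is exact, `rec` matches `signal` to machine precision; lossy behavior enters only in the quantization and Minimize-Matrix-Size steps, which are not sketched here.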
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung
2013-10-15
Purpose: To modify the previously proposed preprocessing technique, which improves the compressibility of computed tomography (CT) images, so that it covers the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed aiming at only chest CT images, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covers the body region or not. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compression. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.
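A minimal sketch of the preprocessing idea, assuming a naive air-threshold segmentation (the paper's actual segmentation method is more elaborate and body-part aware): pixels outside the body region are replaced with a single constant, so the reversible coder sees maximal redundancy there while the diagnostic region is untouched.

```python
import numpy as np

def preprocess_ct(slice_hu, air_threshold=-500, fill_value=-1000):
    # Hypothetical simplification of the paper's idea: treat pixels below
    # an air threshold (in Hounsfield units) as outside the body and
    # replace them with one constant to maximize data redundancy.
    body_mask = slice_hu > air_threshold
    return np.where(body_mask, slice_hu, fill_value), body_mask

# Toy "slice": noisy air surrounding a near-uniform body region.
rng = np.random.default_rng(0)
img = np.full((64, 64), -1000) + rng.integers(0, 80, (64, 64))
img[16:48, 16:48] = 40 + rng.integers(-10, 10, (32, 32))

clean, mask = preprocess_ct(img)
```

After preprocessing, the background collapses to a single symbol while every in-body pixel keeps its original value, which is why the transform remains reversible for diagnostic content.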
Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters
NASA Astrophysics Data System (ADS)
Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi
A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization-step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2010-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories and users will have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
Vulnerability Analysis of HD Photo Image Viewer Applications
2007-09-01
The format, renamed to HD Photo in November of 2006, is being touted as the successor to the ubiquitous JPEG image format, as well as the eventual de facto standard in the digital photography market, with an associated state-of-the-art compression algorithm "specifically designed [for] all types of continuous tone photographic" images [HDPhotoFeatureSpec
Lossless Compression of JPEG Coded Photo Collections.
Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng
2016-04-06
The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
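The feature-domain ordering step can be illustrated with a greedy stand-in for the paper's global prediction-cost minimization (the actual feature vectors, cost model, and optimization are not specified in the abstract): start from one image and repeatedly append the nearest unvisited image in feature space, so that similar images end up adjacent in the pseudo video.

```python
import numpy as np

def order_images(features):
    # Greedy nearest-neighbour ordering: an illustrative approximation of
    # minimizing the total inter-image prediction cost, not the paper's method.
    n = len(features)
    order, visited = [0], {0}
    while len(order) < n:
        last = features[order[-1]]
        best = min((i for i in range(n) if i not in visited),
                   key=lambda i: np.linalg.norm(features[i] - last))
        order.append(best)
        visited.add(best)
    return order

# Toy feature vectors: images 0/2 and 1/3 form two similar pairs.
feats = np.array([[0.0, 0.0], [5.0, 5.0], [1.0, 0.0], [5.0, 4.0]])
sequence = order_images(feats)
```

With these toy features the ordering visits each similar pair consecutively, which is the property the pseudo-video construction relies on.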
Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise
NASA Astrophysics Data System (ADS)
Wang, Wei; Dong, Jing; Tan, Tieniu
With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm that can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region exhibits stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate the quantization noises of different spatial frequencies, i.e., low-, medium- and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is involved to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach is demonstrated here on the recently proposed HDPhoto format introduced by Microsoft and an SSIM-tuned version of it by one of the authors. We compare these two implementations with JPEG in two variations and with a visually and PSNR-optimal JPEG2000 implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.
A threshold-based fixed predictor for JPEG-LS image compression
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors in the locality of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges, but also diagonal edges. For certain thresholds, the proposed scheme simplifies to other existing schemes, so it can also be regarded as an integration of those schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detection in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
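For reference, the baseline JPEG-LS MED predictor that the proposal extends can be written as follows (the threshold-based extension itself is not specified in the abstract, so only the standard MED rule is shown):

```python
def med_predict(a, b, c):
    # JPEG-LS median edge detector (MED).
    # a = left neighbour, b = above neighbour, c = upper-left neighbour.
    if c >= max(a, b):
        return min(a, b)   # edge detected: pick the neighbour across it
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction

# Vertical edge (left context bright, above dark): predictor follows b.
vertical = med_predict(a=100, b=10, c=100)
# Smooth region: planar prediction interpolates the three neighbours.
smooth = med_predict(a=50, b=52, c=51)
```

Diagonal edges fall into the planar case, where MED's prediction error grows; that is exactly the weakness the threshold-based scheme targets.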
An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process
NASA Astrophysics Data System (ADS)
Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre
2015-02-01
This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity when compared to the traditional black and white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.
NASA Astrophysics Data System (ADS)
Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-02-01
Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
Clinical evaluation of JPEG2000 compression for digital mammography
NASA Astrophysics Data System (ADS)
Sung, Min-Mo; Kim, Hee-Joung; Kim, Eun-Kyung; Kwak, Jin-Young; Yoo, Jae-Kyung; Yoo, Hyung-Sik
2002-06-01
Medical images, such as computed radiography (CR) and digital mammographic images, will require large storage facilities and long transmission times for picture archiving and communications system (PACS) implementation. The American College of Radiology and National Electrical Manufacturers Association (ACR/NEMA) group is planning to adopt a JPEG2000 compression algorithm in the digital imaging and communications in medicine (DICOM) standard to better utilize medical images. The purpose of the study was to evaluate the compression ratios of JPEG2000 for digital mammographic images using peak signal-to-noise ratio (PSNR), receiver operating characteristic (ROC) analysis, and the t test. Traditional statistical quality measures such as PSNR, a commonly used measure for the evaluation of reconstructed images, assess how the reconstructed image differs from the original by making pixel-by-pixel comparisons. The ability to accurately discriminate diseased cases from normal cases is evaluated using ROC curve analysis, and ROC curves can be used to compare the diagnostic performance of two or more reconstructed images. The t test can also be used to evaluate the subjective image quality of reconstructed images. The results of the t test suggested that compression ratios using JPEG2000 for digital mammographic images may be as high as 15:1 without visual loss and with preservation of significant medical information at a confidence level of 99%, although both the PSNR and ROC analyses suggest that as much as an 80:1 compression ratio can be achieved without affecting clinical diagnostic performance.
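The PSNR measure used above is a pixel-by-pixel comparison of the reconstructed image against the original; a minimal sketch:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    # Mean squared error over all pixels, then the log-scaled peak ratio.
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
deg = ref.copy()
deg[0, 0] = 110.0             # a single pixel off by 10 grey levels
value = psnr(ref, deg)        # MSE = 100/64, so roughly 46 dB
```

Because PSNR only aggregates pixel differences, it cannot by itself capture diagnostic relevance, which is why the study pairs it with ROC analysis and subjective ratings.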
JPEG2000 still image coding quality.
Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei
2013-10-01
This work compares the image quality produced by two popular JPEG2000 programs. The two medical image compression implementations are both based on JPEG2000, but they differ in interface, convenience, speed of computation, and the characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The quality of compressed medical images from two image compression programs, Apollo and JJ2000, was evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1. The quality of the reconstructed images was then evaluated using five objective metrics, and the Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
Dynamic code block size for JPEG 2000
NASA Astrophysics Data System (ADS)
Tsai, Ping-Sing; LeCornec, Yann
2008-02-01
Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.
NASA Astrophysics Data System (ADS)
Starosolski, Roman
2016-07-01
Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.
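The reason RDLS can be perfectly reversible despite using an inherently irreversible denoising filter is that the filter is applied only to samples the lifting step leaves untouched, so the decoder can recompute exactly the same filter output. A toy prediction lifting step illustrating this property (the filter and signals are illustrative, not taken from the paper):

```python
import numpy as np

def smooth(v):
    # Any denoising filter of the *unmodified* input channel; its exact form
    # affects compression performance but never reversibility.
    return np.round(np.convolve(v, [0.25, 0.5, 0.25], mode="same")).astype(int)

def rdls_forward(even, odd):
    # Prediction lifting step with an integrated denoising filter:
    # odd is predicted from a denoised view of even, which stays intact.
    return even, odd - smooth(even)

def rdls_inverse(even, detail):
    # The decoder recomputes smooth(even) from the untouched samples.
    return even, detail + smooth(even)

even = np.array([10, 20, 30, 40, 50])
odd = np.array([12, 19, 33, 38, 52])
e_fwd, detail = rdls_forward(even, odd)
e_rec, odd_rec = rdls_inverse(e_fwd, detail)
```

Both channels round-trip exactly, mirroring how RDLS keeps integer lifting transforms lossless while still letting the filter suppress noise in the prediction signal.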
Interband coding extension of the new lossless JPEG standard
NASA Astrophysics Data System (ADS)
Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.
1997-01-01
Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity, at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible at a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to the basic architecture of the baseline, retaining its essential simplicity.
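The payoff of inter-band decorrelation can be seen in a toy sketch: predicting one band from a previously coded, correlated band leaves a residual with a far smaller spread, and a smaller-spread signal is cheaper to entropy-code. This is illustrative only; the paper's actual inter-band prediction and modeling are more sophisticated.

```python
import numpy as np

def interband_residual(band, ref_band):
    # Predict each pixel of `band` from the co-located pixel of a
    # previously coded reference band; only the residual need be coded.
    return band.astype(int) - ref_band.astype(int)

rng = np.random.default_rng(1)
base = rng.integers(0, 200, (32, 32))
# A correlated second band: same spatial structure plus offset and noise.
band2 = base + 20 + rng.integers(-3, 4, (32, 32))

res = interband_residual(band2, base)
```

Here the residual's standard deviation is roughly that of the small noise term, while the raw band's spread is dominated by image content, which is the redundancy an intra-band coder cannot reach.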
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, saving both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
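The second and third steps can be sketched with a toy regression: fit the IQ metric against a compression parameter on calibration data, then invert the fit to pick the parameter for a target IQ. The numbers and the log-linear model below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical calibration data for one codec: (compression ratio, SSIM).
ratios = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
ssim = np.array([0.99, 0.96, 0.90, 0.80, 0.65])

# Step 2: regress the IQ metric against the log of the compression ratio.
slope, intercept = np.polyfit(np.log(ratios), ssim, deg=1)

def ratio_for_target_ssim(target):
    # Step 3: invert the fitted line target = slope * log(r) + intercept
    # to choose the compression parameter for a requested quality level.
    return float(np.exp((target - intercept) / slope))

chosen = ratio_for_target_ssim(0.85)
```

Repeating this fit per codec and picking the codec whose inverted model yields the highest ratio (or the highest IQ at a fixed ratio) reproduces the selection logic the abstract describes.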
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
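The decimate-interpolate-sharpen pipeline (with the compression/decompression stage omitted, since any codec can sit in the middle) can be sketched as follows; the 2x2 averaging, nearest-neighbour upsampling, and unsharp mask are illustrative stand-ins for the specific techniques the abstract leaves open:

```python
import numpy as np

def decimate2(img):
    # Step 1: decimate in two dimensions by 2x2 block averaging.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    # Step 3: interpolate back to the original array size
    # (nearest-neighbour here for brevity).
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def unsharp(img, amount=0.5):
    # Step 4: sharpen edges by boosting the high-pass residual.
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1) + img) / 5.0
    return img + amount * (img - blur)

img = np.zeros((16, 16))
img[:, 8:] = 100.0                       # vertical edge test image
restored = unsharp(upsample2(decimate2(img)))
```

The sharpening stage deliberately overshoots at the edge (values above 100 appear next to it), which is what restores perceptual crispness after the resolution round trip.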
Scan-Based Implementation of JPEG 2000 Extensions
NASA Technical Reports Server (NTRS)
Rountree, Janet C.; Webb, Brian N.; Flohr, Thomas J.; Marcellin, Michael W.
2001-01-01
JPEG 2000 Part 2 (Extensions) contains a number of technologies that are of potential interest in remote sensing applications. These include arbitrary wavelet transforms, techniques to limit boundary artifacts in tiles, multiple component transforms, and trellis-coded quantization (TCQ). We are investigating the addition of these features to the low-memory (scan-based) implementation of JPEG 2000 Part 1. A scan-based implementation of TCQ has been realized and tested, with a very small performance loss as compared with the full image (frame-based) version. A proposed amendment to JPEG 2000 Part 2 will effect the syntax changes required to make scan-based TCQ compatible with the standard.
Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas
2008-01-01
Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.
Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas
2007-01-01
Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.
Wavelet-based compression of M-FISH images.
Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R
2005-05-01
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo
NASA Astrophysics Data System (ADS)
Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj
2007-09-01
This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work is an extension of our previous comparative studies published at earlier SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performance at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000 with its wavelet technology tends to be the best performer on smooth spatial data; H.264/AVC High Profile with advanced spatial prediction modes tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High-Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], we consider neither scalability nor complexity in this study (JPEG2000 is operated in its non-scalable, optimal-performance mode).
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected.
This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
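The (3M - 2) pass count stated above follows directly from the tier-1 structure and can be written down as a one-line check:

```python
def tier1_pass_count(num_bit_planes):
    """Tier-1 coding passes for a code block with M bit planes: the MSB
    plane takes a single clean-up pass, and each of the remaining M - 1
    planes takes three passes (significance propagation, magnitude
    refinement, clean-up), for 3M - 2 passes in total."""
    if num_bit_planes < 1:
        return 0
    return 1 + 3 * (num_bit_planes - 1)
```

For a typical 8-bit-plane code block this gives 22 coding passes, each a possible truncation point for the rate-control procedure.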
Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications
2013-06-01
Standards (http://www.iso.org) JIS Japanese Industrial Standard JPEG Joint Photographic Experts Group (digital image format; http://www.jpeg.org) LED...Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry. See Appendix A for full details. Reed-Solomon Error...eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thusly, digital cinema cameras are more suitable
Fu, C.Y.; Petrich, L.I.
1997-12-30
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
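The decimate / compress / decompress / interpolate / sharpen pipeline can be sketched with stand-ins for the codec stage. In this sketch the JPEG step is omitted entirely, and the unsharp mask is an illustrative choice of sharpening filter; the patent describes its own specific sharpening techniques, which are not reproduced here.

```python
import numpy as np

def decimate2(img):
    """Reduce both dimensions by 2 using 2x2 block averaging."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def interpolate2(img):
    """Nearest-neighbour interpolation back to twice the size."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def unsharp(img, amount=1.0):
    """Sharpen edges with an unsharp mask: add back the difference
    between the image and a 3x3 box blur of itself."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)
```

In the patented scheme the reduced image would be JPEG- or wavelet-compressed between `decimate2` and transmission; here the two halves simply illustrate the shrink-then-restore structure.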
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Liu, Ti C.; Mitra, Sunanda
1996-06-01
Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, an entropy and run-length encoder/decoder, and K-means clustering of invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high-compression-ratio region and that the reconstructed fingerprint images yield proper classification.
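The run-length stage of such a quantize/entropy-code pipeline is simple to sketch. This is a generic run-length coder, not the authors' exact encoder:

```python
def rle_encode(symbols):
    """Collapse a symbol sequence into (value, run_length) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([s, 1])  # start a new run
    return [(value, count) for value, count in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the symbol sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

Run-length coding pays off here because uniform scalar quantization of wavelet subbands produces long runs of zeros.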
Digital image modification detection using color information and its histograms.
Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na
2016-09-01
The rapid development of open-source and commercial image editing software makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve the robustness of these methods against the use of JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention to conceal tampering and reduce tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.
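The core signal such copy-move detectors look for can be sketched in a few lines: extract per-block colour moments over overlapping blocks and flag non-adjacent block pairs with near-identical features. Brute-force pairwise matching here stands in for the paper's clustering and neural-network stages, and the block size, step, and tolerance are illustrative values, not the paper's parameters.

```python
import numpy as np

def block_color_moments(img, block=8, step=4):
    """First two colour moments (per-channel mean and standard deviation)
    for overlapping blocks; returns ((row, col), feature_vector) pairs."""
    h, w, c = img.shape
    feats = []
    for r in range(0, h - block + 1, step):
        for col in range(0, w - block + 1, step):
            patch = img[r:r + block, col:col + block].reshape(-1, c)
            vec = np.concatenate([patch.mean(axis=0), patch.std(axis=0)])
            feats.append(((r, col), vec))
    return feats

def match_blocks(feats, tol=1e-6, min_dist=8):
    """Flag pairs of spatially separated blocks with near-identical
    features, the signature of a copied-and-moved region."""
    pairs = []
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            (r1, c1), v1 = feats[i]
            (r2, c2), v2 = feats[j]
            if abs(r1 - r2) + abs(c1 - c2) > min_dist and np.allclose(v1, v2, atol=tol):
                pairs.append(((r1, c1), (r2, c2)))
    return pairs
```

Clustering blocks by colour distribution first, as the paper does, shrinks the quadratic matching step to within-cluster comparisons.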
Parallel design of JPEG-LS encoder on graphics processing units
NASA Astrophysics Data System (ADS)
Duan, Hao; Fang, Yong; Huang, Bormin
2012-01-01
With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications to high-performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed sequentially. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use a block-parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks of 64×64 pixels each, we obtain the best GPU performance, with a 26.3x speedup over the original CPU code.
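The "parallel prefix sum" primitive mentioned above is commonly implemented on GPUs with the work-efficient Blelloch scan. The sketch below is a serial Python rendering of its two phases (up-sweep reduction, then down-sweep), purely to show the structure a CUDA kernel would parallelize; it is not the authors' kernel.

```python
import numpy as np

def blelloch_scan(x):
    """Exclusive prefix sum via the Blelloch up-sweep/down-sweep scheme.
    Each while-iteration corresponds to one parallel step on a GPU.
    Requires a power-of-two length for brevity."""
    a = np.array(x, dtype=np.int64)
    n = a.size
    assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
    # up-sweep (reduce): build partial sums in place
    d = 1
    while d < n:
        a[2 * d - 1::2 * d] += a[d - 1::2 * d]
        d *= 2
    # down-sweep: push prefixes back down the implicit tree
    a[-1] = 0
    d = n // 2
    while d >= 1:
        left = a[d - 1::2 * d].copy()
        a[d - 1::2 * d] = a[2 * d - 1::2 * d]
        a[2 * d - 1::2 * d] += left
        d //= 2
    return a
```

In a JPEG-LS encoder this kind of scan computes, for example, output offsets of independently encoded blocks so their bitstreams can be concatenated in parallel.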
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
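The "maximally flat perceptual error" objective can be caricatured in a toy form: give each block a multiplier that brings its error up to a common target, so no block wastes bits being better than the worst one. This uses a deliberately crude assumption that a block's error grows linearly with its multiplier; the paper's actual optimization uses a DCT error metric adjusted for contrast sensitivity, light adaptation, and masking.

```python
import numpy as np

def flat_error_multipliers(block_errors, target=None):
    """Per-block quantization-matrix multipliers that equalize perceptual
    error across blocks under a linear error-vs-multiplier assumption.
    Blocks already below the target error get a larger multiplier,
    i.e. coarser quantization and fewer bits."""
    errors = np.asarray(block_errors, dtype=float)
    if target is None:
        target = errors.max()  # flatten to the worst block's error
    return target / errors
```

Multiplying each block's base quantization matrix by its returned factor then yields (under the stated assumption) the same perceptual error everywhere.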
Halftoning processing on a JPEG-compressed image
NASA Astrophysics Data System (ADS)
Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent
2003-12-01
Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it de-noises the image and enhances its contours.
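The key property that makes screening tractable in the DCT domain is linearity: comparing a pixel to a halftone mask is a sign test on their difference, and that difference can be formed directly on DCT coefficients. The sketch below demonstrates this identity on a single 8x8 block; it is an illustration of the underlying principle, not the paper's algorithm, which works on quantized JPEG data.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix; row k is the k-th cosine basis function."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct2(block):
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def idct2(coeffs):
    c = dct_matrix(coeffs.shape[0])
    return c.T @ coeffs @ c

def screen_spatial(img, mask):
    """Classical screening: binarize the image against a halftone mask."""
    return (img > mask).astype(np.uint8) * 255

def screen_via_dct(img_dct, mask):
    """Screening driven from DCT-domain data: since the DCT is linear,
    img > mask is equivalent to idct(dct(img) - dct(mask)) > 0."""
    return (idct2(img_dct - dct2(mask)) > 0).astype(np.uint8) * 255
```

Because the mask's DCT can be precomputed once per screen, the comparison never requires reconstructing the original image separately from the mask.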
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2011-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such a staggering data volume, the data are accessible only from a few repositories, and users have to deal with data sets that are effectively immobile and impractical to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.
Report about the Solar Eclipse on August 11, 1999
NASA Astrophysics Data System (ADS)
1999-08-01
This webpage provides information about the total eclipse on Wednesday, August 11, 1999, as it was seen by ESO staff, mostly at or near the ESO Headquarters in Garching (Bavaria, Germany). The zone of totality was about 108 km wide and the ESO HQ was located only 8 km south of the line of maximum totality. The duration of the phase of totality was about 2 min 17 sec. The weather was quite troublesome in this geographical area. Heavy clouds moved across the sky during the entire event, but there were also some holes in between. Consequently, sites that were only a few kilometres from each other had very different viewing conditions. Some photos and spectra of the eclipsed Sun are displayed below, with short texts about the circumstances under which they were made. Please note that reproduction of pictures on this webpage is only permitted if the author is mentioned as the source. Information made available before the eclipse is available here. Eclipse Impressions at the ESO HQ Photo by Eddy Pomaroli Preparing for the Eclipse Photo: Eddy Pomaroli [JPEG: 400 x 239 pix - 116k] [JPEG: 800 x 477 pix - 481k] [JPEG: 3000 x 1789 pix - 3.9M] Photo by Eddy Pomaroli During the 1st Partial Phase Photo: Eddy Pomaroli [JPEG: 400 x 275 pix - 135k] [JPEG: 800 x 549 pix - 434k] [JPEG: 2908 x 1997 pix - 5.9M] Photo by Hamid Mehrgan Heavy Clouds Above Digital Photo: Hamid Mehrgan [JPEG: 400 x 320 pix - 140k] [JPEG: 800 x 640 pix - 540k] [JPEG: 1280 x 1024 pix - 631k] Photo by Olaf Iwert Totality Approaching Digital Photo: Olaf Iwert [JPEG: 400 x 320 pix - 149k] [JPEG: 800 x 640 pix - 380k] [JPEG: 1280 x 1024 pix - 536k] Photo by Olaf Iwert Beginning of Totality Digital Photo: Olaf Iwert [JPEG: 400 x 236 pix - 86k] [JPEG: 800 x 471 pix - 184k] [JPEG: 1280 x 753 pix - 217k] Photo by Olaf Iwert A Happy Eclipse Watcher Digital Photo: Olaf Iwert [JPEG: 400 x 311 pix - 144k] [JPEG: 800 x 622 pix - 333k] [JPEG: 1280 x 995 pix - 644k] ESO HQ Eclipse Video Clip [MPEG-version] ESO HQ Eclipse Video
Clip (2425 frames/01:37 min) [MPEG Video; 160x120 pix; 2.2M] [MPEG Video; 320x240 pix; 4.4Mb] [RealMedia; streaming; 33kps] [RealMedia; streaming; 200kps] This Video Clip was prepared from a "reportage" of the event at the ESO HQ that was transmitted in real-time to ESO-Chile via ESO's satellite link. It begins with some sequences of the first partial phase and the eclipse watchers. Clouds move over and the landscape darkens as the phase of totality approaches. The Sun is again visible at the very moment this phase ends. Some further sequences from the second partial phase follow. Produced by Herbert Zodet. Dire Forecasts The weather predictions in the days before the eclipse were not good for Munich and surroundings. A heavy front with rain and thick clouds that completely covered the sky moved across Bavaria the day before and the meteorologists predicted a 20% chance of seeing anything at all. On August 10, it seemed that the chances were best in France and in the western parts of Germany, and much less close to the Alps. This changed to the opposite during the night before the eclipse. Now the main concern in Munich was a weather front approaching from the west - would it reach this area before the eclipse? The better chances were then further east, nearer the Austrian border. Many people travelled back and forth along the German highways, many of which quickly became heavily congested. Preparations About 500 persons, mostly ESO staff with their families and friends, were present at the ESO HQ in the morning of August 11. Prior to the eclipse, they received information about the various aspects of solar eclipses and about the specific conditions of this one in the auditorium. Protective glasses were handed out and it was the idea that they would then follow the eclipse from outside. 
In view of the pessimistic weather forecasts, TV sets had been set up in two large rooms, but in the end most chose to watch the eclipse from the terrace in front of the cafeteria and from the area south of the building. Several telescopes were set up among the trees and on the adjoining field (just harvested). Clouds and Holes It was an unusual solar eclipse experience. Heavy clouds were passing by with sudden rain showers, but fortunately there were also some holes with blue sky in between. While much of the first partial phase was visible through these, some really heavy clouds moved in a few minutes before the total phase, when the light had begun to fade. They drifted slowly - too slowly! - towards the east and the corona was never seen from the ESO HQ site. From here, the view towards the eclipsed Sun only cleared at the very instant of the second "diamond ring" phenomenon. This was beautiful, however, and evidently took most of the photographers by surprise, so very few, if any, photos were made of this memorable moment. Temperature Curve by Benoit Pirenne Temperature Curve on August 11 [JPEG: 646 x 395 pix - 35k] Measured by Benoit Pirenne - see also his meteorological webpage Nevertheless, the entire experience was fantastic - there were all the expected effects, the darkness, the cool air, the wind and the silence. It was very impressive indeed! And it was certainly a unique day in ESO history! Carolyn Collins Petersen from "Sky & Telescope" participated in the conference at ESO in the days before and watched the eclipse from the "Bürgerplatz" in Garching, about 1.5 km south of the ESO HQ. She managed to see part of the totality phase and filed some dramatic reports at the S&T Eclipse Expedition website. They describe very well the feelings of those in this area! Eclipse Photos Several members of the ESO staff went elsewhere and had more luck with the weather, especially at the moment of totality. Below are some of their impressive pictures.
Eclipse Photo by Philippe Duhoux First "Diamond Ring" [JPEG: 400 x 292 pix - 34k] [JPEG: 800 x 583 pix - 144k] [JPEG: 2531 x 1846 pix - 1.3M] Eclipse Photo by Philippe Duhoux Totality [JPEG: 400 x 306 pix - 49k] [JPEG: 800 x 612 pix - 262k] [JPEG: 3039 x 1846 pix - 3.6M] Eclipse Photo by Philippe Duhoux Second "Diamond Ring" [JPEG: 400 x 301 pix - 34k] [JPEG: 800 x 601 pix - 163k] [JPEG: 2905 x 2181 pix - 2.0M] The Corona (Philippe Duhoux) "For the observation of the eclipse, I chose a field on a hill offering a wide view towards the western horizon and located about 10 kilometers north west of Garching." "While the partial phase was mostly cloudy, the sky went clear 3 minutes before the totality and remained so for about 15 minutes. Enough to enjoy the event!" "The images were taken on Agfa CT100 colour slide film with an Olympus OM-20 at the focus of a Maksutov telescope (f = 1000 mm, f/D = 10). The exposure times were automatically set by the camera. During the partial phase, I used an off-axis mask of 40 mm diameter with a mylar filter ND = 3.6, which I removed for the diamond rings and the corona." Note in particular the strong, detached protuberances to the right of the rim, particularly noticeable in the last photo. Eclipse Photo by Cyril Cavadore Totality [JPEG: 400 x 360 pix - 45k] [JPEG: 800 x 719 pix - 144k] [JPEG: 908 x 816 pix - 207k] The Corona (Cyril Cavadore) "We (C.Cavadore from ESO and L. Bernasconi and B. Gaillard from Obs. de la Cote d'Azur) took this photo in France at Vouzier (Champagne-Ardennes), between Reims and Nancy. A large blue opening developed in the sky at 10 o'clock and we decided to set up the telescope and the camera at that time. During the partial phase, a lot of clouds passed over, making it hard to focus properly. Nevertheless, 5 min before totality, a deep blue sky opened above us, allowing us to watch it and to take this picture. 5-10 Minutes after the totality, the sky was almost overcast up to the 4th contact". 
"The image was taken with a 2x2K (14 µm pixels) Thomson "homemade" CCD camera mounted on a CN212 Takahashi (200 mm diameter telescope) with a 1/10.000 neutral filter. The acquisition software set the exposure time (2 sec) and took images in a completely automated way, allowing us to observe the eclipse by naked eye or with binoculars. To get as many images as possible during totality, we used 2x2 binning to reduce the readout time to 19 sec. Afterwards, one of the best images was flat-fielded and processed with a special algorithm that fitted a model to the continuous component of the corona and subtracted it from the original image. The remaining details were enhanced by unsharp masking and added to the original image. Finally, Gaussian histogram equalization was applied". Eclipse Photo by Eddy Pomaroli Second "Diamond Ring" [JPEG: 400 x 438 pix - 129k] [JPEG: 731 x 800 pix - 277k] [JPEG: 1940 x 2123 pix - 2.3M] Diamond Ring at ESO HQ (Eddy Pomaroli) "Despite the clouds, we saw the second "diamond ring" from the ESO HQ. In a sense, we were quite lucky, since the clouds were very heavy during the total phase and we might easily have missed it all!". "I used an old Minolta SRT-101 camera and a teleobjective (450 mm; f/8). The exposure was 1/125 sec on Kodak Elite 100 (pushed to 200 ASA). I had the feeling that the Sun would become visible and had the camera pointed, by good luck in the correct direction, as soon as the cloud moved away". Eclipse Photo by Roland Reiss First Partial Phase [JPEG: 400 x 330 pix - 94k] [JPEG: 800 x 660 pix - 492k] [JPEG: 3000 x 2475 pix - 4.5M] End of First Partial Phase (Roland Reiss) "I observed the eclipse from my home in Garching. The clouds kept moving and this was the last photo I was able to obtain during the first partial phase, before they blocked everything". "The photo is interesting, because it shows two more images of the eclipsed Sun, below the overexposed central part.
In one of them, the remaining, narrow crescent is particularly well visible. They are caused by reflections in the camera. I used a Minolta camera and a Fuji colour slide film". Eclipse Spectra Some ESO people went a step further and obtained spectra of the Sun at the time of the eclipse. Eclipse Spectrum by Roland Reiss Coronal Spectrum [JPEG: 400 x 273 pix - 94k] [JPEG: 800 x 546 pix - 492k] [JPEG: 3000 x 2046 pix - 4.5M] Coronal Spectrum (CAOS Group) The Club of Amateurs in Optical Spectroscopy (with Carlos Guirao Sanchez, Gerardo Avila and Jesus Rodriguez) obtained a spectrum of the solar corona from a site in Garching, about 2 km south of the ESO HQ. "This is a plot of the spectrum and the corresponding CCD image that we took during the total eclipse. The main coronal lines are well visible and have been identified in the figure. Note in particular one at 6374 Angstrom that was first ascribed to the mysterious substance "Coronium". We now know that it is emitted by iron atoms that have lost nine electrons (Fe X)". The equipment was: * Telescope: Schmidt Cassegrain F/6.3; Diameter: 250 mm * FIASCO Spectrograph: Fibre: 135 micron core diameter F = 100 mm collimator, f = 80 mm camera; Grating: 1300 gr/mm blazed at 500 nm; SBIG ST8E CCD camera; Exposure time was 20 sec. Eclipse Spectrum by Bob Fosbury Chromospheric Spectrum [JPEG: 120 x 549 pix - 20k] Chromospheric and Coronal Spectra (Bob Fosbury) "The 11 August 1999 total solar eclipse was seen from a small farm complex called Wolfersberg in open fields some 20km ESE of the centre of Munich. It was chosen to be within the 2min band of totality but likely to be relatively unpopulated". "There were intermittent views of the Sun between first and second contact with quite a heavy rainshower which stopped 9min before totality. A large clear patch of sky revealed a perfect view of the Sun just 2min before second contact and it remained clear for at least half an hour after third contact". 
"The principal project was to photograph the spectrum of the chromosphere during totality using a transmission grating in front of a moderate telephoto lens. The desire to do this was stimulated by a view of the 1976 eclipse in Australia when I held the same grating up to the eclipsed Sun and was thrilled by the view of the emission line spectrum. The trick now was to get the exposure right!". "A sequence of 13 H-alpha images was combined into a looping movie. The exposure times were different, but some attempt has been made to equalise the intensities. The last two frames show the low chromosphere and then the photosphere emerging at 3rd contact. The [FeX] coronal line can be seen on the left in the middle of the sequence. I used a Hasselblad camera and Agfa slide film (RSX II 100)".
NASA Astrophysics Data System (ADS)
Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.
2002-05-01
A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored in either: (a) a pre-rendered format, corresponding to a photographic print, or (b) an un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and the associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depend on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end-user applications, such as simple report text viewing and display of a selected image, are not so demanding, and generic image formats such as JPEG are sometimes used. However, these lack some basic identification capabilities. In this paper we make specific proposals for minimal extensions to generic image metadata, of value in various domains, which enable safe use in two simple healthcare end-user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink, and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).
Diagnostic accuracy of chest X-rays acquired using a digital camera for low-cost teleradiology.
Szot, Agnieszka; Jacobson, Francine L; Munn, Samson; Jazayeri, Darius; Nardell, Edward; Harrison, David; Drosten, Ralph; Ohno-Machado, Lucila; Smeaton, Laura M; Fraser, Hamish S F
2004-02-01
Store-and-forward telemedicine, using e-mail to send clinical data and digital images, offers a low-cost alternative for physicians in developing countries to obtain second opinions from specialists. To explore the potential usefulness of this technique, 91 chest X-ray images were photographed using a digital camera and a view box. Four independent readers (three radiologists and one pulmonologist) read two types of digital (JPEG and JPEG2000) and original film images and indicated their confidence in the presence of eight features known to be radiological indicators of tuberculosis (TB). The results were compared to a "gold standard" established by two different radiologists, and assessed using receiver operating characteristic (ROC) curve analysis. There was no statistical difference in the overall performance between the readings from the original films and both types of digital images. The size of JPEG2000 images was approximately 120KB, making this technique feasible for slow internet connections. Our preliminary results show the potential usefulness of this technique particularly for tuberculosis and lung disease, but further studies are required to refine its potential.
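The ROC analysis used to compare film and digital readings rests on a standard quantity, the area under the ROC curve. A minimal sketch of the metric (not the study's statistical software) uses the rank-sum identity: the AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counting half.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity.
    `labels` are 1 (condition present) or 0 (absent); `scores` are the
    reader's confidence ratings."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    total = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                total += 1.0
            elif p == n:
                total += 0.5  # ties count half
    return total / (len(pos) * len(neg))
```

An AUC of 1.0 means perfect separation of positive and negative cases; 0.5 means the ratings carry no diagnostic information.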
NASA Astrophysics Data System (ADS)
Osada, Masakazu; Tsukui, Hideki
2002-09-01
Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film-handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging, because these modalities produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 in resolution, with 24-bit color, and they require sufficient image quality for clinical review. We have developed a PACS that is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and we investigate a suitable compression method and compression rate for clinical image review. Results show that clinicians require the capability for frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in only one frame. To satisfy this requirement, we chose motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between the original and 1:20 lossy compressed JPEG images, although the latter were still judged acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data volume and cost while maintaining sufficient quality for clinical review.
A JPEG backward-compatible HDR image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2012-10-01
High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread adoption of HDR, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.
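One common choice among the many tone-mapping operators discussed above is the global Reinhard operator, sketched below. This is one example operator, not the one the paper endorses (the paper's point is precisely that no single operator wins in all conditions), and the key value `a` is a conventional default, not a parameter from the paper.

```python
import numpy as np

def reinhard_tonemap(hdr, a=0.18, eps=1e-6):
    """Global Reinhard tone mapping: scale luminance by the key value
    relative to the log-average, then compress with x / (1 + x),
    mapping unbounded HDR luminance into [0, 1)."""
    lum = np.asarray(hdr, dtype=np.float64)
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # geometric mean luminance
    scaled = a * lum / log_avg
    return scaled / (1.0 + scaled)
```

In a JPEG backward-compatible scheme, an LDR image produced this way would occupy the legacy-visible layer, with residual data for HDR reconstruction carried alongside it.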
Design of a motion JPEG (M/JPEG) adapter card
NASA Astrophysics Data System (ADS)
Lee, D. H.; Sudharsanan, Subramania I.
1994-05-01
In this paper we describe the design of a high-performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high-quality digital video. The card accepts digital pixels from either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube CL550 JPEG processor that complies with baseline JPEG. We give broad descriptions of the hardware that controls the video interface, the CL550, and the system interface, and point out some critical design points that enhance the overall performance of M/JPEG systems. The adapter card is controlled by interrupt-driven software running under DOS. The software performs a variety of tasks, including change of color space (RGB or YUV), change of quantization and Huffman tables, odd and even field control, and some diagnostic operations.
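The YUV 4:2:2 pixel bus mentioned above implies a colour conversion and chroma subsampling of the kind sketched here. The BT.601 full-range coefficients are the conventional choice for JPEG-era hardware; the adapter's exact coefficients and rounding are not given in the abstract.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr conversion (chroma offset by 128)."""
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.168736, -0.331264, 0.5],
                  [0.5, -0.418688, -0.081312]])
    ycc = np.asarray(rgb, dtype=np.float64) @ m.T
    ycc[..., 1:] += 128.0
    return ycc

def subsample_422(ycc):
    """4:2:2 chroma subsampling: full-resolution luma, each horizontal
    pair of chroma samples averaged into one (width must be even)."""
    y = ycc[..., 0]
    cb = ycc[..., 1].reshape(ycc.shape[0], -1, 2).mean(axis=2)
    cr = ycc[..., 2].reshape(ycc.shape[0], -1, 2).mean(axis=2)
    return y, cb, cr
```

4:2:2 halves the chroma data rate with little visible loss, which is why a hardware pixel bus offers it alongside full 4:4:4 RGB.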
Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.
Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella
2010-07-01
Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them depends only on their number and not on the particular subset received. In this paper, we propose a method based on the principle of encoding the source at several rates and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation and easy adaptation to different network scenarios by fine-tuning the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method compares favorably with state-of-the-art MDC techniques.
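The blending principle can be illustrated with a toy sketch, using a scalar quantizer step as a stand-in for the encoding rate (the actual method operates on JPEG 2000 / H.264 codestreams; the `fine` and `coarse` steps here are illustrative parameters):

```python
def quantize(block, step):
    """Coarse scalar quantization as a stand-in for encoding at a given rate."""
    return [round(x / step) * step for x in block]

def make_descriptions(blocks, fine=1, coarse=8):
    """Blend fine- and coarse-rate encodings into two balanced descriptions:
    each description holds the fine version of half the blocks and the coarse
    version of the other half, so quality depends only on how many
    descriptions arrive, not on which ones."""
    d0, d1 = [], []
    for i, b in enumerate(blocks):
        hi, lo = quantize(b, fine), quantize(b, coarse)
        if i % 2 == 0:
            d0.append(hi); d1.append(lo)
        else:
            d0.append(lo); d1.append(hi)
    return d0, d1
```

Decoding a single description yields alternating fine/coarse blocks; decoding both lets the receiver keep the fine version of every block.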
A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard
NASA Astrophysics Data System (ADS)
Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid
2005-07-01
The Discrete Wavelet Transform (DWT) is increasingly adopted in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme is an alternative DWT implementation with lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper, a high-throughput, two-channel DWT architecture for both JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process incoming samples simultaneously with minimum memory requirements per channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. It applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirements make this architecture a proper choice for real-time applications such as Digital Cinema.
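For reference, the reversible 5/3 filter bank mentioned above consists of exactly two lifting steps (predict, then update). A minimal 1-D sketch, assuming an even-length input and symmetric boundary extension, with integer arithmetic so the transform is exactly invertible:

```python
def dwt53_1d(x):
    """One level of the reversible 5/3 lifting transform (1-D, even length).
    Predict step builds the highpass, update step builds the lowpass."""
    n = len(x); h = n // 2
    def xe(i):  # symmetric extension of x past its last sample
        return x[2 * (n - 1) - i] if i >= n else x[i]
    # Predict: odd samples minus the floor-average of their even neighbours.
    high = [xe(2*i+1) - ((xe(2*i) + xe(2*i+2)) >> 1) for i in range(h)]
    # Update: even samples plus a rounded quarter-sum of adjacent details.
    low = [x[2*i] + (((high[i-1] if i > 0 else high[0]) + high[i] + 2) >> 2)
           for i in range(h)]
    return low, high

def idwt53_1d(low, high):
    """Exact inverse: undo the update step, then the predict step."""
    h = len(low)
    even = [low[i] - (((high[i-1] if i > 0 else high[0]) + high[i] + 2) >> 2)
            for i in range(h)]
    odd = [high[i] + ((even[i] + (even[i+1] if i+1 < h else even[h-1])) >> 1)
           for i in range(h)]
    out = [0] * (2 * h)
    out[0::2], out[1::2] = even, odd
    return out
```

Exact reversibility is what makes the 5/3 filter usable for lossless coding; a hardware pipeline of the kind described above implements these same two lifting steps per channel.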
NASA Astrophysics Data System (ADS)
Joshi, Rajan L.
2006-03-01
In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to view a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and the JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol to obtain JPEG2000 compressed data from the server on an as-needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only the relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
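The precinct-selection idea can be sketched as follows. This is an illustration only: real JPEG2000 precincts are defined per resolution level and component, and JPIP has its own request syntax; the fixed 64-pixel precinct size and the (z, y, x) voxel layout are assumptions made for the sketch:

```python
def needed_precincts(slice_voxels, p=64):
    """Map each voxel (z, y, x) touched by an oblique slice to the precinct
    holding its compressed data, so a client can request only those
    precincts. Assumed layout: one codestream per z-slice, p-by-p precincts."""
    return sorted({(z, y // p, x // p) for (z, y, x) in slice_voxels})
```

A client-side cache would simply subtract already-received precinct indices from this set before issuing the next request, which is the bandwidth saving the abstract quantifies.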
A new JPEG-based steganographic algorithm for mobile devices
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.
2006-05-01
Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates an improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity comparable to F5 for certain cover images.
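As a generic illustration of hiding data in quantized DCT coefficients (this is not the paper's switching technique; it is a simple LSB scheme that, like F5-style methods, leaves 0 and ±1 untouched so that the runs of zeros the entropy coder depends on survive):

```python
def embed_bits(coeffs, bits):
    """Hide bits in the magnitude LSBs of quantized DCT coefficients with
    |c| > 1. Zeros and ±1 are skipped so no zeros are created or destroyed."""
    out = list(coeffs)
    it = iter(bits)
    for i, c in enumerate(out):
        if abs(c) <= 1:
            continue                    # keep zeros/±1: they carry run-length info
        b = next(it, None)
        if b is None:
            break                       # payload exhausted
        mag = (abs(c) & ~1) | b         # overwrite the magnitude LSB
        out[i] = mag if c > 0 else -mag
    return out

def extract_bits(coeffs, n):
    """Recover the first n payload bits from the usable coefficients."""
    return [abs(c) & 1 for c in coeffs if abs(c) > 1][:n]
```

The paper's contribution lies in *where* and *when* to embed (switching by energy variation and block characteristics); the mechanics of modifying a quantized coefficient are as above.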
Steganographic embedding in containers-images
NASA Astrophysics Data System (ADS)
Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.
2018-05-01
Steganography is one of the approaches to ensuring the protection of information transmitted over the network, but a steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method of data embedding into the frequency domain of images in the JPEG 2000 format. It is proposed to use the method of Benham-Memon-Yeo-Yeung, in which the discrete wavelet transform is used instead of the discrete cosine transform. Two requirements for images are formulated. Structural similarity is chosen as the quality assessment of data embedding. Experiments confirm that satisfying the requirements allows achieving a high quality assessment of data embedding.
Region of interest and windowing-based progressive medical image delivery using JPEG2000
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Mukhopadhyay, Sudipta; Wheeler, Frederick W.; Avila, Ricardo S.
2003-05-01
An important telemedicine application is the perusal of CT scans (in digital format) from a central server housed in a healthcare enterprise across a bandwidth-constrained network by radiologists situated at remote locations for medical diagnostic purposes. It is generally expected that a viewing station respond to an image request by displaying the image within 1-2 seconds. Owing to limited bandwidth, it may not be possible to deliver the complete image in such a short period of time with traditional techniques. In this paper, we investigate progressive image delivery solutions using JPEG 2000. An estimate of the time taken at different network bandwidths is performed to compare their relative merits. We further make use of the fact that most medical images are 12-16 bits, but are ultimately converted to an 8-bit image via windowing for display on the monitor. We propose a windowing progressive RoI technique to exploit this and investigate JPEG 2000 RoI-based compression after applying a favorite or default window setting to the original image. Subsequent requests for different RoIs and window settings would then be processed at the server. For the windowing progressive RoI mode, we report a 50% reduction in transmission time.
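The windowing step the proposed technique exploits is the standard linear window/level mapping from high-bit-depth CT values to 8-bit display values; a minimal sketch (the window center and width are caller-supplied, e.g. a soft-tissue window):

```python
def apply_window(pixels, center, width):
    """Map high-bit-depth values to 8-bit display values: a linear ramp
    across [center - width/2, center + width/2], clipped at 0 and 255."""
    lo = center - width / 2
    hi = center + width / 2
    out = []
    for v in pixels:
        if v <= lo:
            out.append(0)
        elif v >= hi:
            out.append(255)
        else:
            out.append(round((v - lo) / width * 255))
    return out
```

Because everything outside the window collapses to 0 or 255, an image compressed *after* windowing carries far less information than the full 12-16-bit original, which is the source of the reported transmission savings.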
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
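The distortion metric behind the 2-dB claim is PSNR, computed from the mean squared error against the peak signal value; a minimal implementation:

```python
import math

def psnr(original, degraded, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences; identical inputs give infinite PSNR (zero error)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

A model that predicts PSNR within 2 dB therefore predicts MSE within a factor of about 1.6, which is tight enough to drive power-allocation decisions like the one the paper demonstrates.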
Improved photo response non-uniformity (PRNU) based source camera identification.
Cooper, Alan J
2013-03-10
The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach.
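The estimation pipeline can be caricatured in one dimension: filter the image, take the residual as the PRNU estimate, and match it against a reference fingerprint by normalized correlation. This is a drastic simplification of the paper's method (whose filter is two-dimensional and adaptive, followed by a two-stage pixel-selection enhancement):

```python
def median3(x):
    """3-tap median filter with edge replication, standing in for the
    adaptive/median spatial filtering stage described above."""
    pad = [x[0]] + list(x) + [x[-1]]
    return [sorted(pad[i:i + 3])[1] for i in range(len(x))]

def residual(image):
    """Noise residual = image minus its filtered version (the PRNU estimate)."""
    return [p - m for p, m in zip(image, median3(image))]

def ncc(a, b):
    """Normalized correlation used to decide whether a residual matches a
    reference camera fingerprint."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da, db = [u - ma for u in a], [v - mb for v in b]
    num = sum(u * v for u, v in zip(da, db))
    den = (sum(u * u for u in da) * sum(v * v for v in db)) ** 0.5
    return num / den
```

A match decision is then a threshold on `ncc`; the paper's pixel-selection stage improves the separation between the matching and non-matching correlation distributions.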
An Efficient Image Compressor for Charge Coupled Devices Camera
Li, Jin; Xing, Fei; You, Zheng
2014-01-01
Recently, discrete wavelet transform (DWT) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain much complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity post-transform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a post-transform with a pair of bases is applied to the DWT coefficients. The paired bases are the DCT basis and the Hadamard basis, used at high and low bit rates, respectively. The best post-transform is selected by an l_p-norm-based approach. The post-transform is considered as the sparse representation stage of CS. The post-transform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
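The l_p-norm selection rule can be sketched with orthonormal 4-point transforms: apply both candidate bases and keep the one whose coefficient vector has the smaller norm, i.e. the sparser representation. The 4-point size and p = 1 are illustrative choices; the actual scheme operates on blocks of DWT coefficients:

```python
import math

def dct4(x):
    """Orthonormal 4-point DCT-II."""
    out = []
    for k in range(4):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / 4) for n in range(4))
        out.append(s * (0.5 if k == 0 else math.sqrt(0.5)))
    return out

def had4(x):
    """Orthonormal 4-point Hadamard transform."""
    H = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
    return [sum(h * v for h, v in zip(row, x)) / 2 for row in H]

def pick_basis(x, p=1):
    """Toy l_p-norm rule: keep the post-transform giving the sparser output.
    (Orthonormal scaling makes the norms of the two bases comparable.)"""
    cands = {'dct': dct4(x), 'hadamard': had4(x)}
    return min(cands, key=lambda k: sum(abs(c) ** p for c in cands[k]))
```

Smooth blocks favour the DCT, while square-wave-like blocks favour the Hadamard basis, matching the high/low bit-rate pairing described above.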
A new security solution to JPEG using hyper-chaotic system and modified zigzag scan coding
NASA Astrophysics Data System (ADS)
Ji, Xiao-yong; Bai, Sen; Guo, Yu; Guo, Hui
2015-05-01
Though JPEG is an excellent image compression standard, it does not provide any security. Thus, a security solution for JPEG was proposed in Zhang et al. (2014). But there are some flaws in Zhang's scheme, and in this paper we propose a new scheme based on a discrete hyper-chaotic system and modified zigzag scan coding. By shuffling the identifiers of the zigzag-scan-encoded sequence with a hyper-chaotic sequence, and selectively encrypting those coefficients in the zigzag-scan-encoded domain that have little relationship with the correlation of the plain image, we achieve high compression performance and robust security simultaneously. Meanwhile, we present and analyze the flaws in Zhang's scheme through theoretical analysis and experimental verification, and give comparisons between our scheme and Zhang's. Simulation results verify that our method has better performance in security and efficiency.
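A sketch of the keyed shuffling step, using a plain logistic map as a stand-in for the paper's discrete hyper-chaotic system (sorting a chaotic orbit to obtain a permutation is a standard construction; `x0` and `r` play the role of the key):

```python
def logistic_perm(n, x0=0.3737, r=3.99):
    """Key-dependent permutation of n identifiers: iterate the logistic map
    and sort the orbit; the argsort is the permutation."""
    x, keyed = x0, []
    for i in range(n):
        x = r * x * (1 - x)
        keyed.append((x, i))
    return [i for _, i in sorted(keyed)]

def shuffle(seq, perm):
    """Apply the permutation to a sequence of identifiers."""
    return [seq[p] for p in perm]

def unshuffle(seq, perm):
    """Invert the permutation (what the authorized decoder does)."""
    out = [None] * len(seq)
    for dst, src in enumerate(perm):
        out[src] = seq[dst]
    return out
```

Because only identifiers are permuted and only weakly image-correlated coefficients are encrypted, the entropy-coded statistics that JPEG's compression relies on are largely preserved, which is how the scheme keeps compression performance.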
Compression of electromyographic signals using image compression techniques.
Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira
2008-01-01
Despite the growing interest in the transmission and storage of electromyographic (EMG) signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals from both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75% to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75% to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those of other algorithms based on the wavelet transform.
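Applying a still-image codec to a 1-D signal requires arranging the samples into a matrix first; a sketch of that step and of the PRD distortion measure quoted above (the column count is an illustrative parameter, and zero-padding is one of several possible boundary choices):

```python
def signal_to_matrix(signal, ncols):
    """Arrange a 1-D EMG signal into a 2-D row-major matrix (zero-padded to a
    whole number of rows) so a still-image codec such as JPEG2000 can
    compress it."""
    pad = (-len(signal)) % ncols
    s = list(signal) + [0] * pad
    return [s[i:i + ncols] for i in range(0, len(s), ncols)]

def prd(original, reconstructed):
    """Percentage root-mean-square difference, the distortion measure used
    in the compression results above."""
    num = sum((a - b) ** 2 for a, b in zip(original, reconstructed))
    den = sum(a ** 2 for a in original)
    return 100 * (num / den) ** 0.5
```

The 2-D arrangement matters because JPEG2000 then exploits correlation both along a row (successive samples) and between rows (successive segments).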
High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS
NASA Astrophysics Data System (ADS)
Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian
2017-09-01
At its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding, designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, the authors describe the design principles of the codec, provide a high-level overview of the encoder and decoder chains, and present evaluation results on the test corpus selected by the JPEG committee.
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and decompression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and the quality of the decompressed image. The latter is determined from rate-distortion data obtained on a database of realistic test images. The discussion also includes issues such as the robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images
NASA Astrophysics Data System (ADS)
Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo
2007-03-01
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, then used the CAD system to classify the cases at each compression ratio, and compared the ROC curves obtained from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases with increasing compression ratio, with small fluctuations.
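The AUC values compared across compression ratios can be computed directly from classifier scores via the Mann-Whitney statistic, without constructing the ROC curve explicitly:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the Mann-Whitney U statistic: the
    probability that a malignant case scores above a benign one, counting
    ties as 1/2."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

Running this on the CAD scores at each compression ratio yields the AUC-versus-ratio curve whose decline the study reports.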
A software platform for the analysis of dermatology images
NASA Astrophysics Data System (ADS)
Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon
2017-11-01
The purpose of this paper is to present a software platform, developed in the Python programming environment, that can be used for the processing and analysis of dermatology images. The platform provides the capability of reading a file that contains a dermatology image and supports image formats such as Windows bitmap, JPEG, JPEG 2000, portable network graphics (PNG), and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images from other anatomical parts, such as breast or lung, after proper re-training of the classification algorithms.
Hunting the Southern Skies with SIMBA
NASA Astrophysics Data System (ADS)
2001-08-01
First Images from the New "Millimetre Camera" on SEST at La Silla Summary A new instrument, SIMBA ("SEST IMaging Bolometer Array"), has been installed at the Swedish-ESO Submillimetre Telescope (SEST) at the ESO La Silla Observatory in July 2001. It records astronomical images at a wavelength of 1.2 mm and is able to quickly map large sky areas. In order to achieve the best possible sensitivity, SIMBA is cooled to only 0.3 degrees above absolute zero. SIMBA is the first imaging millimetre instrument in the southern hemisphere. Radiation at this wavelength is mostly emitted from cold dust and ionized gas in a variety of objects in the Universe. Among others, SIMBA now opens exciting prospects for in-depth studies of the "hidden" sites of star formation, deep inside dense interstellar nebulae. While such clouds are impenetrable to optical light, they are transparent to millimetre radiation and SIMBA can therefore observe the associated phenomena, in particular the dust around nascent stars. This sophisticated instrument can also search for disks of cold dust around nearby stars in which planets are being formed or which may be left-overs of this basic process. Equally important, SIMBA may observe extremely distant galaxies in the early universe, recording them while they were still in the formation stage. Various SIMBA images have been obtained during the first tests of the new instrument. The first observations confirm the great promise for unique astronomical studies of the southern sky in the millimetre wavelength region. These results also pave the way towards the Atacama Large Millimeter Array (ALMA), the giant, joint research project that is now under study in Europe, the USA and Japan.
PR Photo 28a/01 : SIMBA image centered on the infrared source IRAS 17175-3544 PR Photo 28b/01 : SIMBA image centered on the infrared source IRAS 18434-0242 PR Photo 28c/01 : SIMBA image centered on the infrared source IRAS 17271-3439 PR Photo 28d/01 : View of the SIMBA instrument First observations with SIMBA SIMBA ("SEST IMaging Bolometer Array") was built and installed at the Swedish-ESO Submillimetre Telescope (SEST) at La Silla (Chile) within an international collaboration between the University of Bochum and the Max Planck Institute for Radio Astronomy in Germany, the Swedish National Facility for Radio Astronomy and ESO . The SIMBA ("Lion" in Swahili) instrument detects radiation at a wavelength of 1.2 mm . It has 37 "horns" and acts like a camera with 37 picture elements (pixels). By changing the pointing direction of the telescope, relatively large sky fields can be imaged. As the first and only imaging millimetre instrument in the southern hemisphere , SIMBA now looks up towards rich and virgin hunting grounds in the sky. Observations at millimetre wavelengths are particularly useful for studies of star formation , deep inside dense interstellar clouds that are impenetrable to optical light. Other objects for which SIMBA is especially suited include planet-forming disks of cold dust around nearby stars and extremely distant galaxies in the early universe , still in the stage of formation. During the first observations, SIMBA was used to study the gas and dust content of star-forming regions in our own Milky Way Galaxy, as well as in the Magellanic Clouds and more distant galaxies. It was also used to record emission from planetary nebulae , clouds of matter ejected by dying stars. Moreover, attempts were made to detect distant galaxies and quasars radiating at mm-wavelengths and located in two well-studied sky fields, the "Hubble Deep Field South" and the "Chandra Deep Field" [1]. 
Observations with SEST and SIMBA also serve to identify objects that can be observed at higher resolution and at shorter wavelengths with future southern submm telescopes and interferometers such as APEX (see MPG Press Release 07/01 of 6 July 2001) and ALMA. SIMBA images regions of high-mass star formation. ESO PR Photo 28a/01 Caption: This intensity-coded, false-colour SIMBA image is centered on the infrared source IRAS 17175-3544 and covers the well-known high-mass star formation complex NGC 6334, at a distance of 5500 light-years. The southern bright source is an ultra-compact region of ionized hydrogen ("HII region") created by a star or several stars already formed. The northern bright source has not yet developed an HII region and may be a star or a cluster of stars that are presently forming. A remarkable, narrow, linear dust filament extends over the image; it was known to exist before, but the SIMBA image now shows it to a much larger extent and much more clearly. This and the following images cover an area of about 15 arcmin x 6 arcmin on the sky and have a pixel size of 8 arcsec. ESO PR Photo 28b/01 Caption: This SIMBA image is centered on the object IRAS 18434-0242. It includes many bright sources that are associated with dense cores and compact HII regions located deep inside the cloud. A much less detailed map was made several years ago with a single-channel bolometer on SEST. The new SIMBA map is more extended and shows more sources. ESO PR Photo 28c/01 Caption: Another SIMBA image is centered on IRAS 17271-3439 and includes an extended bright source that is associated with several compact HII regions as well as a cluster of weaker sources.
Some of the recent SIMBA images are shown above; they were taken during test observations, and within a pilot survey of high-mass starforming regions . Stars form in interstellar clouds that consist of gas and dust. The denser parts of these clouds can collapse into cold and dense cores which may form stars. Often many stars are formed in clusters, at about the same time. The newborn stars heat up the surrounding regions of the cloud . Radiation is emitted, first at mm-wavelengths and later at infrared wavelengths as the cloud core gets hotter. If very massive stars are formed, their UV-radiation ionizes the immediate surrounding gas and this ionized gas also emits at mm-wavelengths. These ionized regions are called ultra compact HII regions . Because the stars form deep inside the interstellar clouds, the obscuration at visible wavelengths is very high and it is not possible to see these regions optically. The objects selected for the SIMBA survey are from a catalog of objects, first detected at long infrared wavelengths with the IRAS satellite (launched in 1983), hence the designations indicated in Photos 28a-c/01 . From 1995 to 1998, the ESA Infrared Space Observatory (ISO) gathered an enormous amount of valuable data, obtaining images and spectra in the broad infrared wavelength region from 2.5 to 240 µm (0.025 to 0.240 mm), i.e. just shortward of the millimetre region in which SIMBA operates. ISO produced mid-infrared images of field size and angular resolution (sharpness) comparable to those of SIMBA. It will obviously be most interesting to combine the images that will be made with SIMBA with imaging and spectral data from ISO and also with those obtained by large ground-based telescopes in the near- and mid-infrared spectral regions. 
Some technical details about the SIMBA instrument ESO PR Photo 28d/01 Caption: The SIMBA instrument - with the cover removed - in the SEST electronics laboratory. The 37 antenna horns are at the right; each produces one picture element (pixel) of the combined image. The bolometer elements are located behind the horns. The cylindrical aluminium-foil-covered unit is the cooler that keeps SIMBA at extremely low temperature (-272.85 °C, or only 0.3 degrees above absolute zero) when it is mounted in the telescope. SIMBA is unique because of its ability to quickly map large sky areas due to its fast scanning mode. In order to achieve low noise and good sensitivity, the instrument is cooled to only 0.3 degrees above absolute zero, i.e., to -272.85 °C. SIMBA consists of 37 horns (each providing one pixel on the sky) arranged in a hexagonal pattern, cf. Photo 28d/01. To form images, the sky position of the telescope is changed according to a raster pattern - in this way all of a celestial object and the surrounding sky field may be "scanned" quickly, at speeds of typically 80 arcsec per second. This makes SIMBA a very efficient facility: for instance, a fully sampled image of good sensitivity with a field size of 15 arcmin x 6 arcmin can be taken in 15 minutes. If higher sensitivity is needed (to observe fainter sources), more images may be obtained of the same field and then added together. Large sky areas can be covered by combining many images taken at different positions. The image resolution (the "telescope beamsize") is 22 arcsec, corresponding to the angular resolution of this 15-m telescope at the indicated wavelength. Note [1] Observations of the HDFS and CDFS fields in other wavebands with other telescopes at the ESO observatories have been reported earlier, e.g. within the ESO Imaging Survey Project (EIS) (the "EIS Deep-Survey").
It is the ESO policy on these fields to make data public world-wide.
High bit depth infrared image compression via low bit depth codecs
NASA Astrophysics Data System (ADS)
Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren
2017-08-01
Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16-bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16-bit depth images via 8-bit depth codecs in the following way. First, an input 16-bit depth image is mapped into two 8-bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16-bit depth format. Preliminary results show that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
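The MSB/LSB mapping is straightforward to state in code (this matches the example mapping given in the abstract; the substantive part of the paper, choosing different compression parameters for the noise-like LSB image versus the smooth MSB image, is not shown):

```python
def split_bytes(img16):
    """Split 16-bit pixels into an MSB image and an LSB image, each fitting
    an 8-bit codec's input format."""
    msb = [p >> 8 for p in img16]
    lsb = [p & 0xFF for p in img16]
    return msb, lsb

def merge_bytes(msb, lsb):
    """Reassemble the 16-bit image after both byte planes are decoded."""
    return [(m << 8) | l for m, l in zip(msb, lsb)]
```

Note the asymmetry this creates: an error of 1 in a decoded MSB pixel costs 256 grey levels in the merged image, while the same error in the LSB plane costs only 1, which is why the two codecs' parameters must be tuned jointly.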
Edge-Based Image Compression with Homogeneous Diffusion
NASA Astrophysics Data System (ADS)
Mainberger, Markus; Weickert, Joachim
It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
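The decoding step above, recovering everything between the stored edge values as the steady state of homogeneous diffusion, amounts to solving the Laplace equation with the encoded pixels as constraints. A minimal sketch using Jacobi iterations (the paper's actual numerical solver is not specified here):

```python
import numpy as np

def inpaint_homogeneous(values, known, n_iter=2000):
    """Steady state of homogeneous diffusion: iterate the discrete Laplace
    equation while keeping the stored (edge) pixels fixed. `known` is a
    boolean mask of pixels whose grey values were encoded."""
    u = values.astype(float).copy()
    for _ in range(n_iter):
        # 4-neighbour average (Jacobi step) with replicated borders
        p = np.pad(u, 1, mode='edge')
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        u = np.where(known, values, avg)
    return u
```

Between two fixed values the steady state is the harmonic (here linear) interpolant, which is why the method suits piecewise-smooth, cartoon-like images.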
Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon
2008-06-01
The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. Visually lossless threshold for the compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
Novel approach to multispectral image compression on the Internet
NASA Astrophysics Data System (ADS)
Zhu, Yanqiu; Jin, Jesse S.
2000-10-01
Still image coding techniques such as JPEG have traditionally been applied to intra-plane images, and coding fidelity is the usual measure of the performance of intra-plane coding methods. In many imaging applications, however, it is increasingly necessary to deal with multi-spectral images, such as colour images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes to further compress the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlations among planes based on the human visual system. The scheme achieves a highly compact data representation and strong compression performance.
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for evaluation of quality, file formats, and compression, as well as a large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner so that it can also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state of the art in HDR image compression.
Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.
Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya
2013-03-01
We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
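The combination step can be illustrated schematically: given the candidate filter outputs and the per-pixel weights predicted from local features, the HSF output is a convex combination. The sketch below assumes the weights are already available; the feature extraction and the EM-trained predictor are omitted:

```python
import numpy as np

def hsf_combine(filter_outputs, weights):
    """Sketch of the HSF output step: a per-pixel convex combination of
    candidate filter outputs, weighted by the predicted probability that
    each filter best restores that pixel.
    filter_outputs: (K, H, W); weights: (K, H, W), nonnegative."""
    weights = weights / weights.sum(axis=0, keepdims=True)  # normalise over K
    return (filter_outputs * weights).sum(axis=0)
```

Under the paper's probabilistic model this weighted average is exactly the Bayesian minimum mean square error estimate, with the weights playing the role of posterior probabilities.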
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that does not decrease the recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computational demands, while the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
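The CCR matching idea can be sketched with any off-the-shelf compressor; the paper uses JPEG, but zlib stands in below purely for illustration. The intuition is that a probe concatenated with a matching gallery image is highly redundant and therefore compresses disproportionately well:

```python
import zlib
import numpy as np

def csize(img):
    """Compressed size in bytes (zlib as an illustrative stand-in codec)."""
    return len(zlib.compress(img.tobytes(), 9))

def ccr_match(probe, gallery):
    """Sketch of CPB matching: the probe matches the gallery image whose
    concatenation ('mixed image') with it compresses best, i.e. yields the
    largest composite compression ratio."""
    sp = csize(probe)
    scores = []
    for g in gallery:
        mixed = np.concatenate([probe, g], axis=0)
        # composite ratio from the probe, gallery and mixed sizes
        scores.append((sp + csize(g)) / csize(mixed))
    return int(np.argmax(scores))
```

For an identical pair the mixed image compresses to roughly the size of one image (ratio near 2), while for unrelated images the sizes simply add (ratio near 1).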
JPEG 2000-based compression of fringe patterns for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Blinder, David; Bruylants, Tim; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter
2014-12-01
With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurements, vibration analysis, data encoding, and certification. Therefore, designing an efficient data representation technology is of particular importance. Off-axis holograms have very different signal properties with respect to regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f^2 power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjøntegaard delta-peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range and bit-rate reductions of up to 1.6 bpp for lossless compression.
Mutual information-based analysis of JPEG2000 contexts.
Liu, Zhen; Karam, Lina J
2005-04-01
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
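The central quantity above, the mutual information between contexts and the coded data, can be computed directly from a context/bit count table. The sketch below also illustrates the paper's merging criterion: contexts with identical conditional distributions can be combined without any loss of mutual information:

```python
import numpy as np

def context_mutual_information(counts):
    """I(C;X) from a table counts[c, x] of how often symbol x was coded in
    context c -- the quantity used to compare context classification schemes."""
    p = counts / counts.sum()
    pc = p.sum(axis=1, keepdims=True)   # context marginal
    px = p.sum(axis=0, keepdims=True)   # symbol marginal
    with np.errstate(divide='ignore', invalid='ignore'):
        t = p * np.log2(p / (pc * px))
    return float(np.nansum(t))          # 0*log(0) terms dropped
```

Merging two contexts whose conditional distributions differ strictly decreases this quantity, which is why the optimal scheme is found by maximizing it (via the modified generalized Lloyd algorithm in the paper).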
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
Fragmentation Point Detection of JPEG Images at DHT Using Validator
NASA Astrophysics Data System (ADS)
Mohamad, Kamaruddin Malik; Deris, Mustafa Mat
File carving is an important, practical technique for data recovery in digital forensics investigation and is particularly useful when filesystem metadata is unavailable or damaged. Research on the reassembly of JPEG files with RST markers, fragmented within the scan area, has been done before. However, fragmentation within the Define Huffman Table (DHT) segment is yet to be resolved. This paper analyzes fragmentation within the DHT area and lists all the fragmentation possibilities. Two main contributions are made in this paper. Firstly, three fragmentation points within the DHT area are identified. Secondly, a few novel validators are proposed to detect these fragmentations. The results obtained from tests on manually fragmented JPEG files show that all three fragmentation points within the DHT are successfully detected using the validators.
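As an illustration of what such a validator can check, the sketch below verifies the internal length consistency of a DHT segment (marker 0xFFC4, then a 2-byte length, then per table one class/id byte, 16 code-length counts, and the corresponding symbols). This is a generic consistency check in the spirit of the paper, not its specific validators:

```python
def validate_dht(segment):
    """Sketch of a DHT validator: checks that the lengths inside a Define
    Huffman Table segment are self-consistent, so that a fragmentation
    point inside the segment can be flagged.
    `segment` is a bytes object starting at the 0xFFC4 marker."""
    if segment[0:2] != b'\xff\xc4':
        return False
    length = int.from_bytes(segment[2:4], 'big')  # includes the 2 length bytes
    pos, end = 4, 2 + length
    while pos < end:
        if pos + 17 > end:
            return False              # truncated table header: fragmentation point
        counts = segment[pos + 1:pos + 17]
        pos += 17 + sum(counts)       # skip class/id byte, 16 counts, symbols
    return pos == end and len(segment) >= end
```

A carver can slide such a check over candidate fragments: a segment whose declared and actual table sizes disagree marks a fragmentation point.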
Setti, E; Musumeci, R
2001-06-01
The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Experts Group (JPEG) and Graphic Interchange Format (GIF). Currently, neither browser can display radiologic images in native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with some web browsers, even older versions. The software is free and available from the author.
Atmospheric Science Data Center
2014-05-15
article title: Los Alamos, New Mexico. Multi-angle views of the fire in Los Alamos, New Mexico, May 9, 2000. These true-color images cover north-central New Mexico ...
NASA Astrophysics Data System (ADS)
Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi
When surveillance cameras are used, there are cases where privacy protection must be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method combines ROI coding of JPEG2000 with a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.
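The two building blocks can be illustrated with a toy example: a brute-force template matcher locates the face region, which is then degraded. A real system would use a proper detector and JPEG2000 ROI coding at reduced quality; the SSD criterion and the flatten-to-mean degradation below are illustrative assumptions:

```python
import numpy as np

def find_template(img, tmpl):
    """Toy template matcher (sum of squared differences), standing in for
    the paper's face detector; returns the top-left corner of the best match."""
    H, W = img.shape
    h, w = tmpl.shape
    best, pos = None, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            d = np.sum((img[y:y + h, x:x + w].astype(float) - tmpl) ** 2)
            if best is None or d < best:
                best, pos = d, (y, x)
    return pos

def degrade_region(img, y, x, h, w):
    """Hide the detected region by flattening it to its mean value
    (a stand-in for encoding the ROI at strongly reduced quality)."""
    out = img.astype(float).copy()
    out[y:y + h, x:x + w] = out[y:y + h, x:x + w].mean()
    return out
```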
NASA Astrophysics Data System (ADS)
Kerner, H. R.; Bell, J. F., III; Ben Amor, H.
2017-12-01
The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
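The automatic labeling signal, joint entropy between a compressed and an uncompressed image, can be estimated from a joint histogram of the two images' pixel values. A minimal sketch (the bin count is an assumption, not the paper's setting):

```python
import numpy as np

def joint_entropy(a, b, bins=32):
    """Joint entropy H(A,B) of an image pair, estimated from a 2-D
    histogram -- usable as an automatic quality label for training,
    avoiding hand-labeling of thousands of examples."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty cells (0*log 0 -> 0)
    return float(-(p * np.log2(p)).sum())
```

For identical images the joint entropy collapses to the marginal entropy; compression distortion spreads mass off the diagonal of the joint histogram and raises it, which is what makes it a usable proxy label.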
López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón
2009-10-01
The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
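The correction described, a per-compression-level linear regression mapping the roundness measured on JPEG images back to its TIFF value, can be sketched as follows (synthetic numbers, not the study's fitted coefficients):

```python
import numpy as np

def roundness_correction(r_tiff, r_jpeg):
    """Derive a correction factor for one compression level: fit
    r_tiff ~ a * r_jpeg + b on training objects measured in both formats,
    and return a function applying the correction to new measurements."""
    a, b = np.polyfit(r_jpeg, r_tiff, 1)
    return lambda r: a * r + b
```

One such correction would be fitted per compression level (1:3, 1:23, 1:46) and built into the quantification macros, as the study describes.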
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
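The sensor-level CS acquisition can be sketched as a random projection of each flattened pixel block, with reconstruction happening off-sensor. The ±1 sensing matrix below is a common CS choice but an assumption here; the paper's pixel-level circuit is not modelled:

```python
import numpy as np

def cs_measure(x, m, seed=0):
    """On-sensor compressed sensing sketch: take m random +/-1 projections
    of the flattened pixel block x, so only m values (m < x.size) need to
    be digitized and transmitted."""
    rng = np.random.default_rng(seed)
    phi = rng.choice([-1.0, 1.0], size=(m, x.size))  # sensing matrix
    return phi, phi @ x.ravel()

# Reconstruction (e.g. via l1 minimisation) would run off-sensor.
```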
Web surveillance system using platform-based design
NASA Astrophysics Data System (ADS)
Lin, Shin-Yo; Tsai, Tsung-Han
2004-04-01
A revolutionary SOPC platform-based design environment for multimedia communications is developed. We embed a soft-core processor in an FPGA to perform the image compression and plug an Ethernet daughter board into the SOPC development platform. On this basis, a web surveillance platform system is presented. The web surveillance system consists of three parts: image capture, web server, and JPEG compression. In this architecture, the user can control the surveillance system remotely. By configuring the IP address of the Ethernet daughter board, the user can access the surveillance system via a browser. When the user accesses the surveillance system, the CMOS sensor captures the remote image and feeds it to the embedded processor, which immediately performs the JPEG compression. The user then receives the compressed data via Ethernet. The whole system is implemented on an APEX20K200E484-2X device.
Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L
2013-02-12
Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation of medical images. Then, least-squares (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
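A region-wise LS predictor can be sketched in one dimension: fit weights for k causal neighbours by least squares and entropy-code the residuals. The raster-scan neighbourhood below is a simplification of the 2-D causal template such coders actually use:

```python
import numpy as np

def ls_predictor(pixels, k=3):
    """Least-squares predictor sketch for one region: learn weights for the
    k causal neighbours (here simply the k previous samples of a raster
    scan) and return the weights plus the residuals to be entropy coded."""
    X = np.array([pixels[i - k:i] for i in range(k, len(pixels))], float)
    y = np.asarray(pixels[k:], float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # per-region weights
    residuals = y - X @ w
    return w, residuals
```

Fitting separate weights per segmented region is what lets the predictor adapt to local structure, at the cost of solving one small least-squares problem per region.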
Two VLT 8.2-m Unit Telescopes in Action
NASA Astrophysics Data System (ADS)
1999-04-01
Visitors at ANTU - Astronomical Images from KUEYEN The VLT Control Room at the Paranal Observatory is becoming a busy place indeed. From here, two specialist teams of ESO astronomers and engineers now operate two VLT 8.2-m Unit Telescopes in parallel, ANTU and KUEYEN (formerly UT1 and UT2; for more information about the naming and the pronunciation, see ESO Press Release 06/99). Regular science observations have just started with the first of these giant telescopes, while impressive astronomical images are being obtained with the second. The work is hard, but the mood in the control room is good. Insiders claim that there have even been occasions on which the groups have had a friendly "competition" about which telescope makes the "best" images! The ANTU team has worked with the FORS multi-mode instrument, while their colleagues at KUEYEN use the VLT Test Camera for the ongoing tests of this new telescope. While the first is a highly developed astronomical instrument with a large-field CCD imager (6.8 x 6.8 arcmin² in the normal mode; 3.4 x 3.4 arcmin² in the high-resolution mode), the other is a less complex CCD camera with a smaller field (1.5 x 1.5 arcmin²), suited to verifying the optical performance of the telescope. As these images demonstrate, the performance of the second VLT Unit Telescope is steadily improving and it may not be too long before its optical quality approaches that of the first. First KUEYEN photos of stars and galaxies We present here some of the first astronomical images taken with the second telescope, KUEYEN, in late March and early April 1999. They reflect the current status of the optical, electronic and mechanical systems, still in the process of being tuned. As expected, the experience gained from ANTU last year has turned out to be invaluable and has allowed good progress during this extremely delicate process.
ESO PR Photo 19a/99 [Preview - JPEG: 400 x 433 pix - 160k] [Normal - JPEG: 800 x 866 pix - 457k] [High-Res - JPEG: 1985 x 2148 pix - 2.0M] ESO PR Photo 19b/99 [Preview - JPEG: 400 x 478 pix - 165k] [Normal - JPEG: 800 x 956 pix - 594k] [High-Res - JPEG: 3000 x 3583 pix - 7.1M] Caption to PR Photo 19a/99: This photo was obtained with VLT KUEYEN on April 4, 1999. It is reproduced from an excellent 60-second R(ed)-band exposure of the innermost region of a globular cluster, Messier 68 (NGC 4590), in the southern constellation Hydra (The Water-Snake). The distance to this 8-mag cluster is about 35,000 light-years, and the diameter is about 140 light-years. The image quality is an excellent 0.38 arcsec, demonstrating the good optical and mechanical state of the telescope, already at this early stage of the commissioning phase. The field measures about 90 x 90 arcsec². The original scale is 0.0455 arcsec/pix and there are 2048 x 2048 pixels in one frame. North is up and East is left. Caption to PR Photo 19b/99: This photo shows the central region of the spiral galaxy ESO 269-57, located in the southern constellation Centaurus at a distance of about 150 million light-years. Many galaxies are seen in this direction at about the same distance, forming a loose cluster; there are also some fainter, more distant ones in the background. The designation refers to the ESO/Uppsala Survey of the Southern Sky in the 1970's, during which over 15,000 southern galaxies were catalogued. ESO 269-57 is a tightly bound object of type Sa(r), the "r" referring to the "ring" that surrounds the bright centre, which is overexposed here. The photo is a composite, based on three exposures (Blue - 600 sec; Yellow-Green - 300 sec; Red - 300 sec) obtained with KUEYEN on March 28, 1999. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec². North is up and East is left.
ESO PR Photo 19c/99 [Preview - JPEG: 400 x 478 pix - 132k] [Normal - JPEG: 800 x 956 pix - 446k] [High-Res - JPEG: 3000 x 3583 pix - 4.6M] ESO PR Photo 19d/99 [Preview - JPEG: 400 x 454 pix - 86k] [Normal - JPEG: 800 x 907 pix - 301k] [High-Res - JPEG: 978 x 1109 pix - 282k] Caption to PR Photo 19c/99: Somewhat further out in space, right on the border between the southern constellations Hydra and Centaurus, lies this knotty spiral galaxy, IC 4248; the distance is about 210 million light-years. It was imaged with KUEYEN on March 28, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.75 arcsec and the field is 90 x 90 arcsec². North is up and East is left. Caption to PR Photo 19d/99: This is a close-up view of the double galaxy NGC 5090 (right) and NGC 5091 (left), in the southern constellation Centaurus. The first is a typical S0 galaxy with a bright diffuse centre, surrounded by a fainter envelope of stars (not resolved in this picture). However, some of the starlike objects seen in this region may be globular clusters (or dwarf galaxies) in orbit around NGC 5090. The other galaxy is of type Sa (the spiral structure is more developed) and is seen at a steep angle. The three-colour composite is based on frames obtained with KUEYEN on March 29, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec². North is up and East is left. (Note inserted on April 26: The original caption text identified the second galaxy as NGC 5090B; this error has now been corrected.)
ESO PR Photo 19e/99 [Preview - JPEG: 400 x 441 pix - 282k] [Normal - JPEG: 800 x 882 pix - 966k] [High-Res - JPEG: 3000 x 3307 pix - 6.4M] Caption to PR Photo 19e/99: Wide-angle photo of the second 8.2-m VLT Unit Telescope, KUEYEN, obtained on March 10, 1999, with the main mirror and its cell in place at the bottom of the telescope structure. The Test Camera with which the astronomical images above were made is positioned at the Cassegrain focus, inside this mirror cell. The Paranal Inauguration on March 5, 1999, took place under this telescope, which was tilted towards the horizon to accommodate nearly 300 persons on the observing floor. Astronomical observations with ANTU have started On April 1, 1999, the first 8.2-m VLT Unit Telescope, ANTU, was "handed over" to the astronomers. Last year, about 270 observing proposals competed for the first, precious observing time at Europe's largest optical telescope, and more than 100 of these were accommodated within the six-month period until the end of September 1999. The complete observing schedule is available on the web. These observations will be carried out in two different modes. During the Visitor Mode, the astronomers are present at the telescope, while in the Service Mode, ESO observers perform the observations. The latter procedure allows a greater degree of flexibility and the possibility to assign periods of particularly good observing conditions to programmes whose success is critically dependent on this. The first ten nights at ANTU were allocated to service-mode observations. After some initial technical problems with the instruments, these have now started. Already in the first night, programmes at ISAAC requiring 0.4 arcsec conditions could be satisfied, and some images better than 0.3 arcsec were obtained in the near-infrared.
The first astronomers to use the telescope in Visitor Mode will be Professors Immo Appenzeller (Heidelberg, Germany; "Photo-polarimetry of pulsars") and George Miley (Leiden, The Netherlands; "Distant radio galaxies") with their respective team colleagues. How to obtain ESO Press Information: ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org/). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory. Note also the dedicated web area with VLT Information.
Overview of the JPEG XS objective evaluation procedures
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Richter, Thomas; Rosewarne, Chris; Macq, Benoit
2017-09-01
JPEG XS is a standardization activity conducted by the Joint Photographic Experts Group (JPEG), formally the ISO/IEC JTC 1/SC 29/WG 1 group, that aims at standardizing a low-latency, lightweight and visually lossless video compression scheme. This codec is intended to be used in applications where image sequences would otherwise be transmitted or stored in uncompressed form, such as in live production (through SDI or IP transport), display links, or frame buffers. Support for compression ratios ranging from 2:1 to 6:1 allows significant bandwidth and power reduction for signal propagation. This paper describes the objective quality assessment procedures conducted as part of the JPEG XS standardization activity. First, it discusses the objective part of the experiments that led to the technology selection during the 73rd WG1 meeting in late 2016. This assessment consists of PSNR measurements after single and multiple compression/decompression cycles at various compression ratios. After this assessment phase, two proposals among the six responses to the Call for Proposals (CfP) were selected and merged to form the first JPEG XS test model (XSM). The paper then describes the core experiments (CEs) conducted so far on the XSM. These experiments are intended to evaluate its performance in more challenging scenarios, such as insertion of picture overlays and robustness to frame editing, to assess the impact of the different algorithmic choices, and to measure the XSM performance using the HDR-VDP metric.
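The multi-generation PSNR assessment described above can be sketched in a few lines. The codec round trip here is a toy uniform quantiser standing in for a real JPEG XS implementation; the function names and the quantisation step are illustrative assumptions, not part of the standard:

```python
import math

def psnr(ref, test, max_val=255):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

def roundtrip(pixels, step=8):
    """Toy codec round trip: uniform quantisation to bin centres."""
    return [(p // step) * step + step // 2 for p in pixels]

original = list(range(256))
decoded = original
for cycle in (1, 2, 3):            # multiple compression/decompression cycles
    decoded = roundtrip(decoded)
    print(cycle, round(psnr(original, decoded), 2))
# With this idealised quantiser, PSNR settles after the first generation,
# which is the multi-generation stability property the CfP tests probe.
```

A real evaluation would run the actual codec at each target compression ratio; the structure of the measurement loop is the same.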
Digital cinema system using JPEG2000 movie of 8-million pixel resolution
NASA Astrophysics Data System (ADS)
Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu
2003-05-01
We have developed a prototype digital cinema system that can store, transmit and display extra-high-quality movies of 8-million-pixel resolution, using the JPEG2000 coding algorithm. The resolution is 4 times that of HDTV, enabling conventional films to be replaced with digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. The coded streams of 300-500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder can perform real-time decompression at 24/48 frames per second, using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses 3 reflective LCD panels (D-ILA) of 3840×2048 pixels to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens on a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker, while preserving compatibility with cinema movies of 24 frames per second.
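As a rough sanity check of the figures quoted above, the raw video rate can be reconstructed from the panel geometry. Note that the quoted 4.5 Gbps matches 24 bits per pixel at 24 frames per second; the 30-bit figure would imply roughly 5.7 Gbps, so the 24-bit choice below is an assumption made to match the quoted rate:

```python
# Assumed values: one 3840x2048 panel, 24 bits per pixel, 24 fps.
width, height = 3840, 2048
bits_per_pixel = 24
fps = 24

raw_bps = width * height * bits_per_pixel * fps
print(f"raw video rate: {raw_bps / 1e9:.2f} Gbps")   # raw video rate: 4.53 Gbps

# Compression ratios implied by a 300-500 Mbps coded stream:
for coded_mbps in (300, 500):
    print(f"{coded_mbps} Mbps -> {raw_bps / (coded_mbps * 1e6):.0f}:1 compression")
```

The implied compression ratios (roughly 9:1 to 15:1) are consistent with visually lossless JPEG2000 coding of cinema material.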
Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu
2016-12-20
In this paper, we evaluate lossless and lossy compression techniques to compress quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the case of lossless compression, predictive coding of JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JPEG2000 can outperform the other methods by achieving the best CR. In the lossy case, JPEG2000 and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBC before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JPEG2000 outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time in the lossy case. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC can be preserved, and morphological and biochemical parameters can still be within the range of reported values.
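The evaluation methodology above (measure MSE and morphological surrogates before and after lossy coding, at increasing compression ratios) can be mimicked on a toy 16-bit "phase image". The uniform quantiser below is only a stand-in for JP2K/JP3D, and all values are illustrative assumptions:

```python
def quantise(img, step):
    """Stand-in lossy codec: uniform quantisation of 16-bit samples.
    A coarser step plays the role of a higher compression ratio."""
    return [(v // step) * step for v in img]

def mse(a, b):
    """Mean square error between two equal-length sample sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

phase = [(i * 37) % 65536 for i in range(1024)]    # toy 16-bit phase map
baseline_volume = sum(phase)                       # morphological surrogate

for step in (16, 256, 4096):                       # coarser step ~ higher CR
    rec = quantise(phase, step)
    volume_err = abs(sum(rec) - baseline_volume) / baseline_volume
    print(f"step {step}: MSE {mse(phase, rec):.1f}, volume error {volume_err:.3%}")
```

As in the paper, the point is that MSE grows with CR while an integrated morphological parameter (here a crude "volume") can remain within a small relative error for moderate compression.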
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate the bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation; our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
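The bit-plane discarding quantization model mentioned above can be illustrated with a toy quantisation-index-modulation embedder: a watermark bit survives as long as the discarded bit-planes stay below the embedding quantisation step. All parameters here (the step delta, the coefficient values) are illustrative assumptions, not the paper's actual algorithm:

```python
def embed_bit(coeff, bit, delta):
    """Toy quantisation-index-modulation embedding with step `delta`:
    snap the coefficient to a bin whose index parity encodes the bit."""
    q = round(coeff / delta)
    if q % 2 != bit:
        q += 1
    return q * delta

def discard_bitplanes(coeff, n):
    """Model of quality-scalable truncation: zero the n lowest bit-planes."""
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) >> n) << n)

c = embed_bit(173, 1, delta=32)          # embed watermark bit 1 -> coefficient 160
for n in (4, 5, 6):                      # increasingly severe truncation
    r = discard_bitplanes(c, n)
    extracted = round(r / 32) % 2
    print(f"discard {n} bit-planes: coeff {r}, extracted bit {extracted}")
# The bit survives while 2**n <= delta and flips once truncation exceeds it.
```

This is the intuition behind designing the embedding quantiser around the bit-plane discarding model: robustness is guaranteed up to a predictable truncation depth.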
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
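A minimal sketch of the MRC layered model described above: a binary mask selects, per pixel, between a foreground layer (e.g. text colour) and a background layer (picture content), and each layer can then be sent to the coder best suited to it. The thresholding rule below is a simplistic stand-in for a real segmenter:

```python
def mrc_decompose(image, threshold=128):
    """Split a grayscale image into (mask, foreground, background) layers.
    Dark pixels are treated as "text" and routed to the foreground."""
    mask = [[1 if p < threshold else 0 for p in row] for row in image]
    fg = [[p if m else 0 for p, m in zip(row, mrow)]
          for row, mrow in zip(image, mask)]
    bg = [[255 if m else p for p, m in zip(row, mrow)]
          for row, mrow in zip(image, mask)]
    return mask, fg, bg

def mrc_compose(mask, fg, bg):
    """Reassemble: the mask chooses foreground over background per pixel."""
    return [[f if m else b for m, f, b in zip(mrow, frow, brow)]
            for mrow, frow, brow in zip(mask, fg, bg)]

img = [[0, 0, 200], [210, 30, 220]]      # dark "text" strokes on a light picture
mask, fg, bg = mrc_decompose(img)
assert mrc_compose(mask, fg, bg) == img  # lossless reassembly before coding
```

In a real MRC pipeline the mask would go to a binary coder (e.g. JBIG), the background to a continuous-tone coder (e.g. JPEG), and the masked-out regions of each layer could be filled with whatever values compress best, since the mask hides them.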
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (Laplacian prior for DCT coefficients, sparsity prior, and graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms the state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
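The hard- versus soft-decoding distinction above can be made concrete for a uniform scalar quantiser: hard decoding always returns the bin centre, while soft decoding may return any value inside the indexed bin, here the value closest to a prior-based estimate. This is a sketch of the bin constraint only, not of the paper's graph-Laplacian prior:

```python
def bin_bounds(index, step):
    """Quantisation bin [lo, hi] for a uniform scalar quantiser."""
    return (index - 0.5) * step, (index + 0.5) * step

def hard_decode(index, step):
    """Hard decoding: always the bin centre."""
    return index * step

def soft_decode(index, step, prior):
    """Soft decoding: the prior-based estimate, clipped to the indexed bin.
    Clipping keeps the reconstruction consistent with the transmitted index."""
    lo, hi = bin_bounds(index, step)
    return min(max(prior, lo), hi)

step, idx = 10, 3                         # quantiser emitted bin index 3 -> [25, 35]
print(hard_decode(idx, step))             # 30
print(soft_decode(idx, step, prior=27.2)) # 27.2 (inside the bin, kept as-is)
print(soft_decode(idx, step, prior=50.0)) # 35.0 (clipped to the bin edge)
```

Any prior (Laplacian, sparsity, graph smoothness) plays the role of the estimate; the quantisation bin is the hard constraint that guarantees the soft solution still decodes to the same compressed stream.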
Compressing images for the Internet
NASA Astrophysics Data System (ADS)
Beretta, Giordano B.
1998-01-01
The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using design and layout styles similar to those in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.
JHelioviewer. Time-dependent 3D visualisation of solar and heliospheric data
NASA Astrophysics Data System (ADS)
Müller, D.; Nicula, B.; Felix, S.; Verstringe, F.; Bourgoignie, B.; Csillaghy, A.; Berghmans, D.; Jiggens, P.; García-Ortiz, J. P.; Ireland, J.; Zahniy, S.; Fleck, B.
2017-09-01
Context. Solar observatories are providing the world-wide community with a wealth of data, covering wide time ranges (e.g. Solar and Heliospheric Observatory, SOHO), multiple viewpoints (Solar TErrestrial RElations Observatory, STEREO), and returning large amounts of data (Solar Dynamics Observatory, SDO). In particular, the large volume of SDO data presents challenges; the data are available only from a few repositories, and full-disk, full-cadence data for reasonable durations of scientific interest are difficult to download, due to their size and the download rates available to most users. From a scientist's perspective this poses three problems: accessing, browsing, and finding interesting data as efficiently as possible. Aims: To address these challenges, we have developed JHelioviewer, a visualisation tool for solar data based on the JPEG 2000 compression standard and part of the open source ESA/NASA Helioviewer Project. Since the first release of JHelioviewer in 2009, the scientific functionality of the software has been extended significantly, and the objective of this paper is to highlight these improvements. Methods: The JPEG 2000 standard offers useful new features that facilitate the dissemination and analysis of high-resolution image data and offers a solution to the challenge of efficiently browsing petabyte-scale image archives. The JHelioviewer software is open source, platform independent, and extendable via a plug-in architecture. Results: With JHelioviewer, users can visualise the Sun for any time period between September 1991 and today; they can perform basic image processing in real time, track features on the Sun, and interactively overlay magnetic field extrapolations. The software integrates solar event data and a timeline display. Once an interesting event has been identified, science quality data can be accessed for in-depth analysis. 
As a first step towards supporting science planning of the upcoming Solar Orbiter mission, JHelioviewer offers a virtual camera model that enables users to set the vantage point to the location of a spacecraft or celestial body at any given time.
Forensic Analysis of Digital Image Tampering
2004-12-01
analysis of when each method fails, which Chapter 4 discusses. Finally, a test image containing an invisible watermark using LSB steganography is... Figure 2.2 – Example of invisible watermark using Steganography Software F5... Figure 2.3 – Example of copy-move image forgery [12... Figure 3.11 – Algorithm for JPEG Block Technique... Figure 3.12 – "Forged" Image with Result
A Unified Steganalysis Framework
2013-04-01
contains more than 1800 images of different scenes. In the experiments, we used four JPEG based steganography techniques: Outguess [13], F5 [16], model... also compressed these images again since some of the steganography methods are double compressing the images. Stego-images are generated by embedding... randomly chosen messages (in bits) into 1600 grayscale images using each of the four steganography techniques. A random message length was determined
NASA Astrophysics Data System (ADS)
Kim, Christopher Y.
1999-05-01
Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium which would serve both the clinical and educational purposes of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.
Sharper and Deeper Views with MACAO-VLTI
NASA Astrophysics Data System (ADS)
2003-05-01
"First Light" with Powerful Adaptive Optics System for the VLT Interferometer Summary On April 18, 2003, a team of engineers from ESO celebrated the successful accomplishment of "First Light" for the MACAO-VLTI Adaptive Optics facility on the Very Large Telescope (VLT) at the Paranal Observatory (Chile). This is the second Adaptive Optics (AO) system put into operation at this observatory, following the NACO facility ( ESO PR 25/01 ). The achievable image sharpness of a ground-based telescope is normally limited by the effect of atmospheric turbulence. However, with Adaptive Optics (AO) techniques, this major drawback can be overcome so that the telescope produces images that are as sharp as theoretically possible, i.e., as if they were taken from space. The acronym "MACAO" stands for "Multi Application Curvature Adaptive Optics" which refers to the particular way optical corrections are made which "eliminate" the blurring effect of atmospheric turbulence. The MACAO-VLTI facility was developed at ESO. It is a highly complex system of which four, one for each 8.2-m VLT Unit Telescope, will be installed below the telescopes (in the Coudé rooms). These systems correct the distortions of the light beams from the large telescopes (induced by the atmospheric turbulence) before they are directed towards the common focus at the VLT Interferometer (VLTI). The installation of the four MACAO-VLTI units of which the first one is now in place, will amount to nothing less than a revolution in VLT interferometry . An enormous gain in efficiency will result, because of the associated 100-fold gain in sensitivity of the VLTI. Put in simple words, with MACAO-VLTI it will become possible to observe celestial objects 100 times fainter than now . Soon the astronomers will be thus able to obtain interference fringes with the VLTI ( ESO PR 23/01 ) of a large number of objects hitherto out of reach with this powerful observing technique, e.g. external galaxies. 
The ensuing high-resolution images and spectra will open entirely new perspectives in extragalactic research and also in the studies of many faint objects in our own galaxy, the Milky Way. During the present period, the first of the four MACAO-VLTI facilities was installed, integrated and tested by means of a series of observations. For these tests, an infrared camera was specially developed which allowed a detailed evaluation of the performance. It also provided some first, spectacular views of various celestial objects, some of which are shown here. PR Photo 12a/03: View of the first MACAO-VLTI facility at Paranal. PR Photo 12b/03: The star HIC 59206 (uncorrected image). PR Photo 12c/03: HIC 59206 (AO corrected image). PR Photo 12e/03: HIC 69495 (AO corrected image). PR Photo 12f/03: 3-D plot of HIC 69495 images (without and with AO correction). PR Photo 12g/03: 3-D plot of the artificially dimmed star HIC 74324 (without and with AO correction). PR Photo 12d/03: The MACAO-VLTI commissioning team at "First Light". PR Photo 12h/03: K-band image of the Galactic Center. PR Photo 12i/03: K-band image of the unstable star Eta Carinae. PR Photo 12j/03: K-band image of the peculiar star Frosty Leo. MACAO - the Multi Application Curvature Adaptive Optics facility. ESO PR Photo 12a/03. Captions: PR Photo 12a/03 is a front view of the first MACAO-VLTI unit, now installed at the 8.2-m VLT KUEYEN telescope. Adaptive Optics (AO) systems work by means of a computer-controlled deformable mirror (DM) that counteracts the image distortion induced by atmospheric turbulence. It is based on real-time optical corrections computed from image data obtained by a "wavefront sensor" (a special camera) at very high speed, many hundreds of times each second.
The ESO Multi Application Curvature Adaptive Optics (MACAO) system uses a 60-element bimorph deformable mirror (DM) and a 60-element curvature wavefront sensor, with a "heartbeat" of 350 Hz (350 corrections per second). With this high spatial and temporal correcting power, MACAO is able to nearly restore the theoretically possible ("diffraction-limited") image quality of an 8.2-m VLT Unit Telescope in the near-infrared region of the spectrum, at a wavelength of about 2 µm. The resulting image resolution (sharpness) of the order of 60 milli-arcsec is an improvement by more than a factor of 10 as compared to standard seeing-limited observations. Without the benefit of the AO technique, such image sharpness could only be obtained if the telescope were placed above the Earth's atmosphere. The technical development of MACAO-VLTI in its present form began in 1999 and, with project reviews at 6-month intervals, the project quickly reached cruising speed. The effective design is the result of a very fruitful collaboration between the AO department at ESO and European industry, which contributed the diligent fabrication of numerous high-tech components, including the bimorph DM with 60 actuators, a fast-reaction tip-tilt mount and many others. The assembly, tests and performance-tuning of this complex real-time system were undertaken by ESO-Garching staff. Installation at Paranal. The first crates of the 60+ cubic-metre shipment of MACAO components arrived at the Paranal Observatory on March 12, 2003. Shortly thereafter, ESO engineers and technicians began the painstaking assembly of this complex instrument below the VLT 8.2-m KUEYEN telescope (formerly UT2). They followed a carefully planned scheme, involving installation of the electronics, water cooling systems, and mechanical and optical components. At the end, they performed the demanding optical alignment, delivering a fully assembled instrument one week before the planned first test observations.
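The figure of roughly 60 milli-arcsec quoted above is consistent with the diffraction limit of an 8.2-m aperture at 2.2 µm; a quick check, using the Rayleigh criterion as an assumed definition of the resolution:

```python
import math

wavelength = 2.2e-6                      # observing wavelength in metres (K band)
aperture = 8.2                           # telescope diameter in metres
rad_to_arcsec = 180 / math.pi * 3600     # radians to arcseconds

theta = 1.22 * wavelength / aperture * rad_to_arcsec   # Rayleigh criterion
print(f"diffraction limit: {theta * 1000:.0f} milli-arcsec")   # ~68 mas
```

Dropping the 1.22 factor (the simple lambda/D estimate) gives about 55 mas, so "of the order of 60 milli-arcsec" brackets both conventions.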
This extra week provided a very welcome and useful opportunity to perform a multitude of tests and calibrations in preparation for the actual observations. AO in the service of Interferometry. The VLT Interferometer (VLTI) combines starlight captured by two or more 8.2-m VLT Unit Telescopes (later also from four movable 1.8-m Auxiliary Telescopes) and vastly increases the image resolution. The light beams from the telescopes are brought together "in phase" (coherently). Starting out at the primary mirrors, they undergo numerous reflections along their different paths over total distances of several hundred meters before they reach the interferometric laboratory, where they are combined to within a fraction of a wavelength, i.e., within nanometers! The gain from the interferometric technique is enormous - combining the light beams from two telescopes separated by 100 metres allows observation of details which could otherwise only be resolved by a single telescope with a diameter of 100 metres. Sophisticated data reduction is necessary to interpret interferometric measurements and to deduce important physical parameters of the observed objects, like the diameters of stars, etc., cf. ESO PR 22/02. The VLTI measures the degree of coherence of the combined beams as expressed by the contrast of the observed interferometric fringe pattern. The higher the degree of coherence between the individual beams, the stronger is the measured signal. By removing wavefront aberrations introduced by atmospheric turbulence, the MACAO-VLTI systems enormously increase the efficiency of combining the individual telescope beams. In the interferometric measurement process, the starlight must be injected into optical fibers which are extremely small in order to accomplish their function; only 6 µm (0.006 mm) in diameter.
Without the "refocussing" action of MACAO, only a tiny fraction of the starlight captured by the telescopes can be injected into the fibers and the VLTI would not be working at the peak of efficiency for which it has been designed. MACAO-VLTI will now allow a gain of a factor 100 in the injected light flux - this will be tested in detail when two VLT Unit Telescopes, both equipped with MACAO-VLTIs, work together. However, the very good performance actually achieved with the first system makes the engineers very confident that a gain of this order will indeed be reached. This ultimate test will be performed as soon as the second MACAO-VLTI system has been installed later this year. MACAO-VLTI First Light. After one month of installation work and following tests by means of an artificial light source installed in the Nasmyth focus of KUEYEN, MACAO-VLTI had "First Light" on April 18 when it received "real" light from several astronomical objects. During the preceding performance tests to measure the image improvement (sharpness, light energy concentration) in near-infrared spectral bands at 1.2, 1.6 and 2.2 µm, MACAO-VLTI was checked by means of a custom-made Infrared Test Camera developed for this purpose by ESO. This intermediate test was required to ensure the proper functioning of MACAO before it is used to feed a corrected beam of light into the VLTI. After only a few nights of testing and optimizing of the various functions and operational parameters, MACAO-VLTI was ready to be used for astronomical observations. The images below were taken under average seeing conditions and illustrate the improvement of the image quality when using MACAO-VLTI. MACAO-VLTI - First Images. Here are some of the first images obtained with the test camera at the first MACAO-VLTI system, now installed at the 8.2-m VLT KUEYEN telescope.
Captions: PR Photos 12b-c/03 show the first image, obtained by the first MACAO-VLTI system at the 8.2-m VLT KUEYEN telescope in the infrared K-band (wavelength 2.2 µm). It displays images of the star HIC 59206 (visual magnitude 10) obtained before (left; Photo 12b/03) and after (right; Photo 12c/03) the adaptive optics system was switched on. The binary is separated by 0.120 arcsec and the image was taken under medium seeing conditions (0.75 arcsec). The dramatic improvement in image quality is obvious. Captions: PR Photo 12d/03 shows one of the best images obtained with MACAO-VLTI (logarithmic intensity scale). The seeing was 0.8 arcsec at the time of the observations and three diffraction rings can clearly be seen around the star HIC 69495 of visual magnitude 9.9. This pattern is only well visible when the image resolution is very close to the theoretical limit. The exposure of the point-like source lasted 100 seconds through a narrow K-band filter. It has a Strehl ratio (a measure of light concentration) of about 55% and a Full-Width-Half-Maximum (FWHM) of 0.060 arcsec. The 3-D plot (PR Photo 12e/03) demonstrates the tremendous gain in peak intensity of the AO image (right) as compared to the "open-loop" image (the "noise" to the left) obtained without the benefit of AO.
Caption: PR Photo 12f/03 demonstrates the correction performance of MACAO-VLTI when using a faint guide star. The observed star, HIC 74324 (stellar spectral type G0 and visual magnitude 9.4), was artificially dimmed by a neutral optical filter to visual magnitude 16.5. The observation was carried out in 0.55 arcsec seeing and with a rather short atmospheric correlation time of 3 milliseconds at visible wavelengths. The Strehl ratio in the 25-second K-band exposure is about 10% and the FWHM is 0.14 arcsec. The uncorrected image is shown to the left for comparison. The improvement is again impressive, even for a star as faint as this, indicating that guide stars of this magnitude are feasible during future observations. Captions: PR Photo 12g/03 shows some of the MACAO-VLTI commissioning team members in the VLT Control Room at the moment of "First Light" during the night between April 18-19, 2003. Sitting: Markus Kasper, Enrico Fedrigo. Standing: Robin Arsenault, Sebastien Tordo, Christophe Dupuy, Toomas Erm, Jason Spyromilio, Rob Donaldson (all from ESO). PR Photos 12b-c/03 show the first image in the infrared K-band (wavelength 2.2 µm) of a star (visual magnitude 10) obtained without and with image corrections by means of adaptive optics. PR Photo 12d/03 displays one of the best images obtained with MACAO-VLTI during the early tests. It shows a Strehl ratio (measure of light concentration) that fulfils the specifications according to which MACAO-VLTI was built. This enormous improvement when using AO techniques is clearly demonstrated in PR Photo 12e/03, with the uncorrected image profile (left) hardly visible when compared to the corrected profile (right).
PR Photo 12f/03 demonstrates the correction capabilities of MACAO-VLTI when using a faint guide star. Tests using different spectral types showed that the limiting visual magnitude varies between 16 for early-type B-stars and about 18 for late-type M-stars. Astronomical Objects seen at the Diffraction Limit. The following examples of MACAO-VLTI observations of two well-known astronomical objects were obtained in order to provisionally evaluate the research opportunities now opening with MACAO-VLTI. They may well be compared with space-based images. The Galactic Center. Caption: PR Photo 12h/03 shows a 90-second K-band exposure of the central 6 x 13 arcsec² around the Galactic Center obtained by MACAO-VLTI under average atmospheric conditions (0.8 arcsec seeing). Although the 14.6-magnitude guide star is located roughly 20 arcsec from the field center, leading to isoplanatic degradation of image sharpness, the present image is nearly diffraction limited and has a point-source FWHM of about 0.115 arcsec. The center of our own galaxy is located in the Sagittarius constellation at a distance of approximately 30,000 light-years. PR Photo 12h/03 shows a short-exposure infrared view of this region, obtained by MACAO-VLTI during the early test phase. Recent AO observations using the NACO facility at the VLT provide compelling evidence that a supermassive black hole with 2.6 million solar masses is located at the very center, cf. ESO PR 17/02. This result, based on astrometric observations of a star orbiting the black hole and approaching it to within a distance of only 17 light-hours, would not have been possible without images of diffraction-limited resolution.
Eta Carinae. Caption: PR Photo 12i/03 displays an infrared narrow K-band image of the massive star Eta Carinae. The image quality is difficult to estimate because the central star saturated the detector, but the clear structure of the diffraction spikes and the size of the smallest features visible in the photo indicate near-diffraction-limited performance. The field measures about 6.5 x 6.5 arcsec². Eta Carinae is one of the heaviest stars known, with a mass that probably exceeds 100 solar masses. It is about 4 million times brighter than the Sun, making it one of the most luminous stars known. Such a massive star has a comparatively short lifetime of only about 1 million years and, measured on the cosmic timescale, Eta Carinae must have formed quite recently. This star is highly unstable and prone to violent outbursts. They are caused by the very high radiation pressure at the star's upper layers, which blows significant portions of the matter at the "surface" into space during violent eruptions that may last several years. The last of these outbursts occurred between 1835 and 1855 and peaked in 1843. Despite its comparatively large distance of some 7,500 to 10,000 light-years, Eta Carinae briefly became the second brightest star in the sky at that time (with an apparent magnitude of -1), surpassed only by Sirius. Frosty Leo. Caption: PR Photo 12j/03 shows a 5 x 5 arcsec² K-band image of the peculiar star known as "Frosty Leo" obtained in 0.7 arcsec seeing. Although the object is comparatively bright (visual magnitude 11), it is a difficult AO target because of its extension of about 3 arcsec at visible wavelengths. The corrected image quality is about FWHM 0.1 arcsec.
Frosty Leo is a magnitude 11 (post-AGB) star surrounded by an envelope of gas, dust, and large amounts of ice (hence the name). The associated nebula is of "butterfly" shape (bipolar morphology) and it is one of the best known examples of the brief transitional phase between two late evolutionary stages, asymptotic giant branch (AGB) and the subsequent planetary nebulae (PNe). For a three-solar-mass object like this one, this phase is believed to last only a few thousand years, the wink of an eye in the life of the star. Hence, objects like this one are very rare and Frosty Leo is one of the nearest and brightest among them.
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform with adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still-image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra-frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance, achieved with low computational cost and no additional side information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
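The abstract gives no implementation detail; as a rough illustration of the lossless multiresolution decomposition such schemes build on, here is a one-level integer Haar lifting step in Python. This is only a sketch: the paper's actual wavelet and its inter/intra-frame predictor are not reproduced.

```python
# One-level integer Haar lifting: splits a sequence into a low-pass
# (approximation) band and a high-pass (detail) band, losslessly.

def haar_lift_forward(x):
    """Predict odds from evens, then update evens; all integer arithmetic."""
    assert len(x) % 2 == 0
    evens, odds = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(evens, odds)]          # predict step
    approx = [e + d // 2 for e, d in zip(evens, detail)]   # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order; exact integer inverse."""
    evens = [a - d // 2 for a, d in zip(approx, detail)]
    odds = [e + d for e, d in zip(evens, detail)]
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out

signal = [10, 12, 9, 7, 14, 15, 8, 8]
lo, hi = haar_lift_forward(signal)
assert haar_lift_inverse(lo, hi) == signal  # lossless round trip
```

Applying the forward step recursively to the approximation band yields the multiresolution pyramid that lets one format (e.g. SDTV) be extracted from a higher-resolution bitstream.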
21 CFR 892.2030 - Medical image digitizer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image digitizer. 892.2030 Section 892.2030 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std.). [63 FR 23387, Apr. 29...
21 CFR 892.2040 - Medical image hardcopy device.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image hardcopy device. 892.2040 Section 892.2040 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture...
ImageJ: Image processing and analysis in Java
NASA Astrophysics Data System (ADS)
Rasband, W. S.
2012-06-01
ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.
2013-01-01
Background Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499
An RBF-based compression method for image-based relighting.
Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung
2006-04-01
In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
The Chandra Source Catalog : Google Earth Interface
NASA Astrophysics Data System (ADS)
Glotfelty, Kenny; McLaughlin, W.; Evans, I.; Evans, J.; Anderson, C. S.; Bonaventura, N. R.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, H.; Houck, J. C.; Karovska, M.; Kashyap, V. L.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Mossman, A. E.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. R.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
The Chandra Source Catalog (CSC) contains multi-resolution, exposure-corrected, background-subtracted, full-field images that are stored as individual FITS files and as three-color JPEG files. In this poster we discuss how we took these data and were able to, with relatively minimal effort, convert them for use with the Google Earth application in its ``Sky'' mode. We will highlight some of the challenges, which include converting the data to the required Mercator projection, reworking the 3-color algorithm for pipeline processing, and ways to reduce the data volume through re-binning, using color-maps, and special Keyhole Markup Language (KML) tags to only load images on demand. The result is a collection of some 11,000 3-color images that are available for all the individual observations in CSC Release 1. We have also made available all ˜4000 Field-of-View outlines (with per-chip regions), which turn out to be trivial to produce starting with a simple dmlist command. In the first week of release, approximately 40% of the images were accessed at least once through some 50,000 individual web hits, which served over 4 GB of data to roughly 750 users in 60+ countries. We will also highlight some future directions we are exploring, including real-time catalog access to individual source properties and eventual access to file-based products such as FITS images, spectra, and light-curves.
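The Mercator conversion mentioned above can be sketched as follows. This assumes the standard spherical Mercator formula for the declination axis; the function name is illustrative and not taken from the CSC pipeline.

```python
import math

def dec_to_mercator_y(dec_deg):
    """Map declination (degrees) to Mercator y using the standard spherical
    Mercator formula; right ascension maps linearly onto the x axis."""
    phi = math.radians(dec_deg)
    return math.log(math.tan(math.pi / 4.0 + phi / 2.0))

# y diverges toward the poles, which is why an all-sky Mercator mosaic
# must clip at some maximum |declination|.
assert abs(dec_to_mercator_y(0.0)) < 1e-12
assert dec_to_mercator_y(45.0) > dec_to_mercator_y(30.0)
```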
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
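For reference, the SSIM metric used above to scale the equivalent Gaussian MTF can be computed for a single window as follows. This is a minimal sketch with the standard SSIM constants; the NVESD model applies the metric to whole video frames, not toy pixel lists.

```python
def ssim(x, y, dynamic_range=255.0):
    """Single-window SSIM with the standard constants C1 = (0.01 L)^2 and
    C2 = (0.03 L)^2, where L is the dynamic range."""
    n = len(x)
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

window = [52, 55, 61, 59, 79, 61, 76, 61]
assert abs(ssim(window, window) - 1.0) < 1e-12    # identical windows -> 1
assert ssim(window, [p + 20 for p in window]) < 1.0  # degraded -> below 1
```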
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
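A minimal sketch of this transform-quantize-dequantize cycle, using a 1-D orthonormal DCT-II on one 8-sample block and a made-up quantization table (JPEG itself uses 2-D 8x8 blocks and standard tables):

```python
import math

N = 8  # block length

def dct(samples):
    """Orthonormal 1-D DCT-II of an 8-sample block."""
    out = []
    for k in range(N):
        s = sum(samples[n] * math.cos(math.pi * (n + 0.5) * k / N)
                for n in range(N))
        out.append((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)) * s)
    return out

def idct(coeffs):
    """Inverse of dct(); exact up to floating-point rounding."""
    out = []
    for n in range(N):
        s = 0.0
        for k in range(N):
            scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            s += scale * coeffs[k] * math.cos(math.pi * (n + 0.5) * k / N)
        out.append(s)
    return out

steps = [4, 8, 8, 16, 16, 32, 32, 64]   # hypothetical quantization table
block = [120, 121, 125, 130, 129, 127, 126, 124]
quantized = [round(c / q) for c, q in zip(dct(block), steps)]   # lossy step
restored = idct([v * q for v, q in zip(quantized, steps)])      # dequantize
# Quantization makes the round trip lossy but close for smooth data:
assert max(abs(a - b) for a, b in zip(block, restored)) < 32
```

The quantization distortion the abstract analyzes is exactly the gap between `block` and `restored` here: coarser steps shrink the entropy of `quantized` at the cost of larger reconstruction error.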
NASA Technical Reports Server (NTRS)
Stanboli, Alice
2013-01-01
Phxtelemproc is a C/C++ based telemetry processing program that processes SFDU telemetry packets from the Telemetry Data System (TDS). It generates Experiment Data Records (EDRs) for several instruments including surface stereo imager (SSI); robotic arm camera (RAC); robotic arm (RA); microscopy, electrochemistry, and conductivity analyzer (MECA); and the optical microscope (OM). It processes both uncompressed and compressed telemetry, and incorporates unique subroutines for the following compression algorithms: JPEG Arithmetic, JPEG Huffman, Rice, LUT3, RA, and SX4. This program was in the critical path for the daily command cycle of the Phoenix mission. The products generated by this program were part of the RA commanding process, as well as the SSI, RAC, OM, and MECA image and science analysis process. Its output products were used to advance science of the near polar regions of Mars, and were used to prove that water is found in abundance there. Phxtelemproc is part of the MIPL (Multi-mission Image Processing Laboratory) system. This software produced Level 1 products used to analyze images returned by in situ spacecraft. It ultimately assisted in operations, planning, commanding, science, and outreach.
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and its characteristics are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high-quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and are shown to provide a more than twofold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
[Development of a video image system for wireless capsule endoscopes based on DSP].
Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua
2008-02-01
A video image recorder to record video pictures for wireless capsule endoscopes was designed. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation speed of the DSP and reduce the executable code size. At the same time, proper addresses are assigned to memories of different speeds, and the memory structure is optimized. In addition, this system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, which results in stable and high performance.
Confidential storage and transmission of medical image data.
Norcen, R; Podesser, M; Pommer, A; Schmidt, H-P; Uhl, A
2003-05-01
We discuss computationally efficient techniques for confidential storage and transmission of medical image data. Two types of partial encryption techniques based on AES are proposed. The first encrypts a subset of bitplanes of plain image data whereas the second encrypts parts of the JPEG2000 bitstream. We find that encrypting between 20% and 50% of the visual data is sufficient to provide high confidentiality.
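A sketch of the first technique, bitplane-selective encryption. Note the hedge: a SHA-256 counter keystream stands in for AES here purely to keep the example dependency-free; the paper itself uses AES.

```python
import hashlib

# Partial encryption sketch: encrypt only the top k bitplanes of 8-bit
# pixels. SHA-256 in counter mode replaces AES only for self-containment;
# it is NOT the cipher proposed in the paper.

def keystream(key, length):
    """Derive a pseudo-random byte stream from key + counter blocks."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_bitplanes(pixels, key, k=2):
    """XOR the k most significant bitplanes with the keystream."""
    mask = ((1 << k) - 1) << (8 - k)          # e.g. k=2 -> 0b11000000
    ks = keystream(key, len(pixels))
    return bytes(p ^ (s & mask) for p, s in zip(pixels, ks))

pixels = bytes(range(0, 256, 16))
cipher = encrypt_bitplanes(pixels, b"secret", k=2)
# XOR with the same masked keystream decrypts:
assert encrypt_bitplanes(cipher, b"secret", k=2) == pixels
# The six low bitplanes are untouched -- that is the "partial" in partial
# encryption, and why only 20-50% of the data needs protecting:
assert all((c & 0x3F) == (p & 0x3F) for c, p in zip(cipher, pixels))
```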
2001-10-25
Table III. In spite of the same quality in the ROI, it was decided that the images in the cases where QF is 1.3, 1.5 or 2.0 are not good for diagnosis. Of...but (b) is not good for diagnosis by decision of the ultrasonographer. Results reveal that the wavelet transform achieves higher image quality compared
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
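As background, the baseline K-means codebook training that the paper's energy-based variant modifies can be sketched as follows. This is plain K-means on fixed 2-D vectors; the quadtree partitioning, LFD analysis, and energy function are not shown.

```python
import random

def nearest(codebook, v):
    """Index of the codeword closest to v in squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def train_codebook(vectors, k, iters=20, seed=0):
    """Plain K-means: assign vectors to codewords, recompute centroids."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[nearest(codebook, v)].append(v)
        codebook = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else codebook[i]
            for i, cl in enumerate(clusters)
        ]
    return codebook

data = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
cb = train_codebook(data, k=2)
# Each vector is then coded as the index of its nearest codeword:
indices = [nearest(cb, v) for v in data]
assert indices[0] == indices[1] == indices[2]
assert indices[3] == indices[4] == indices[5]
```

In VQ compression only the indices (and the codebook) are stored, so the rate is set by k and the block size rather than by the raw pixel depth.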
NASA Technical Reports Server (NTRS)
2002-01-01
Full-size images June 17, 2001 (2.0 MB JPEG) June 14, 2000 (2.1 MB JPEG) Light snowfall in the winter of 2000-01 led to a dry summer in the Pacific Northwest. The drought led to a conflict between farmers and fishing communities in the Klamath River Basin over water rights, and a series of forest fires in Washington, Oregon, and Northern California. The pair of images above, both acquired by the Enhanced Thematic Mapper Plus (ETM+) aboard the Landsat 7 satellite, show the snowpack on Mt. Shasta in June 2000 and 2001. On June 14, 2000, the snow extends to the lower slopes of the 4,317-meter (14,162-foot) volcano. At nearly the same time this year (June 17, 2001) the snow had retreated well above the tree-line. The drought in the region was categorized as moderate to severe by the National Oceanographic and Atmospheric Administration (NOAA), and the United States Geological Survey (USGS) reported that streamflow during June was only about 25 percent of the average. Above and to the left of Mt. Shasta is Lake Shastina, a reservoir that is noticeably lower in the 2001 image than in the 2000 image. Images courtesy USGS EROS Data Center and the Landsat 7 Science Team
Analyzing huge pathology images with open source software.
Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc
2013-06-06
Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster.
The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-06
... Resident. We will not accept group or family photographs; you must include a separate photograph for each... new digital image: The image file format must be in the Joint Photographic Experts Group (JPEG) format... Web site four to six weeks before the scheduled interviews with U.S. consular officers at overseas...
A Posteriori Restoration of Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Brown, R.; Boden, A. F.
1995-01-01
The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.
Pine Island Glacier, Antarctica, MISR Multi-angle Composite
Atmospheric Science Data Center
2013-12-17
... View Larger Image (JPEG) A large iceberg has finally separated from the calving front ... next due to stereo parallax. This parallax is used in MISR processing to retrieve cloud heights over snow and ice. Additionally, a plume ...
JPEG2000 Image Compression on Solar EUV Images
NASA Astrophysics Data System (ADS)
Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke
2017-01-01
For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
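The PSNR metric used alongside MSSIM is straightforward to compute; a minimal sketch for 8-bit data follows (toy pixel lists, not AIA images).

```python
import math

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB for equally sized 8-bit signals."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / n
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

orig = [50, 60, 70, 80]
light = [51, 60, 69, 80]   # small compression error
heavy = [40, 75, 55, 95]   # large compression error
assert psnr(orig, light) > psnr(orig, heavy)
assert psnr(orig, orig) == float("inf")
```

Unlike SSIM/MSSIM, PSNR depends only on the mean squared error, which is one reason the study compares the two metrics on solar EUV images.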
Capacity is the Wrong Paradigm
2002-01-01
short, steganography values detection over robustness, whereas watermarking values robustness over detection.) Hiding techniques for JPEG images ...word length of the code. D: If the algorithm is known, this method is trivially detectable if we are sending images (with no encryption). If we are...implications of the work of Chaitin and Kolmogorov on algorithmic complexity [5]. We have also concentrated on screen images in this paper and have not
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG concerning higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
2015-03-26
Fourier Analysis and Applications, vol. 14, pp. 838–858, 2008. 11. D. J. Cooke, “A discrete X-ray transform for chromotomographic hyperspectral imaging ... medical imaging, e.g., magnetic resonance imaging (MRI). Since the early 1980s, MRI has granted doctors the ability to distinguish between healthy tissue...i.e., at most K entries of x are nonzero. In many settings, this is a valid signal model; for example, JPEG2000 exploits the fact that natural images
Helioviewer.org: An Open-source Tool for Visualizing Solar Data
NASA Astrophysics Data System (ADS)
Hughitt, V. Keith; Ireland, J.; Schmiedel, P.; Dimitoglou, G.; Mueller, D.; Fleck, B.
2009-05-01
As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that simple tools exist for navigating these data quickly with a minimal amount of effort. By combining heterogeneous solar physics data types such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. Currently, Helioviewer enables users to browse the entire SOHO data archive, updated hourly, as well as feature/event data from eight different catalogs, including active region, flare, coronal mass ejection, and type II radio burst data. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc.), Helioviewer will offer a number of externally available application programming interfaces (APIs) to enable easy third-party use, adoption and extension. Future functionality will include: support for additional data sources including TRACE, SDO and STEREO, dynamic movie generation, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.
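The on-demand tiling described above amounts to simple index arithmetic; a sketch under the assumption of fixed 512-pixel square tiles (Helioviewer's actual tile size and request API may differ):

```python
TILE = 512  # assumed tile edge length in pixels

def tiles_for_viewport(x0, y0, x1, y1):
    """Return (col, row) indices of every tile overlapping the viewport
    [x0, x1) x [y0, y1) in image coordinates."""
    cols = range(x0 // TILE, (x1 - 1) // TILE + 1)
    rows = range(y0 // TILE, (y1 - 1) // TILE + 1)
    return [(c, r) for r in rows for c in cols]

# A 500 x 500 viewport at (500, 1000) straddles a 2 x 2 block of tiles,
# so only those four tiles need to be fetched, not the whole image:
assert tiles_for_viewport(500, 1000, 1000, 1500) == [
    (0, 1), (1, 1), (0, 2), (1, 2)
]
```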
Study and validation of tools interoperability in JPSEC
NASA Astrophysics Data System (ADS)
Conan, V.; Sadourny, Y.; Jean-Marie, K.; Chan, C.; Wee, S.; Apostolopoulos, J.
2005-08-01
Digital imagery is important in many applications today, and the security of digital imagery is important today and is likely to gain in importance in the near future. The emerging international standard ISO/IEC JPEG-2000 Security (JPSEC) is designed to provide security for digital imagery, and in particular digital imagery coded with the JPEG-2000 image coding standard. One of the primary goals of a standard is to ensure interoperability between creators and consumers produced by different manufacturers. The JPSEC standard, similar to the popular JPEG and MPEG family of standards, specifies only the bitstream syntax and the receiver's processing, and not how the bitstream is created or the details of how it is consumed. This paper examines the interoperability for the JPSEC standard, and presents an example JPSEC consumption process which can provide insights in the design of JPSEC consumers. Initial interoperability tests between different groups with independently created implementations of JPSEC creators and consumers have been successful in providing the JPSEC security services of confidentiality (via encryption) and authentication (via message authentication codes, or MACs). Further interoperability work is on-going.
Research on lossless compression of true color RGB image with low time and space complexity
NASA Astrophysics Data System (ADS)
Pan, ShuLin; Xie, ChengJun; Xu, Lin
2008-12-01
This paper eliminates correlated spatial and spectral redundancy with a DWT lifting scheme and reduces image complexity through an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, incorporating an enumerating DWT lifting scheme that fits images of any size via image renormalization. The algorithm codes and decodes the pixels of an image without backtracking; it supports LOCO-I and can also be applied to other coders/decoders. Simulation analysis indicates that the proposed method achieves high lossless image compression. Compared with Lossless-JPEG, PNG(Microsoft), PNG(Rene), PNG(Photoshop), PNG(Anix PicViewer), PNG(ACDSee), PNG(Ulead photo Explorer), JPEG 2000, PNG(KoDa Inc), SPIHT and JPEG-LS, the lossless compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10% respectively on 24 RGB images provided by KoDa Inc. Running from main memory on a Pentium IV (2.20 GHz CPU, 256 MB RAM), the proposed coder is about 21 times faster than SPIHT with an efficiency gain of roughly 166%, and the decoder is about 17 times faster than SPIHT with an efficiency gain of roughly 128%.
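The lossless DWT lifting idea underlying such schemes can be illustrated with a single integer Haar lifting step (a simpler transform than the paper's, chosen for brevity). The predict/update structure with integer arithmetic guarantees an exact inverse, which is what makes lifting suitable for lossless coding.

```python
def haar_lift_forward(x):
    """One level of integer Haar lifting: predict, then update.
    Perfectly invertible in integer arithmetic, as lossless coding
    requires. Assumes an even-length input sequence."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]            # predict step
    approx = [e + (d >> 1) for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order to recover the pixels."""
    even = [a - (d >> 1) for a, d in zip(approx, detail)]
    odd = [d + e for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

pixels = [10, 12, 11, 9, 200, 202, 198, 197]
a, d = haar_lift_forward(pixels)
assert haar_lift_inverse(a, d) == pixels  # lossless round trip
```

The small `detail` values that result from smooth image regions are exactly what an entropy coder such as Rice coding then compresses efficiently.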
A Steganographic Embedding Undetectable by JPEG Compatibility Steganalysis
2002-01-01
itd.nrl.navy.mil Abstract. Steganography and steganalysis of digital images is a cat-and-mouse game. In recent work, Fridrich, Goljan and Du introduced a method... Ever since Kurak and McHugh's seminal paper on LSB embeddings in images [10], various researchers have published work on either increasing the payload or improving the resistance to
On LSB Spatial Domain Steganography and Channel Capacity
2008-03-21
reveal the hidden information should not be taken as proof that the image is now clean. The survivability of LSB-type spatial domain steganography... the mindset that JPEG compressing an image is sufficient to destroy the steganography for spatial-domain LSB-type stego. We agree that JPEGing... modeling of 2-bit LSB steganography shows that theoretically there is a non-zero stego payload possible even though the image has been JPEGed. We wish to
Observer performance assessment of JPEG-compressed high-resolution chest images
NASA Astrophysics Data System (ADS)
Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David
1999-05-01
The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
2006-02-01
Recently, security monitoring cameras have been increasing rapidly in number. However, it is normally difficult to know when and where we are monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that takes privacy protection into account. We state requirements for monitoring systems in this framework and propose a possible implementation that satisfies the requirements. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image with the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer includes the objects that are unrecognized or invisible. We also introduce in this paper a so-called "special viewer" in order to decrypt and display the original objects. This special viewer can be used by limited users when necessary for crime investigation, etc. The special viewer allows us to choose objects to be decoded and displayed. Moreover, in this proposed system, real-time processing can be performed, since no future frame is needed to generate a bitstream.
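The encrypt-then-recover step for an object region can be sketched as follows. This is a toy symmetric round trip (a keystream from iterated SHA-256), not the paper's actual scheme; a real system would use an authenticated cipher such as AES-GCM, and the key and region values here are assumptions.

```python
from hashlib import sha256

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Illustrates only that the masked region stays recoverable by key
    holders; not suitable for production use."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

region = bytes([120, 121, 119, 118])   # pixels of a detected object
key = b"special-viewer-key"            # held only by authorized users
cipher = xor_stream(region, key)       # stored/watermarked into the frame
assert xor_stream(cipher, key) == region  # the special viewer recovers it
```

A normal JPEG viewer only ever sees the masked frame; decryption requires the key held by the special viewer.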
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1996-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit instead digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group) compression, 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships.
(5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
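The two-parameter inverse-Gaussian model of the optimized matrices reported above can be sketched as follows. The exact parameterization, and the sample amplitude/width values, are assumptions for illustration; the paper only states that amplitude and width are the fitted parameters.

```python
import math

def inv_gaussian_qmatrix(amplitude, width, n=8):
    """Illustrative n x n quantization matrix whose entries follow an
    inverse Gaussian of radial DCT frequency: the step size is smallest
    at DC and grows with frequency, which is the qualitative shape the
    fitted DCTune matrices are reported to have."""
    q = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            f = math.hypot(u, v)  # radial frequency index of (u, v)
            q[u][v] = amplitude * math.exp((f * f) / (2 * width * width))
    return q

q = inv_gaussian_qmatrix(amplitude=4.0, width=6.0)
print(round(q[0][0], 2), round(q[7][7], 2))  # DC step is the smallest
```

Under such a model, lowering `amplitude` and raising `width` together (as the abstract reports for 150 vs. 300 dpi) flattens the matrix and preserves more high-frequency detail.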
A multicenter observer performance study of 3D JPEG2000 compression of thin-slice CT.
Erickson, Bradley J; Krupinski, Elizabeth; Andriole, Katherine P
2010-10-01
The goal of this study was to determine the compression level at which 3D JPEG2000 compression of thin-slice CTs of the chest and abdomen-pelvis becomes visually perceptible. A secondary goal was to determine if residents in training and non-physicians are substantially different from experienced radiologists in their perception of compression-related changes. This study used multidetector computed tomography 3D datasets with 0.625-1-mm-thick slices of standard chest, abdomen, or pelvis, clipped to 12 bits. The Kakadu v5.2 JPEG2000 compression algorithm was used to compress and decompress the 80 examinations, creating four sets of images: lossless, 1.5 bpp (8:1), 1 bpp (12:1), and 0.75 bpp (16:1). Two randomly selected slices from each examination were shown to observers using a flicker mode paradigm in which observers rapidly toggled between two images, the original and a compressed version, with the task of deciding whether differences between them could be detected. Six staff radiologists, four residents, and six PhDs experienced in medical imaging (from three institutions) served as observers. Overall, 77.46% of observers detected differences at 8:1, 94.75% at 12:1, and 98.59% at 16:1 compression levels. Across all compression levels, the staff radiologists noted differences 64.70% of the time, the residents detected differences 71.91% of the time, and the PhDs detected differences 69.95% of the time. Even mild compression is perceptible with current technology. The ability to detect differences does not equate to diagnostic differences, although perception of compression artifacts could affect diagnostic decision making and diagnostic workflow.
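The bit-rate/ratio pairs quoted above follow directly from the 12-bit source depth; a one-line check makes the correspondence explicit.

```python
def compression_ratio(bits_per_pixel, original_depth=12):
    """Ratio of original bit depth to coded rate: for 12-bit CT data,
    1.5 bpp gives 8:1, 1 bpp gives 12:1, and 0.75 bpp gives 16:1,
    matching the study's four image sets (lossless aside)."""
    return original_depth / bits_per_pixel

for bpp in (1.5, 1.0, 0.75):
    print(f"{bpp} bpp -> {compression_ratio(bpp):g}:1")
```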
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
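Of the three classifiers compared above, the minimum-distance rule is the simplest to sketch: each pixel is assigned to the class whose training-site spectral mean is nearest. The class names and mean vectors below are hypothetical, not from the paper.

```python
import math

def minimum_distance_classify(pixel, class_means):
    """Minimum-distance classifier: assign the pixel (a band vector)
    to the class whose mean spectrum is nearest in Euclidean distance."""
    return min(class_means, key=lambda c: math.dist(pixel, class_means[c]))

# Hypothetical 3-band class means estimated from training sites.
means = {"water": [20, 15, 10], "vegetation": [40, 80, 30], "soil": [90, 70, 60]}
print(minimum_distance_classify([42, 78, 28], means))  # vegetation
```

JPEG compression perturbs the pixel vectors fed to such a rule, which is why pixel-to-pixel classification detail degrades even when the overall class map keeps its appearance.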
Limited distortion in LSB steganography
NASA Astrophysics Data System (ADS)
Kim, Younhee; Duric, Zoran; Richards, Dana
2006-02-01
It is well known that all information hiding methods that modify the least significant bits introduce distortions into the cover objects. Those distortions have been utilized by steganalysis algorithms to detect that the objects had been modified. It has been proposed that only coefficients whose modification does not introduce large distortions should be used for embedding. In this paper we propose an efficient algorithm for information hiding in the LSBs of JPEG coefficients. Our algorithm uses parity coding to choose the coefficients whose modifications introduce minimal additional distortion. We derive the expected value of the additional distortion as a function of the message length and the probability distribution of the JPEG quantization errors of cover images. Our experiments show close agreement between the theoretical prediction and the actual additional distortion.
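The parity-coding idea can be sketched as follows: one message bit is carried by the LSB parity of a whole group of coefficients, so at most one coefficient per group changes, and the embedder picks the cheapest one. The per-coefficient cost list here stands in for the quantization-error-based costs the paper derives; its values are assumptions.

```python
def embed_bit_parity(group, bit, errors):
    """Parity coding over a group of JPEG coefficients: if the group's
    LSB parity already equals the message bit, nothing changes;
    otherwise flip the LSB of the coefficient whose (assumed,
    precomputed) distortion cost in `errors` is smallest."""
    parity = sum(c & 1 for c in group) % 2
    if parity == bit:
        return list(group)  # zero additional distortion
    idx = min(range(len(group)), key=lambda i: errors[i])
    out = list(group)
    out[idx] ^= 1  # flip the cheapest LSB
    return out

group = [12, 7, -3, 18]
stego = embed_bit_parity(group, bit=1, errors=[0.4, 0.1, 0.3, 0.2])
print(stego)  # only the cheapest coefficient changed
assert sum(c & 1 for c in stego) % 2 == 1
```

Half the time the parity already matches and no coefficient is touched at all, which is where the reduction in expected distortion comes from.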
VIMOS - a Cosmology Machine for the VLT
NASA Astrophysics Data System (ADS)
2002-03-01
Successful Test Observations With Powerful New Instrument at Paranal [1] Summary One of the most fundamental tasks of modern astrophysics is the study of the evolution of the Universe. This is a daunting undertaking that requires extensive observations of large samples of objects in order to produce reasonably detailed maps of the distribution of galaxies in the Universe and to perform statistical analysis. Much effort is now being put into mapping the relatively nearby space and thereby learning how the Universe looks today. But to study its evolution, we must compare this with how it looked when it was still young. This is possible because astronomers can "look back in time" by studying remote objects - the larger their distance, the longer the light we now observe has been underway to us, and the longer is thus the corresponding "look-back time". This may sound easy, but it is not. Very distant objects are very dim and can only be observed with large telescopes. Looking at one object at a time would make such a study extremely time-consuming and, in practical terms, impossible. To do it anyhow, we need the largest possible telescope with a highly specialised, exceedingly sensitive instrument that is able to observe a very large number of (faint) objects in the remote universe simultaneously. The VLT VIsible Multi-Object Spectrograph (VIMOS) is such an instrument. It can obtain many hundreds of spectra of individual galaxies in the shortest possible time; in fact, in one special observing mode, up to 6400 spectra of the galaxies in a remote cluster during a single exposure, augmenting the data gathering power of the telescope by the same proportion. This marvellous science machine has just been installed at the 8.2-m MELIPAL telescope, the third unit of the Very Large Telescope (VLT) at the ESO Paranal Observatory. A main task will be to carry out 3-dimensional mapping of the distant Universe from which we can learn its large-scale structure.
"First light" was achieved on February 26, 2002, and a first series of test observations has successfully demonstrated the huge potential of this amazing facility. Much work on VIMOS is still ahead during the coming months in order to put into full operation and fine-tune the most efficient "galaxy cruncher" in the world. VIMOS is the outcome of a fruitful collaboration between ESO and several research institutes in France and Italy, under the responsibility of the Laboratoire d'Astrophysique de Marseille (CNRS, France). The other partners in the "VIRMOS Consortium" are the Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées, and Observatoire de Haute-Provence in France, and Istituto di Radioastronomia (Bologna), Istituto di Fisica Cosmica e Tecnologie Relative (Milano), Osservatorio Astronomico di Bologna, Osservatorio Astronomico di Brera (Milano) and Osservatorio Astronomico di Capodimonte (Naples) in Italy. PR Photo 09a/02 : VIMOS image of the Antennae Galaxies (centre). PR Photo 09b/02 : First VIMOS Multi-Object Spectrum (full field) PR Photo 09c/02 : The VIMOS instrument on VLT MELIPAL PR Photo 09d/02 : The VIMOS team at "First Light". 
PR Photo 09e/02 : "First Light" image of NGC 5364 PR Photo 09f/02 : Image of the Crab Nebula PR Photo 09g/02 : Image of spiral galaxy NGC 2613 PR Photo 09h/02 : Image of spiral galaxy Messier 100 PR Photo 09i/02 : Image of cluster of galaxies ACO 3341 PR Photo 09j/02 : Image of cluster of galaxies MS 1008.1-1224 PR Photo 09k/02 : Mask design for MOS exposure PR Photo 09l/02 : First VIMOS Multi-Object Spectrum (detail) PR Photo 09m/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" PR Photo 09n/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" (detail) Science with VIMOS ESO PR Photo 09a/02 [Preview - JPEG: 400 x 469 pix - 152k] [Normal - JPEG: 800 x 938 pix - 408k] ESO PR Photo 09b/02 [Preview - JPEG: 400 x 511 pix - 304k] [Normal - JPEG: 800 x 1022 pix - 728k] Caption : PR Photo 09a/02 : One of the first images from the new VIMOS facility, obtained right after the moment of "first light" on February 26, 2002. It shows the famous "Antennae Galaxies" (NGC 4038/39), the result of a recent collision between two galaxies. As an immediate outcome of this dramatic event, stars are born within massive complexes that appear blue in this composite photo, based on exposures through green, orange and red optical filters. PR Photo 09b/02 : Some of the first spectra of distant galaxies obtained with VIMOS in Multi-Object-Spectroscopy (MOS) mode. More than 220 galaxies were observed simultaneously, an unprecedented efficiency for such a "deep" exposure, reaching so far out in space. These spectra make it possible to obtain the redshift, a measure of distance, as well as to assess the physical status of the gas and stars in each of these galaxies. A part of this photo is enlarged as PR Photo 09l/02. Technical information about these photos is available below. Other "First Light" images from VIMOS are shown in the photo gallery below.
The next in the long series of front-line instruments to be installed on the ESO Very Large Telescope (VLT), VIMOS (and its complementary, infrared-sensitive counterpart NIRMOS, now in the design stage) will allow mapping of the distribution of galaxies, clusters, and quasars during a time interval spanning more than 90% of the age of the universe. It will let us look back in time to a moment only ~1.5 billion years after the Big Bang (corresponding to a redshift of about 5). Like archaeologists, astronomers can then dig deep into those early ages when the first building blocks of galaxies were still in the process of formation. They will be able to determine when most of the star formation occurred in the universe and how it evolved with time. They will analyse how the galaxies cluster in space, and how this distribution varies with time. Such observations will put important constraints on evolution models, in particular on the average density of matter in the Universe. Mapping the distant universe requires determining the distances of the enormous numbers of remote galaxies seen in deep pictures of the sky, adding depth - the third, indispensable dimension - to the photo. VIMOS offers this capability, and very efficiently. Multi-object spectroscopy is a technique by which many objects are observed simultaneously. VIMOS can observe the spectra of about 1000 galaxies in one exposure, from which redshifts, hence distances, can be measured [2]. Being able to observe two galaxies at once is equivalent to having a telescope twice the size of a VLT Unit Telescope. VIMOS thus effectively "increases" the size of the VLT hundreds of times. From these spectra, the stellar and gaseous content and internal velocities of galaxies can be inferred, forming the base for detailed physical studies. At present the distances of only a few thousand galaxies and quasars have been measured in the distant universe.
VIMOS aims at observing 100 times more, over one hundred thousand of those remote objects. This will form a solid base for unprecedented and detailed statistical studies of the population of galaxies and quasars in the very early universe. The international VIRMOS Consortium VIMOS is one of two major astronomical instruments to be delivered by the VIRMOS Consortium of French and Italian institutes under a contract signed in the summer of 1997 between the European Southern Observatory (ESO) and the French Centre National de la Recherche Scientifique (CNRS). The participating institutes are: in France: * Laboratoire d'Astrophysique de Marseille (LAM), Observatoire Marseille-Provence (project responsible) * Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées * Observatoire de Haute-Provence (OHP) in Italy: * Istituto di Radioastronomia (IRA-CNR) (Bologna) * Istituto di Fisica Cosmica e Tecnologie Relative (IFCTR) (Milano) * Osservatorio Astronomico di Capodimonte (OAC) (Naples) * Osservatorio Astronomico di Bologna (OABo) * Osservatorio Astronomico di Brera (OABr) (Milano) VIMOS at the VLT: a unique and powerful combination ESO PR Photo 09c/02 [Preview - JPEG: 501 x 400 pix - 312k] [Normal - JPEG: 1002 x 800 pix - 840k] Caption : PR Photo 09c/02 shows the new VIMOS instrument on one of the Nasmyth platforms of the 8.2-m VLT MELIPAL telescope at Paranal. VIMOS is installed on the Nasmyth "Focus B" platform of the 8.2-m VLT MELIPAL telescope, cf. PR Photo 09c/02. It may be compared to four multi-mode instruments of the FORS-type (cf. ESO PR 14/98), joined in one stiff structure. The construction of VIMOS has involved the production of large and complex optical elements and their integration in more than 30 remotely controlled, finely moving functions in the instrument. In the configuration employed for the "first light", VIMOS made use of two of its four channels.
The two others will be put into operation in the next commissioning period during the coming months. However, VIMOS is already now the most efficient multi-object spectrograph in the world, with an equivalent (accumulated) slit length of up to 70 arcmin on the sky. VIMOS has a field-of-view as large as half of the full moon (14 x 16 arcmin² for the four quadrants), the largest sky field to be imaged so far by the VLT. It has excellent sensitivity in the blue region of the spectrum (about 60% more efficient than any other similar instruments in the ultraviolet band), and it is also very sensitive in all other visible spectral regions, all the way to the red limit. But the absolutely unique feature of VIMOS is its capability to take large numbers of spectra simultaneously, leading to exceedingly efficient use of the observing time. Up to about 1000 objects can be observed in a single exposure in multi-slit mode. And no less than 6400 spectra can be recorded with the Integral Field Unit, in which a closely packed fibre optics bundle can simultaneously observe a continuous sky area measuring no less than 56 x 56 arcsec². A dedicated machine, the Mask Manufacturing Unit (MMU), cuts the slits for the entrance apertures of the spectrograph. The laser is capable of cutting 200 slits in less than 15 minutes. This facility was put into operation at Paranal by the VIRMOS Consortium already in August 2000 and has since been extensively used for observations with the FORS2 instrument; more details are available in ESO PR 19/99. Fast start-up of VIMOS at Paranal ESO PR Photo 09d/02 [Preview - JPEG: 473 x 400 pix - 280k] [Normal - JPEG: 946 x 1209 pix - 728k] ESO PR Photo 09e/02 [Preview - JPEG: 400 x 438 pix - 176k] [Normal - JPEG: 800 x 876 pix - 664k] Caption : PR Photo 09d/02 : The VIRMOS team in the MELIPAL control room, moments after "First Light" on February 26, 2002.
From left to right: Oreste Caputi, Marco Scodeggio, Giovanni Sciarretta, Olivier Le Fevre, Sylvie Brau-Nogue, Christian Lucuix, Bianca Garilli, Markus Kissler-Patig (in front), Xavier Reyes, Michel Saisse, Luc Arnold and Guido Mancini. PR Photo 09e/02 : The spiral galaxy NGC 5364 was the first object to be observed by VIMOS. This false-colour near-infrared, raw "First Light" photo shows the extensive spiral arms. Technical information about this photo is available below. VIMOS was shipped from Observatoire de Haute-Provence (France) at the end of 2001, and reassembled at Paranal during a first period in January 2002. From mid-February, the instrument was made ready for installation on the VLT MELIPAL telescope; this happened on February 24, 2002. VIMOS saw "First Light" just two days later, on February 26, 2002, cf. PR Photo 09e/02. During the same night, a number of excellent images were obtained of various objects, demonstrating the fine capabilities of the instrument in the "direct imaging" mode. The first spectra were successfully taken during the night of March 2 - 3, 2002. The slit masks that were used on this occasion were prepared with dedicated software that also optimizes the object selection, cf. PR Photo 09k/02, and were then cut with the laser machine. From the first try on, the masks have been well aligned on the sky objects. The first observations with large numbers of spectra were obtained shortly thereafter. First accomplishments Images of nearby galaxies, clusters of galaxies, and distant galaxy fields were among the first to be obtained, using the VIMOS imaging mode and demonstrating the excellent efficiency of the instrument; various examples are shown below. The first observations of multi-spectra were performed in a selected sky field in which many faint galaxies are present; it is known as the "VIRMOS-VLT Deep Survey Field at 1000+02".
Thanks to the excellent sensitivity of VIMOS, the spectra of galaxies as faint as (red) magnitude R = 23 (i.e. over 6 million times fainter than what can be perceived with the unaided eye) are visible on exposures lasting only 15 minutes. Some of the first observations with the Integral Field Unit were made of the core of the famous Antennae Galaxies (NGC 4038/39). They will form the basis for a detailed map of the strong emission produced by the current, dramatic collision of the two galaxies. First Images and Spectra from VIMOS - a Gallery The following photos are from a collection of the first images and spectra obtained with VIMOS. See also PR Photos 09a/02, 09b/02 and 09e/02, reproduced above. Technical information about all of them is available below. ESO PR Photo 09f/02 [Preview - JPEG: 400 x 469 pix - 224k] [Normal - JPEG: 800 x 937 pix - 544k] [HiRes - JPEG: 2001 x 2343 pix - 3.6M] Caption : PR Photo 09f/02 : The Crab Nebula (Messier 1), as observed by VIMOS. This well-known object is the remnant of a stellar explosion in the year 1054. ESO PR Photo 09g/02 [Preview - JPEG: 478 x 400 pix - 184k] [Normal - JPEG: 956 x 1209 pix - 416k] [HiRes - JPEG: 1801 x 1507 pix - 1.4M] Caption : PR Photo 09g/02 : VIMOS photo of NGC 2613, a spiral galaxy that resembles our own Milky Way. ESO PR Photo 09h/02 [Preview - JPEG: 400 x 469 pix - 152k] [Normal - JPEG: 800 x 938 pix - 440k] [HiRes - JPEG: 1800 x 2100 pix - 2.0M] Caption : PR Photo 09h/02 : Messier 100 is one of the largest and brightest spiral galaxies in the sky. ESO PR Photo 09i/02 [Preview - JPEG: 400 x 405 pix - 144k] [Normal - JPEG: 800 x 810 pix - 312k] Caption : PR Photo 09i/02 : The cluster of galaxies ACO 3341 is located at a distance of about 300 million light-years (redshift z = 0.037), i.e., comparatively nearby in cosmological terms.
It contains a large number of galaxies of different size and brightness that are bound together by gravity. ESO PR Photo 09j/02 [Preview - JPEG: 447 x 400 pix - 200k] [Normal - JPEG: 893 x 800 pix - 472k] [HiRes - JPEG: 1562 x 1399 pix - 1.1M] Caption : PR Photo 09j/02 : The distant cluster of galaxies MS 1008.1-1224 is some 3 billion light-years distant (redshift z = 0.301). The galaxies in this cluster - that we observe as they were 3 billion years ago - are different from galaxies in our neighborhood; their stellar populations, on the average, are younger. ESO PR Photo 09k/02 [Preview - JPEG: 400 x 455 pix - 280k] [Normal - JPEG: 800 x 909 pix - 696k] Caption : PR Photo 09k/02 : Design of a Mask for Multi-Object Spectroscopy (MOS) observations with VIMOS. The mask serves to block, as far as possible, unwanted background light from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere). During the set-up process for multi-object observations, the VIMOS software optimizes the position of the individual slits in the mask (one for each object for which a spectrum will be obtained) before these are cut. The photo shows an example of this fitting process, with the slit contours superposed on a short pre-exposure of the sky field to be observed. ESO PR Photo 09l/02 [Preview - JPEG: 470 x 400 pix - 200k] [Normal - JPEG: 939 x 800 pix - 464k] Caption : PR Photo 09l/02 : First Multi-Object Spectroscopy (MOS) observations with VIMOS; enlargement of a small part of the field shown in PR Photo 09b/02. The light from each galaxy passes through the dedicated slit in the mask (see PR Photo 09k/02) and produces a spectrum on the detector. Each vertical rectangle contains the spectrum of one galaxy that is located several billion light-years away.
The horizontal lines are the strong emission from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere), while the vertical traces are the spectral signatures of the galaxies. The full field contains the spectra of over 220 galaxies that were observed simultaneously, illustrating the great efficiency of this technique. Later, about 1000 spectra will be obtained in one exposure. ESO PR Photo 09m/02 [Preview - JPEG: 470 x 400 pix - 264k] [Normal - JPEG: 939 x 800 pix - 720k] Caption : PR Photo 09m/02 was obtained with the Integral Field Spectroscopy mode of VIMOS. In one single exposure, more than 3000 spectra were taken of the central area of the Antennae Galaxies (PR Photo 09a/02). ESO PR Photo 09n/02 [Preview - JPEG: 532 x 400 pix - 320k] [Normal - JPEG: 1063 x 800 pix - 864k] Caption : PR Photo 09n/02 : An enlargement of a small area in PR Photo 09m/02. This observation allows mapping of the distribution of elements like hydrogen (H) and sulphur (S II), for which the signatures are clearly identified in these spectra. The wavelength increases towards the top (arrow). Notes [1]: This is a joint Press Release of ESO, Centre National de la Recherche Scientifique (CNRS) in France, and Consiglio Nazionale delle Ricerche (CNR) and Istituto Nazionale di Astrofisica (INAF) in Italy. [2]: In astronomy, the redshift denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy gives a direct estimate of the apparent recession velocity as caused by the universal expansion. Since the expansion rate increases with distance, the velocity is itself a function (the Hubble relation) of the distance to the object. Technical information about the photos PR Photo 09a/02 : Composite VRI image of NGC 4038/39, obtained on 26 February 2002, in a bright sky (full moon).
Individual exposures of 60 sec each; image quality 0.6 arcsec FWHM; the field measures 3.5 x 3.5 arcmin². North is up and East is left. PR Photo 09b/02 : MOS spectra obtained with two quadrants totalling 221 slits + 6 reference objects (stars placed in square holes to ensure a correct alignment). Exposure time 15 min; LR(red) grism. This is the raw (unprocessed) image of the spectra. PR Photo 09e/02 : A 60 sec i exposure of NGC 5364 on February 26, 2002; image quality 0.6 arcsec FWHM; full moon; 3.5 x 3.5 arcmin²; North is up and East is left. PR Photo 09f/02 : Composite VRI image of Messier 1, obtained on March 4, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin²; North is up and East is left. PR Photo 09g/02 : Composite VRI image of NGC 2613, obtained on February 28, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin²; North is up and East is left. PR Photo 09h/02 : Composite VRI image of Messier 100, obtained on March 3, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin²; North is up and East is left. PR Photo 09i/02 : R-band image of galaxy cluster ACO 3341, obtained on March 4, 2002. Exposure 300 sec; image quality 0.5 arcsec FWHM; field 7 x 7 arcmin²; North is up and East is left. PR Photo 09j/02 : Composite VRI image of the distant cluster of galaxies MS 1008.1-1224. The individual exposures lasted 300 sec; image quality 0.8 arcsec FWHM; field 5 x 3 arcmin²; North is to the right and East is up. PR Photo 09k/02 : Mask design made with the VMMPS tool, overlaying a pre-image. The selected objects are seen at the centre of the yellow squares, where a 1 arcsec slit is cut along the spatial X-axis. The rectangles in white represent the dispersion in wavelength of the spectra along the Y-axis. Masks are cut with the Mask Manufacturing Unit (MMU) built by the Virmos Consortium.
PR Photo 09l/02 : Enlargement of a small area of PR Photo 09b/02. PR Photo 09m/02 : Spectra of the central area of NGC 4038/39, obtained with the Integral Field Unit on February 26, 2002. The exposure lasted 5 min and was made with the low resolution red grating. PR Photo 09n/02 : Zoom-in on a small area of PR Photo 09m/02. The strong emission lines of hydrogen (H-alpha) and ionized sulphur (S II) are seen.
JPEG XS call for proposals subjective evaluations
NASA Astrophysics Data System (ADS)
McNally, David; Bruylants, Tim; Willème, Alexandre; Ebrahimi, Touradj; Schelkens, Peter; Macq, Benoit
2017-09-01
In March 2016 the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, issued a call for proposals soliciting compression technologies for a low-latency, lightweight and visually transparent video compression scheme. Within the JPEG family of standards, this scheme was designated JPEG XS. The subjective evaluation of visually lossless compressed video sequences at high resolutions and bit depths poses particular challenges. This paper describes the adopted procedures, the subjective evaluation setup and the evaluation process, and summarizes the results obtained in the context of the JPEG XS standardization process.
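Acceptance in visually lossless evaluations of this kind typically hinges on whether observers can distinguish the coded sequence from the original above chance. The sketch below is a toy illustration of that idea only; the two-alternative tally and the 75% detection threshold are illustrative assumptions, not the procedure mandated by the JPEG XS evaluations.

```python
# Toy tally for a two-alternative forced-choice (flicker-style) test.
# The 0.75 correct-detection threshold is an illustrative assumption.

def detection_rate(votes):
    """Fraction of trials in which observers correctly spotted the coded clip."""
    return sum(votes) / len(votes)

def visually_transparent(votes, threshold=0.75):
    # Below the threshold, observers perform close to guessing:
    # the codec is judged transparent under this toy criterion.
    return detection_rate(votes) < threshold

# 20 trials, 11 correct detections -> 55% detection rate.
votes = [1] * 11 + [0] * 9
print(detection_rate(votes))        # 0.55
print(visually_transparent(votes))  # True
```

A real campaign aggregates such tallies per sequence and bit rate, with statistical confidence intervals rather than a single cut-off.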
NASA Astrophysics Data System (ADS)
2001-04-01
A Window towards the Distant Universe Summary The Osservatorio Astronomico Capodimonte Deep Field (OACDF) is a multi-colour imaging survey project that is opening a new window towards the distant universe. It is conducted with the ESO Wide Field Imager (WFI), a 67-million pixel advanced camera attached to the MPG/ESO 2.2-m telescope at the La Silla Observatory (Chile). As a pilot project at the Osservatorio Astronomico di Capodimonte (OAC) [1], the OACDF aims at providing a large photometric database for deep extragalactic studies, with important by-products for galactic and planetary research. Moreover, it also serves to gather experience in the proper and efficient handling of very large data sets, preparing for the arrival of the VLT Survey Telescope (VST) with the 1 x 1 degree 2 OmegaCam facility. PR Photo 15a/01 : Colour composite of the OACDF2 field. PR Photo 15b/01 : Interacting galaxies in the OACDF2 field. PR Photo 15c/01 : Spiral galaxy and nebulous object in the OACDF2 field. PR Photo 15d/01 : A galaxy cluster in the OACDF2 field. PR Photo 15e/01 : Another galaxy cluster in the OACDF2 field. PR Photo 15f/01 : An elliptical galaxy in the OACDF2 field. The Capodimonte Deep Field ESO PR Photo 15a/01 [Preview - JPEG: 400 x 426 pix - 73k] [Normal - JPEG: 800 x 851 pix - 736k] [Hi-Res - JPEG: 3000 x 3190 pix - 7.3M] Caption : This three-colour image of about 1/4 of the Capodimonte Deep Field (OACDF) was obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. It covers "OACDF Subfield no. 2 (OACDF2)" with an area of about 35 x 32 arcmin 2 (about the size of the full moon), and it is one of the "deepest" wide-field images ever obtained. Technical information about this photo is available below. With the comparatively few large telescopes available in the world, it is not possible to study the Universe to its outermost limits in all directions.
Instead, astronomers try to obtain the most detailed information possible in selected viewing directions, assuming that what they find there is representative of the Universe as a whole. This is the philosophy behind the so-called "deep-field" projects that subject small areas of the sky to intensive observations with different telescopes and methods. The astronomers determine the properties of the objects seen, as well as their distances, and are then able to obtain a map of the space within the corresponding cone-of-view (the "pencil beam"). Recent successful examples of this technique are the "Hubble Deep Field" (cf. ESO PR Photo 26/98 ) and the "Chandra Deep Field" ( ESO PR 05/01 ). In this context, the Capodimonte Deep Field (OACDF) is a pilot research project, now underway at the Osservatorio Astronomico di Capodimonte (OAC) in Napoli (Italy). It is a multi-colour imaging survey performed with the Wide Field Imager (WFI), a 67-million pixel (8k x 8k) digital camera that is installed at the 2.2-m MPG/ESO Telescope at ESO's La Silla Observatory in Chile. The scientific goal of the OACDF is to provide an important database for subsequent extragalactic, galactic and planetary studies. It will allow the astronomers at OAC - who are involved in the VLT Survey Telescope (VST) project - to gain insight into the processing (and use) of the large data flow from a camera similar to, but four times smaller than the OmegaCam wide-field camera that will be installed at the VST. The field selection for the OACDF was based on the following criteria: * There must be no stars brighter than about 9th magnitude in the field, in order to avoid saturation of the CCD detector and effects from straylight in the telescope and camera.
No Solar System planets should be near the field during the observations; * It must be located far from the Milky Way plane (at high galactic latitude) in order to reduce the number of galactic stars seen in this direction; * It must be located in the southern sky in order to optimize observing conditions (in particular, the altitude of the field above the horizon), as seen from the La Silla and Paranal sites; * There should be little interstellar material in this direction that may obscure the view towards the distant Universe; * Observations in this field should have been made with the Hubble Space Telescope (HST) that may serve for comparison and calibration purposes. Based on these criteria, the astronomers selected a field measuring about 1 x 1 deg 2 in the southern constellation of Corvus (The Raven). This is now known as the Capodimonte Deep Field (OACDF) . The above photo ( PR Photo 15a/01 ) covers one-quarter of the full field (Subfield No. 2 - OACDF2) - some of the objects seen in this area are shown below in more detail. More than 35,000 objects have been found in this area; the faintest are nearly 100 million times fainter than what can be perceived with the unaided eye in the dark sky. Selected objects in the Capodimonte Deep Field ESO PR Photo 15b/01 [Preview - JPEG: 400 x 435 pix - 60k] [Normal - JPEG: 800 x 870 pix - 738k] [Hi-Res - JPEG: 3000 x 3261 pix - 5.1M] Caption : Enlargement of the interacting galaxies that are seen in the upper left corner of the OACDF2 field shown in PR Photo 15a/01 . The enlargement covers 1250 x 1130 WFI pixels (1 pixel = 0.24 arcsec), or about 5.0 x 4.5 arcmin 2 in the sky. The lower spiral is itself an interacting double. ESO PR Photo 15c/01 [Preview - JPEG: 557 x 400 pix - 93k] [Normal - JPEG: 1113 x 800 pix - 937k] [Hi-Res - JPEG: 3000 x 2156 pix - 4.0M] Caption : Enlargement of a spiral galaxy and a nebulous object in this area.
The field shown covers 1250 x 750 pixels, or about 5 x 3 arcmin 2 in the sky. Note the very red objects next to the two bright stars in the lower-right corner. The colours of these objects are consistent with those of spheroidal galaxies at intermediate distances (redshifts). ESO PR Photo 15d/01 [Preview - JPEG: 400 x 530 pix - 68k] [Normal - JPEG: 800 x 1060 pix - 870k] [Hi-Res - JPEG: 2768 x 3668 pix - 6.2M] Caption : A further enlargement of a galaxy cluster of which most members are located in the north-east quadrant (upper left) and have a reddish colour. The nebulous object to the upper left is a dwarf galaxy of spheroidal shape. The red object, located near the centre of the field and resembling a double star, is very likely a gravitational lens [2]. Some of the very red, point-like objects in the field may be distant quasars, very-low mass stars or, possibly, relatively nearby brown dwarf stars. The field shown covers 1380 x 1630 pixels, or 5.5 x 6.5 arcmin 2. ESO PR Photo 15e/01 [Preview - JPEG: 400 x 418 pix - 56k] [Normal - JPEG: 800 x 835 pix - 700k] [Hi-Res - JPEG: 3000 x 3131 pix - 5.0M] Caption : Enlargement of a moderately distant galaxy cluster in the south-east quadrant (lower left) of the OACDF2 field. The field measures 1380 x 1260 pixels, or about 5.5 x 5.0 arcmin 2 in the sky. ESO PR Photo 15f/01 [Preview - JPEG: 449 x 400 pix - 68k] [Normal - JPEG: 897 x 800 pix - 799k] [Hi-Res - JPEG: 3000 x 2675 pix - 5.6M] Caption : Enlargement of the elliptical galaxy that is located to the west (right) in the OACDF2 field. The numerous tiny objects surrounding the galaxy may be globular clusters. The fuzzy object on the right edge of the field may be a dwarf spheroidal galaxy. The size of the field is about 6 x 5 arcmin 2.
Technical Information about the OACDF Survey The observations for the OACDF project were performed in three different ESO periods (18-22 April 1999, 7-12 March 2000 and 26-30 April 2000). Some 100 Gbyte of raw data were collected during each of the three observing runs. The first OACDF run was done just after the commissioning of the ESO-WFI. The observational strategy was to perform a 1 x 1 deg 2 short-exposure ("shallow") survey and then a 0.5 x 1 deg 2 "deep" survey. The shallow survey was performed in the B, V, R and I broad-band filters. Four adjacent 30 x 30 arcmin 2 fields, together covering a 1 x 1 deg 2 field in the sky, were observed for the shallow survey. Two of these fields were chosen for the 0.5 x 1 deg 2 deep survey; OACDF2 shown above is one of these. The deep survey was performed in the B, V, R broad-bands and in other intermediate-band filters. The OACDF data are fully reduced and the catalogue extraction has started. A two-processor (500 MHz each) DS20 machine with 100 Gbyte of hard disk, specifically acquired at the OAC for WFI data reduction, was used. The detailed guidelines of the data reduction, as well as the catalogue extraction, are reported in a research paper that will appear in the European research journal Astronomy & Astrophysics. Notes [1]: The team members are: Massimo Capaccioli, Juan M. Alcalá, Roberto Silvotti, Magda Arnaboldi, Vincenzo Ripepi, Emanuella Puddu, Massimo Dall'Ora, Giuseppe Longo and Roberto Scaramella. [2]: This is a preliminary result by Juan Alcalá, Massimo Capaccioli, Giuseppe Longo, Mikhail Sazhin, Roberto Silvotti and Vincenzo Testa, based on recent observations with the Telescopio Nazionale Galileo (TNG) which show that the spectra of the two objects are identical. Technical information about the photos PR Photo 15a/01 has been obtained by the combination of the B, V, and R stacked images of the OACDF2 field.
The total exposure times in the three bands are 2 hours in B and V (12 ditherings of 10 min each were stacked to produce the B and V images) and 3 hours in R (13 ditherings of 15 min each). The mosaic images in the B and V bands were aligned relative to the R-band image and adjusted to a logarithmic intensity scale prior to the combination. The typical seeing was of the order of 1 arcsec in each of the three bands. Preliminary estimates of the three-sigma limiting magnitudes in B, V and R indicate 25.5, 25.0 and 25.0, respectively. More than 35,000 objects are detected above the three-sigma level. PR Photos 15b-f/01 display selected areas of the field shown in PR Photo 15a/01 at the original WFI scale, thereby also demonstrating the enormous amount of information contained in these wide-field images. In all photos, North is up and East is left.
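As a numerical aside, the quoted depths follow from Pogson's magnitude relation: a difference of dm magnitudes corresponds to a flux ratio of 10**(dm/2.5). The snippet below checks the "nearly 100 million times fainter" claim against the B-band limiting magnitude; the naked-eye limit of about 6 mag is a common rule-of-thumb assumption, not a figure from the release.

```python
import math  # not strictly needed here, but handy for the inverse relation

# Pogson's relation: magnitude difference dm <-> flux ratio 10**(dm/2.5).

def flux_ratio(m_faint, m_bright):
    """Flux ratio between two sources given their magnitudes."""
    return 10 ** ((m_faint - m_bright) / 2.5)

naked_eye_limit = 6.0   # assumed rule-of-thumb naked-eye limit
survey_limit_B = 25.5   # three-sigma B-band limiting magnitude quoted above

ratio = flux_ratio(survey_limit_B, naked_eye_limit)
print(f"{ratio:.2e}")   # ~6.3e7, i.e. "nearly 100 million times fainter"
```

The inverse direction is `2.5 * math.log10(ratio)`, which recovers the 19.5-magnitude gap.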
The Pixon Method for Data Compression, Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
Digital image forensics for photographic copying
NASA Astrophysics Data System (ADS)
Yin, Jing; Fang, Yanmei
2012-03-01
Image display technology has greatly developed over the past few decades, which makes it possible to recapture high-quality images from the display medium, such as a liquid crystal display (LCD) screen or a printed paper. Recaptured images are not treated as a separate image class in current digital image forensics research, even though their content may have been tampered with. In this paper, two sets of features based on noise and on the traces of double JPEG compression are proposed to identify these recaptured images. Experimental results show that the proposed features perform well for detecting photographic copying.
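One commonly cited double-JPEG trace is periodicity in the histograms of DCT coefficients after requantisation. The sketch below is a hypothetical, much-simplified illustration of that cue on synthetic 1-D data, not the paper's actual feature set.

```python
from collections import Counter

# After two quantisations with different steps, coefficient values cluster
# on multiples of the second step, leaving periodic peaks in the histogram.

def periodicity_score(coeffs, period):
    """Mean histogram count at multiples of `period`, relative to the
    overall mean count. Scores well above 1 suggest periodic peaks."""
    hist = Counter(coeffs)
    values = range(min(coeffs), max(coeffs) + 1)
    on_grid = [hist[v] for v in values if v % period == 0]
    overall = [hist[v] for v in values]
    return (sum(on_grid) / len(on_grid)) / (sum(overall) / len(overall))

# Synthetic coefficients: quantised once (step 2) vs twice (step 2 then 3).
single_q = [2 * round(x / 2) for x in range(-60, 61)]
double_q = [3 * round((2 * round(x / 2)) / 3) for x in range(-60, 61)]
print(periodicity_score(double_q, 3) > periodicity_score(single_q, 3))  # True
```

A real detector would build such features per DCT mode of an actual JPEG bitstream and feed them to a classifier alongside the noise features.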
NASA Technical Reports Server (NTRS)
Linares, Irving; Mersereau, Russell M.; Smith, Mark J. T.
1994-01-01
Two representative sample images of Band 4 of the Landsat Thematic Mapper are compressed with the JPEG algorithm at 8:1, 16:1 and 24:1 compression ratios for experimental browsing purposes. We then apply the Optimal PSNR Estimated Spectra Adaptive Postfiltering (ESAP) algorithm to reduce the DCT blocking distortion. ESAP reduces the blocking distortion while preserving most of the image's edge information by adaptively postfiltering the decoded image using the block's spectral information already obtainable from each block's DCT coefficients. The algorithm iteratively applies a one-dimensional log-sigmoid weighting function to the separable interpolated local block estimated spectra of the decoded image until it converges to the optimal PSNR with respect to the original, using a 2-D steepest ascent search. Convergence is obtained in a few iterations for integer parameters. The optimal logsig parameters are transmitted to the decoder as a negligible amount of overhead data. A unique maximum is guaranteed due to the 2-D asymptotic exponential overshoot shape of the surface generated by the algorithm. ESAP is based on a DFT analysis of the DCT basis functions. It is implemented with pixel-by-pixel spatially adaptive separable FIR postfilters. PSNR objective improvements between 0.4 and 0.8 dB are shown together with their corresponding optimal PSNR adaptive postfiltered images.
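The two ingredients named in the abstract, a log-sigmoid frequency weighting and the PSNR objective it maximises, can be sketched as follows. The (alpha, beta) parameterisation here is an assumption for illustration, not ESAP's exact form.

```python
import math

# A log-sigmoid weight acts as a smooth low-pass taper over normalised
# frequency f in (0, 1]; PSNR is the objective the search maximises.

def log_sigmoid(f, alpha, beta):
    """Weight in (0, 1): ~1 for f << beta, ~0 for f >> beta."""
    return 1.0 / (1.0 + math.exp(alpha * (math.log(f + 1e-9) - math.log(beta))))

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length signals."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# Low frequencies pass, high frequencies are attenuated:
print(round(log_sigmoid(0.05, alpha=8.0, beta=0.25), 3))  # near 1
print(round(log_sigmoid(0.90, alpha=8.0, beta=0.25), 3))  # near 0
print(round(psnr([100, 120, 130], [101, 119, 131]), 2))   # 48.13
```

In ESAP proper, such weights are applied to block spectra estimated from the DCT coefficients, and the integer (alpha, beta)-like parameters are tuned by the 2-D steepest ascent search described above.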
Integration of radiographic images with an electronic medical record.
Overhage, J. M.; Aisen, A.; Barnes, M.; Tucker, M.; McDonald, C. J.
2001-01-01
Radiographic images are important and expensive diagnostic tests. However, the provider caring for the patient often does not review the images directly due to time constraints. Institutions can use picture archiving and communication systems to make images more available to the provider, but this may not be the best solution. We integrated radiographic image review into the Regenstrief Medical Record System (RMRS) in order to address this problem. To achieve adequate performance, we store JPEG-compressed images directly in the RMRS. Currently, physicians review about 5% of all radiographic studies using the RMRS image review function. PMID:11825241
Deepest Wide-Field Colour Image in the Southern Sky
NASA Astrophysics Data System (ADS)
2003-01-01
LA SILLA CAMERA OBSERVES CHANDRA DEEP FIELD SOUTH ESO PR Photo 02a/03 [Preview - JPEG: 400 x 437 pix - 95k] [Normal - JPEG: 800 x 873 pix - 904k] [HiRes - JPEG: 4000 x 4366 pix - 23.1M] Caption : PR Photo 02a/03 shows a three-colour composite image of the Chandra Deep Field South (CDF-S) , obtained with the Wide Field Imager (WFI) camera on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile). It was produced by the combination of about 450 images with a total exposure time of nearly 50 hours. The field measures 36 x 34 arcmin 2 ; North is up and East is left. Technical information is available below. The combined efforts of three European teams of astronomers, targeting the same sky field in the southern constellation Fornax (the Furnace), have enabled them to construct a very deep, true-colour image - opening an exceptionally clear view towards the distant universe . The image ( PR Photo 02a/03 ) covers an area somewhat larger than the full moon. It displays more than 100,000 galaxies, several thousand stars and hundreds of quasars. It is based on images with a total exposure time of nearly 50 hours, collected under good observing conditions with the Wide Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the ESO La Silla Observatory (Chile) - many of them extracted from the ESO Science Data Archive . The position of this southern sky field was chosen by Riccardo Giacconi (Nobel Laureate in Physics 2002) at a time when he was Director General of ESO, together with Piero Rosati (ESO). It was selected as a sky region towards which the NASA Chandra X-ray satellite observatory , launched in July 1999, would be pointed while carrying out a very long exposure (lasting a total of 1 million seconds, or 278 hours) in order to detect the faintest possible X-ray sources. The field is now known as the Chandra Deep Field South (CDF-S) .
The new WFI photo of CDF-S does not reach quite as deep as the available images of the "Hubble Deep Fields" (HDF-N in the northern and HDF-S in the southern sky, cf. e.g. ESO PR Photo 35a/98 ), but the field-of-view is about 200 times larger. The present image displays about 50 times more galaxies than the HDF images, and therefore provides a more representative view of the universe . The WFI CDF-S image will now form a most useful basis for the very extensive and systematic census of the population of distant galaxies and quasars, allowing at once a detailed study of all evolutionary stages of the universe since it was about 2 billion years old . These investigations have started and are expected to provide information about the evolution of galaxies in unprecedented detail. They will offer insights into the history of star formation and how the internal structure of galaxies changes with time and, not least, throw light on how these two evolutionary aspects are interconnected. GALAXIES IN THE WFI IMAGE ESO PR Photo 02b/03 [Preview - JPEG: 488 x 400 pix - 112k] [Normal - JPEG: 896 x 800 pix - 1.0M] [Full-Res - JPEG: 2591 x 2313 pix - 8.6M] Caption : PR Photo 02b/03 contains a collection of twelve subfields from the full WFI Chandra Deep Field South (WFI CDF-S), centred on (pairs or groups of) galaxies. Each of the subfields measures 2.5 x 2.5 arcmin 2 (635 x 658 pix 2 ; 1 pixel = 0.238 arcsec). North is up and East is left. Technical information is available below. The WFI CDF-S colour image - of which the full field is shown in PR Photo 02a/03 - was constructed from all available observations in the optical B-, V- and R-bands obtained under good conditions with the Wide Field Imager (WFI) on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile), and now stored in the ESO Science Data Archive. It is the "deepest" image ever taken with this instrument.
It covers a sky field measuring 36 x 34 arcmin 2 , i.e., an area somewhat larger than that of the full moon. The observations were collected during a period of nearly four years, beginning in January 1999 when the WFI instrument was first installed (cf. ESO PR 02/99 ) and ending in October 2002. Altogether, nearly 50 hours of exposure were collected in the three filters combined here, cf. the technical information below. Although it is possible to identify more than 100,000 galaxies in the image - some of which are shown in PR Photo 02b/03 - it is still remarkably "empty" by astronomical standards. Even the brightest stars in the field (of visual magnitude 9) can hardly be seen by human observers with binoculars. In fact, the area density of bright, nearby galaxies is only half of what it is in "normal" sky fields. Comparatively empty fields like this one provide an unusually clear view towards the distant regions in the universe and thus open a window towards the earliest cosmic times . Research projects in the Chandra Deep Field South ESO PR Photo 02c/03 [Preview - JPEG: 400 x 513 pix - 112k] [Normal - JPEG: 800 x 1026 pix - 1.2M] [Full-Res - JPEG: 1717 x 2201 pix - 5.5M] ESO PR Photo 02d/03 [Preview - JPEG: 400 x 469 pix - 112k] [Normal - JPEG: 800 x 937 pix - 1.0M] [Full-Res - JPEG: 2545 x 2980 pix - 10.7M] Caption : PR Photo 02c-d/03 shows two sky fields within the WFI image of CDF-S, reproduced at full (pixel) size to illustrate the exceptional information richness of these data. The subfields measure 6.8 x 7.8 arcmin 2 (1717 x 1975 pixels) and 10.1 x 10.5 arcmin 2 (2545 x 2635 pixels), respectively. North is up and East is left. Technical information is available below. Astronomers from different teams and disciplines have been quick to join forces in a world-wide co-ordinated effort around the Chandra Deep Field South.
Observations of this area are now being performed by some of the most powerful astronomical facilities and instruments. They include space-based X-ray and infrared observations by the ESA XMM-Newton , the NASA CHANDRA , Hubble Space Telescope (HST) and soon SIRTF (scheduled for launch in a few months), as well as imaging and spectroscopic observations in the infrared and optical part of the spectrum by telescopes at the ground-based observatories of ESO (La Silla and Paranal) and NOAO (Kitt Peak and Tololo). A huge database is currently being created that will help to analyse the evolution of galaxies in all currently feasible respects. All participating teams have agreed to make their data on this field publicly available, thus providing the world-wide astronomical community with a unique opportunity to perform competitive research, joining forces within this vast scientific project. Concerted observations The optical true-colour WFI image presented here forms an important part of this broad, concerted approach. It combines observations of three scientific teams that have engaged in complementary scientific projects, thereby capitalizing on this very powerful combination of their individual observations. The following teams are involved in this work: * COMBO-17 (Classifying Objects by Medium-Band Observations in 17 filters) : an international collaboration led by Christian Wolf and other scientists at the Max-Planck-Institut für Astronomie (MPIA, Heidelberg, Germany). This team used 51 hours of WFI observing time to obtain images through five broad-band and twelve medium-band optical filters in the visual spectral region in order to measure the distances (by means of "photometric redshifts") and star-formation rates of about 10,000 galaxies, thereby also revealing their evolutionary status. * EIS (ESO Imaging Survey) : a team of visiting astronomers from the ESO community and beyond, led by Luiz da Costa (ESO).
They observed the CDF-S for 44 hours in six optical bands with the WFI camera on the MPG/ESO 2.2-m telescope and 28 hours in two near-infrared bands with the SOFI instrument at the ESO 3.5-m New Technology Telescope (NTT) , both at La Silla. These observations form part of the Deep Public Imaging Survey that covers a total sky area of 3 square degrees. * GOODS (The Great Observatories Origins Deep Survey) : another international team (on the ESO side, led by Catherine Cesarsky ) that focuses on the coordination of deep space- and ground-based observations on a smaller, central area of the CDF-S in order to image the galaxies in many different spectral wavebands, from X-rays to radio. GOODS has contributed with 40 hours of WFI time for observations in three broad-band filters that were designed for the selection of targets to be spectroscopically observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory (Chile), for which over 200 hours of observations are planned. About 10,000 galaxies will be spectroscopically observed in order to determine their redshift (distance), star formation rate, etc. Another important contribution to this large research undertaking will come from the GEMS project. This is a "HST treasury programme" (with Hans-Walter Rix from MPIA as Principal Investigator) which observes the 10,000 galaxies identified in COMBO-17 - and eventually the entire WFI-field with HST - to show the evolution of their shapes with time. Great questions With the combination of data from many wavelength ranges now at hand, the astronomers are embarking upon studies of the many different processes in the universe. They expect to shed more light on several important cosmological questions, such as: * How and when was the first generation of stars born? * When exactly was the neutral hydrogen in the universe ionized for the first time by powerful radiation emitted from the first stars and active galactic nuclei?
* How did galaxies and groups of galaxies evolve during the past 13 billion years? * What is the true nature of those elusive objects that are only seen at infrared and submillimetre wavelengths (cf. ESO PR 23/02 )? * Which fraction of galaxies had an "active" nucleus (probably with a black hole at the centre) in their past, and how long did this phase last? Moreover, since these extensive optical observations were obtained in the course of a dozen observing periods during several years, it is also possible to perform studies of certain variable phenomena: * How many variable sources are seen and what are their types and properties? * How many supernovae are detected per time interval, i.e. what is the supernova frequency at different cosmic epochs? * How do those processes depend on each other? This is just a short and very incomplete list of questions astronomers world-wide will address using all the complementary observations. No doubt the coming studies of the Chandra Deep Field South - with this and other data - will be most exciting and instructive! Other wide-field images Other wide-field images from the WFI have been published in various ESO press releases during the past four years - they are also available at the WFI Photo Gallery . A collection of full-resolution files (TIFF-format) is available on a WFI CD-ROM . Technical Information The very extensive data reduction and colour image processing needed to produce these images were performed by Mischa Schirmer and Thomas Erben at the "Wide Field Expertise Center" of the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) in Germany. It was done by means of a software pipeline specialised for reduction of multiple CCD wide-field imaging camera data. This pipeline is mainly based on publicly available software modules and algorithms ( EIS , FLIPS , LDAC , Terapix , Wifix ).
The image was constructed from about 150 exposures in each of the following wavebands: B-band (centred at wavelength 456 nm; here rendered as blue, 15.8 hours total exposure time), V-band (540 nm; green, 15.6 hours) and R-band (652 nm; red, 17.8 hours). Only images taken under sufficiently good observing conditions (defined as seeing less than 1.1 arcsec) were included. In total, 450 images were assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). More than 2 terabytes (TB) of temporary files were produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation and a 1.8 GHz dual-processor Linux PC. The final colour image was assembled in Adobe Photoshop. The observations were performed by ESO (GOODS, EIS) and the COMBO-17 collaboration in the period 1/1999-10/2002.
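A quick arithmetic check of the exposure budget quoted above, using nothing beyond the figures in the text:

```python
# Per-band exposure totals from the caption, in hours.
band_hours = {"B": 15.8, "V": 15.6, "R": 17.8}

# Three bands of ~150 exposures each account for the ~450 frames combined.
print(round(sum(band_hours.values()), 1))  # 49.2 -- "nearly 50 hours"
print(3 * 150)                             # 450 exposures
```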
Dynamic power scheduling system for JPEG2000 delivery over wireless networks
NASA Astrophysics Data System (ADS)
Martina, Maurizio; Vacca, Fabrizio
2003-06-01
The diffusion of third-generation mobile terminals is encouraging the development of new multimedia-based applications. The reliable transmission of audiovisual content will gain major interest, being one of the most valuable services. Nevertheless, the mobile scenario is severely power-constrained: high compression ratios and refined energy-management strategies are highly advisable. JPEG2000 as the source encoding stage assures excellent performance with extremely good visual quality. However, the limited power budget requires limiting the computational effort in order to save as much power as possible. Moreover, in an error-prone environment such as the wireless one, strong error-resilience features need to be employed. This paper investigates the trade-off between quality and power in such a challenging environment.
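The quality/power trade-off can be illustrated with JPEG2000's quality-layer scalability: a codestream is organized in successive quality layers, so a terminal may stop decoding after k layers to save energy. The per-layer PSNR gains and energy costs below are invented numbers for illustration, not measurements from the paper.

```python
# Toy power scheduler: decode successive JPEG2000 quality layers until
# the energy budget is exhausted. All figures are illustrative.

LAYERS = [  # (psnr_gain_db, energy_mj) per successive quality layer
    (28.0, 40.0),  # base layer
    (4.0, 25.0),
    (2.5, 20.0),
    (1.5, 18.0),
    (0.8, 15.0),
]

def schedule(power_budget_mj):
    """Return (layers decoded, cumulative PSNR in dB, energy used in mJ)."""
    psnr = used = 0.0
    k = 0
    for gain, cost in LAYERS:
        if used + cost > power_budget_mj:
            break
        used += cost
        psnr += gain
        k += 1
    return k, psnr, used

print(schedule(100.0))  # (3, 34.5, 85.0)
print(schedule(45.0))   # (1, 28.0, 40.0)
```

A dynamic scheduler of the kind the paper studies would additionally react to channel conditions, trading some of the budget for error-resilience tools instead of quality layers.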
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning a wide range of quality, from no perceived impairment to strong perceived impairment, for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
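The five impairment types lend themselves to a simple test-condition grid. A minimal sketch, with placeholder severity levels (the actual QLFW levels are those defined in the paper):

```python
# Illustrative enumeration of the distortion space described above.
# Every level value below is a placeholder assumption for the example.

DISTORTIONS = {
    "jpeg2000": [0.5, 0.25, 0.1],      # assumed bitrates (bpp)
    "jpeg": [30, 20, 10],              # assumed quality factors
    "white_noise": [0.01, 0.05, 0.1],  # assumed noise variances
    "gaussian_blur": [1.0, 2.0, 4.0],  # assumed kernel sigmas
    "contrast": [0.8, 0.6, 0.4],       # assumed contrast scales
}

def conditions():
    """All (distortion type, level) pairs a face image would be rendered under."""
    return [(name, lvl) for name, levels in DISTORTIONS.items() for lvl in levels]

print(len(conditions()))  # 5 types x 3 placeholder levels = 15 conditions
```

Organising the conditions as data makes it straightforward to sweep a detector or a quality metric over the full grid and tabulate performance per type and level.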
Improved compression technique for multipass color printers
NASA Astrophysics Data System (ADS)
Honsinger, Chris
1998-01-01
A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane's memory but still takes advantage of the correlation between the color planes. The compression scheme is based on a block adaptive technique for decorrelating the color planes, followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block adaptive decorrelation operations can be efficiently performed in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
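The idea of exploiting inter-plane correlation can be sketched with a gain-only block predictor: per block, predict the plane being coded (say magenta) from the plane already available (cyan) and code only the residual. This is a deliberate simplification of the paper's block-adaptive transform, and the sample values are illustrative.

```python
# Gain-only block decorrelation between two colour planes (1-D blocks
# for brevity). A correlated plane leaves a low-energy residual, which
# is cheaper to compress than the plane itself.

def block_gain(c_block, m_block):
    """Least-squares scalar g minimising sum((m - g*c)**2) over the block."""
    denom = sum(c * c for c in c_block)
    return 0.0 if denom == 0 else sum(c * m for c, m in zip(c_block, m_block)) / denom

def residual(c_block, m_block):
    """Prediction residual of m_block given c_block and the fitted gain."""
    g = block_gain(c_block, m_block)
    return [m - g * c for c, m in zip(c_block, m_block)]

def energy(block):
    return sum(x * x for x in block)

cyan    = [10, 20, 30, 40]   # plane already printed (illustrative values)
magenta = [12, 19, 33, 41]   # strongly correlated with cyan
print(energy(residual(cyan, magenta)) < energy(magenta))  # True
```

In the paper's setting the decorrelation is applied block-adaptively in the DCT domain, so the residual can be formed directly on the JPEG coefficients without a full decode.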
Multi-Class Classification for Identifying JPEG Steganography Embedding Methods
2008-09-01
B.H. (2000). STEGANOGRAPHY: Hidden Images, A New Challenge in the Fight Against Child Porn. UPDATE, Volume 13, Number 2, pp. 1-4, Retrieved June 3... Other crimes involving the use of steganography include child pornography, where the stego files are used to hide a predator's location when posting
Another Look at an Enigmatic New World
NASA Astrophysics Data System (ADS)
2005-02-01
VLT NACO Performs Outstanding Observations of Titan's Atmosphere and Surface. On January 14, 2005, the ESA Huygens probe arrived at Saturn's largest satellite, Titan. After a faultless descent through the dense atmosphere, it touched down on the icy surface of this strange world, from where it continued to transmit precious data back to the Earth. Several of the world's large ground-based telescopes were also active during this exciting event, observing Titan before and near the Huygens encounter, within the framework of a dedicated campaign coordinated by the members of the Huygens Project Scientist Team. Indeed, large astronomical telescopes with state-of-the-art adaptive optics systems allow scientists to image Titan's disc in quite some detail. Moreover, ground-based observations are not restricted to the limited period of the fly-by of Cassini and landing of Huygens. They hence ideally complement the data gathered by this NASA/ESA mission, further optimising the overall scientific return. A group of astronomers [1] observed Titan with ESO's Very Large Telescope (VLT) at the Paranal Observatory (Chile) during the nights from 14 to 16 January, by means of the adaptive optics NAOS/CONICA instrument mounted on the 8.2-m Yepun telescope [2]. The observations were carried out in several modes, resulting in a series of fine images and detailed spectra of this mysterious moon. They complement earlier VLT observations of Titan, cf. ESO Press Photos 08/04 and ESO Press Release 09/04. The highest contrast images: ESO PR Photo 04a/05, Titan's surface (NACO/VLT); ESO PR Photo 04b/05, Map of Titan's Surface (NACO/VLT). Caption: ESO PR Photo 04a/05 shows Titan's trailing hemisphere [3] with the Huygens landing site marked as an "X".
The left image was taken with NACO and a narrow-band filter centred at 2 microns. On the right is the NACO/SDI image of the same location showing Titan's surface through the 1.6 micron methane window. A spherical projection with coordinates on Titan is overplotted. ESO PR Photo 04b/05 is a map of Titan taken with NACO at 1.28 micron (a methane window allowing it to probe down to the surface). On the leading side of Titan, the bright equatorial feature ("Xanadu") dominates. On the trailing side, the landing site of the Huygens probe is indicated. ESO PR Photo 04c/05: Titan, the Enigmatic Moon, and Huygens Landing Site (NACO-SDI/VLT and Cassini/ISS). Caption: ESO PR Photo 04c/05 is a comparison between the NACO/SDI image and an image taken by Cassini/ISS while approaching Titan. The Cassini image shows the Huygens landing site map wrapped around Titan, rotated to the same position as the January NACO SDI observations. The yellow "X" marks the landing site of the ESA Huygens probe. The Cassini/ISS image is courtesy of NASA, JPL, Space Science Institute (see http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=36222). The coloured lines delineate the regions that were imaged by Cassini at differing resolutions. The lower-resolution imaging sequences are outlined in blue. Other areas have been specifically targeted for moderate and high resolution mosaicking of surface features. These include the site where the European Space Agency's Huygens probe touched down in mid-January (marked with the yellow X), part of the bright region named Xanadu (easternmost extent of the area covered), and a boundary between dark and bright regions.
ESO PR Photo 04d/05: Evolution of the Atmosphere of Titan (NACO/VLT). Caption: ESO PR Photo 04d/05 is an image of Titan's atmosphere at 2.12 microns as observed with NACO on the VLT at three different epochs from 2002 to now. Titan's atmosphere exhibits seasonal and meteorological changes which can clearly be seen here: the North-South asymmetry - indicative of changes in the chemical composition in one pole or the other, depending on the season - is now clearly in favour of the North pole. Indeed, the situation has reversed with respect to a few years ago, when the South pole was brighter. Also visible in these images is a bright feature at the South pole, found to be presently dimming after having appeared very bright from 2000 to 2003. The differences in size are due to the variation in the distance to Earth of Saturn and its planetary system. The new images show Titan's atmosphere and surface at various near-infrared spectral bands. The surface of Titan's trailing side is visible in images taken through narrow-band filters at wavelengths 1.28, 1.6 and 2.0 microns. They correspond to the so-called "methane windows" which allow one to peer all the way through the lower Titan atmosphere to the surface. On the other hand, Titan's atmosphere is visible through filters centred in the wings of these methane bands, e.g. at 2.12 and 2.17 microns. Eric Gendron of the Paris Observatory in France, leader of the team, is extremely pleased: "We believe that some of these images are the highest-contrast images of Titan ever taken with any ground-based or earth-orbiting telescope." The excellent images of Titan's surface show the location of the Huygens landing site in much detail. In particular, those centred at wavelength 1.6 micron and obtained with the Simultaneous Differential Imager (SDI) on NACO [4] provide the highest contrast and best views.
This is firstly because the filters match the 1.6 micron methane window most accurately. Secondly, it is possible to get an even clearer view of the surface by accurately subtracting the simultaneously recorded images of the atmospheric haze, taken at wavelength 1.625 micron. The images show the great complexity of Titan's trailing side, which was earlier thought to be very dark. However, it is now obvious that bright and dark regions cover the field of these images. The best resolution achieved on the surface features is about 0.039 arcsec, corresponding to 200 km on Titan. ESO PR Photo 04c/05 illustrates the striking agreement between the NACO/SDI image taken with the VLT from the ground and the ISS/Cassini map. The images of Titan's atmosphere at 2.12 microns show a still-bright south pole with an additional atmospheric bright feature, which may be clouds or some other meteorological phenomenon. The astronomers have followed it since 2002 with NACO and notice that it seems to be fading with time. At 2.17 microns, this feature is not visible and the north-south asymmetry - also known as "Titan's smile" - is clearly in favour of the north. The two filters probe different altitude levels and the images thus provide information about the extent and evolution of the north-south asymmetry. Probing the composition of the surface. ESO PR Photo 04e/05: Spectrum of Two Regions on Titan (NACO/VLT). Caption: ESO PR Photo 04e/05 represents two of the many spectra obtained on January 16, 2005 with NACO, covering the 2.02 to 2.53 micron range. The blue spectrum corresponds to the brightest region on Titan's surface within the slit, while the red spectrum corresponds to the dark area around the Huygens landing site.
In the methane band, the two spectra are equal, indicating a similar atmospheric content; in the methane window centred at 2.0 microns, the spectra show differences in brightness, but are in phase. This suggests that there is no real variation in the composition beyond different atmospheric mixings. ESO PR Photo 04f/05: Imaging Titan with a Tunable Filter (NACO Fabry-Perot/VLT). Caption: ESO PR Photo 04f/05 presents a series of images of Titan taken around the 2.0 micron methane window, probing different layers of the atmosphere and the surface. The images are currently under thorough processing and analysis so as to reveal any subtle variations in wavelength that could be indicative of the spectral response of the various surface components, thus allowing the astronomers to identify them. Because the astronomers have also obtained spectroscopic data at different wavelengths, they will be able to recover useful information on the surface composition. The Cassini/VIMS instrument explores Titan's surface in the infrared range and, being so close to this moon, it obtains spectra with a much better spatial resolution than is possible with Earth-based telescopes. However, with NACO at the VLT, the astronomers have the advantage of observing Titan with considerably higher spectral resolution, and thus of gaining more detailed spectral information about the composition. The observations therefore complement each other. Once the composition of the surface at the location of the Huygens landing is known from the detailed analysis of the in-situ measurements, it should become possible to learn the nature of the surface features elsewhere on Titan by combining the Huygens results with more extended cartography from Cassini as well as from VLT observations to come.
More information: Results on Titan obtained with data from NACO/VLT are in press in the journal Icarus ("Maps of Titan's surface from 1 to 2.5 micron" by A. Coustenis et al.). Previous images of Titan obtained with NACO and with NACO/SDI are accessible as ESO PR Photos 08/04 and ESO PR Photos 11/04. See also these Press Releases for additional scientific references.
Computational scalability of large size image dissemination
NASA Astrophysics Data System (ADS)
Kooper, Rob; Bajcsy, Peter
2011-01-01
We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. The sources of large images include high-resolution microscopes and telescopes, remote sensing and airborne imaging, and high-resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than a display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150MB, or about 5000x8000 pixels, with around 200,000 images in total) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
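An image pyramid of the kind served to Seadragon-style viewers is conceptually simple: repeatedly halve the image until the coarsest level fits in one tile. The sketch below is an illustrative numpy version (the tile size and 2x2-averaging filter are assumptions, not the benchmarked implementation).

```python
import numpy as np

def build_pyramid(img, tile=256):
    """Build an image pyramid by repeated 2x2 averaging, stopping once
    the coarsest level fits within a single tile - roughly what a
    Seadragon-style viewer needs in order to serve any zoom level."""
    levels = [img.astype(float)]
    while max(levels[-1].shape) > tile:
        h = levels[-1].shape[0] // 2 * 2          # crop odd edge rows
        w = levels[-1].shape[1] // 2 * 2          # crop odd edge cols
        c = levels[-1][:h, :w]
        levels.append((c[0::2, 0::2] + c[1::2, 0::2] +
                       c[0::2, 1::2] + c[1::2, 1::2]) / 4.0)
    return levels

scan = np.zeros((5000, 8000))      # a Lincoln Papers-sized page scan
pyramid = build_pyramid(scan)      # 6 levels for this geometry
```

Note that the total pixel count across all levels is only about 4/3 of the original, which is why pyramid storage overhead is modest even for very large scans.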
New procedures to evaluate visually lossless compression for display systems
NASA Astrophysics Data System (ADS)
Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim
2017-09-01
Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires new evaluation techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for evaluation of lossless coding, and reports new work by JPEG to extend the procedure in two important ways: for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images, i.e., panning, and image sequences. These requirements are the basis for the new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
Non-linear Post Processing Image Enhancement
NASA Technical Reports Server (NTRS)
Hunt, Shawn; Lopez, Alex; Torres, Angel
1997-01-01
A non-linear filter for image post-processing, based on a feedforward neural network topology, is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.
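A minimal sketch of such a learned post-filter is a small feedforward network trained by gradient descent on mean-squared error, mapping a neighbourhood of degraded pixels to a clean target pixel. Everything below (patch size, hidden width, synthetic data, training schedule) is an illustrative assumption, not the paper's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the task: learn a mapping from 3x3 pixel patches of
# a degraded image (9 inputs) to the clean centre pixel.
n, d, hidden = 2000, 9, 16
X = rng.normal(size=(n, d))          # "degraded" patches
w_true = rng.normal(size=d)
y = X @ w_true                       # "clean" target pixels

W1 = rng.normal(scale=0.3, size=(d, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.3, size=hidden);      b2 = 0.0
lr = 0.05
for _ in range(1000):                # full-batch gradient descent on MSE
    h = np.tanh(X @ W1 + b1)         # feedforward hidden layer
    err = (h @ W2 + b2) - y
    gh = np.outer(err, W2) * (1.0 - h**2)     # backprop through tanh
    W2 -= lr * (h.T @ err) / n; b2 -= lr * err.mean()
    W1 -= lr * (X.T @ gh) / n;  b1 -= lr * gh.mean(axis=0)

mse = (((np.tanh(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean()
```

After training, the network's reconstruction error is well below the raw variance of the targets, the toy analogue of the SNR gain reported in the paper.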
Google Books: making the public domain universally accessible
NASA Astrophysics Data System (ADS)
Langley, Adam; Bloomberg, Dan S.
2007-01-01
Google Book Search is working with libraries and publishers around the world to digitally scan books. Some of those works are now in the public domain and, in keeping with Google's mission to make all the world's information useful and universally accessible, we wish to allow users to download them all. For users, it is important that the files are as small as possible and of printable quality. This means that a single codec for both text and images is impractical. We use PDF as a container for a mixture of JBIG2 and JPEG2000 images which are composed into a final set of pages. We discuss both the implementation of an open source JBIG2 encoder, which we use to compress text data, and the design of the infrastructure needed to meet the technical, legal and user requirements of serving many scanned works. We also cover the lessons learnt about dealing with different PDF readers and how to write files that work on most of the readers, most of the time.
Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data
NASA Astrophysics Data System (ADS)
Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada
2009-08-01
Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both spectral and spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial-spectral endmember extraction (SSEE) techniques. Experiments are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada, with detailed ground-truth information available from the U.S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
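The linear mixture model behind this analysis says each pixel spectrum is a weighted sum of endmember spectra. The sketch below, a toy illustration and not the AMEE or SSEE algorithm, recovers the fractional abundances of a noiseless mixed pixel by least squares, with a clip-and-renormalise step as a cheap surrogate for the usual nonnegativity and sum-to-one constraints.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate fractional abundances under the linear mixture model
    pixel = endmembers @ abundances, using unconstrained least squares
    followed by clipping to nonnegative values and renormalising so
    the abundances sum to one."""
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()

rng = np.random.default_rng(0)
E = rng.random((50, 3))              # three endmember spectra, 50 bands
true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund               # a noiseless mixed pixel
est = unmix(pixel, E)                # recovers [0.6, 0.3, 0.1]
```

Lossy compression perturbs both the pixel spectra and the extracted endmember matrix E, which is exactly why the abundance estimates downstream can degrade.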
Sharpest Ever VLT Images at NAOS-CONICA "First Light"
NASA Astrophysics Data System (ADS)
2001-12-01
Very Promising Start-Up of New Adaptive Optics Instrument at Paranal. Summary: A team of astronomers and engineers from French and German research institutes and ESO at the Paranal Observatory is celebrating the successful accomplishment of "First Light" for the NAOS-CONICA Adaptive Optics facility. With this event, another important milestone for the Very Large Telescope (VLT) project has been passed. Normally, the achievable image sharpness of a ground-based telescope is limited by the effect of atmospheric turbulence. However, with the Adaptive Optics (AO) technique, this drawback can be overcome and the telescope produces images that are at the theoretical limit, i.e., as sharp as if it were in space. Adaptive Optics works by means of a computer-controlled, flexible mirror that counteracts the image distortion induced by atmospheric turbulence in real time. The larger the main mirror of the telescope, and the shorter the wavelength of the observed light, the sharper will be the images recorded. During a preceding four-week period of hard and concentrated work, the expert team assembled and installed this major astronomical instrument at the 8.2-m VLT YEPUN Unit Telescope (UT4). On November 25, 2001, following careful adjustments of this complex apparatus, a steady stream of photons from a southern star bounced off the computer-controlled deformable mirror inside NAOS and proceeded to form in CONICA the sharpest image produced so far by one of the VLT telescopes. With a core angular diameter of only 0.07 arcsec, this image is near the theoretical limit possible for a telescope of this size and at the infrared wavelength used for this demonstration (the K-band at 2.2 µm). Subsequent tests reached the spectacular performance of 0.04 arcsec in the J-band (wavelength 1.2 µm). "I am proud of this impressive achievement", says ESO Director General Catherine Cesarsky.
"It shows the true potential of European science and technology and it provides a fine demonstration of the value of international collaboration. ESO and its partner institutes and companies in France and Germany have worked a long time towards this goal - with the first, extremely promising results, we shall soon be able to offer a new and fully tuned instrument to our wide research community." The NAOS adaptive optics corrector was built, under an ESO contract, by the Office National d'Etudes et de Recherches Aérospatiales (ONERA), the Laboratoire d'Astrophysique de Grenoble (LAOG) and the DESPA and DASGAL laboratories of the Observatoire de Paris in France, in collaboration with ESO. The CONICA infra-red camera was built, under an ESO contract, by the Max-Planck-Institut für Astronomie (MPIA) (Heidelberg) and the Max-Planck-Institut für Extraterrestrische Physik (MPE) (Garching) in Germany, in collaboration with ESO. The present event happens less than four weeks after "First Fringes" were achieved for the VLT Interferometer (VLTI) with two of the 8.2-m Unit Telescopes. No wonder that a spirit of great enthusiasm reigns at Paranal! Information for the media: ESO is producing a Video News Release (ESO Video News Reel No. 13) with sequences from the NAOS-CONICA "First Light" event at Paranal, a computer animation illustrating the principle of adaptive optics in NAOS-CONICA, as well as the first astronomical images obtained. In addition to the usual distribution, this VNR will also be transmitted via satellite on Friday 7 December 2001 from 09:00 to 09:15 CET (10:00 to 10:15 UT) on "Europe by Satellite". These video images may be used free of charge by broadcasters. Satellite details, the script and the shotlist will be on-line from 6 December on the ESA TV Service Website http://television.esa.int. Also, a preview Real Video Stream of the video news release will be available as of that date from this URL.
Video Clip 07/01: Various video scenes related to the NAOS-CONICA "First Light" event (ESO Video News Reel No. 13). PR Photo 33a/01: NAOS-CONICA "First Light" image of an 8-mag star. PR Photo 33b/01: The moment of "First Light" at the YEPUN control consoles. PR Photo 33c/01: Image of NGC 3603 (K-band) area (NAOS-CONICA). PR Photo 33d/01: Image of NGC 3603 wider field (ISAAC). PR Photo 33e/01: I-band HST-WFPC2 image of NGC 3603 field. PR Photo 33f/01: Animated GIF with NAOS-CONICA (K-band) and HST-WFPC2 (I-band) images of NGC 3603 area. PR Photo 33g/01: Image of the Becklin-Neugebauer Object. PR Photo 33h/01: Image of a very close double star. PR Photo 33i/01: Image of a 17-magnitude reference star. PR Photo 33j/01: Image of the central area of the 30 Dor star cluster. PR Photo 33k/01: The top of the Paranal Mountain (November 25, 2001). PR Photo 33l/01: The NAOS-CONICA instrument attached to VLT YEPUN. A very special moment at Paranal! ESO PR Video Clip 07/01 "First Light for NAOS-CONICA" (25 November 2001) provides some background scenes and images around the NAOS-CONICA "First Light" event on November 25, 2001 (extracted from ESO Video News Reel No. 13). Contents: NGC 3603 image from ISAAC and a smaller field as observed by NAOS-CONICA; the Paranal platform in the afternoon, before the event; YEPUN and NAOS-CONICA with cryostat sounds; tension rising in the VLT Control Room; wavefront sensor display; the "loop is closed"; happy team members; the first corrected image on the screen; images of NGC 3603 by HST and VLT; 30 Doradus central cluster; BN Object in Orion; statement by the Head of the ESO Instrument Division.
ESO PR Photo 33a/01 shows the first image in the infrared K-band (wavelength 2.2 µm) of a star (visual magnitude 8) obtained before (left) and after (right) the adaptive optics was switched on (see the text). The middle panel displays the 3-D intensity profiles of these images, demonstrating the tremendous gain, both in image sharpness and central intensity. ESO PR Photo 33b/01 shows some of the NAOS-CONICA team members in the VLT Control Room at the moment of "First Light" in the night between November 25-26, 2001. From left to right: Thierry Fusco (ONERA), Clemens Storz (MPIA), Robin Arsenault (ESO), Gerard Rousset (ONERA). The numerous boxes with the many NAOS and CONICA parts arrived at the ESO Paranal Observatory on October 24, 2001. Astronomers and engineers from ESO and the participating institutes and organisations then began the painstaking assembly of these very complex instruments on one of the Nasmyth platforms of the fourth VLT 8.2-m Unit Telescope, YEPUN. Then followed days of technical tests and adjustments, working around the clock. In the afternoon of Sunday, November 25, the team finally declared the instrument fit to attempt its "First Light" observation. The YEPUN dome was opened at sunset and a small, rather apprehensive group gathered in the VLT Control Room, peering intensively at the computer screens over the shoulders of their colleagues, the telescope and instrument operators. Time passed imperceptibly to those present, as the basic calibrations required at this early stage to bring NAOS-CONICA to full operational state were successfully completed.
Everybody sensed the special moment approaching when, finally, the telescope operator pushed a button and the giant telescope started to turn smoothly towards the first test object, an otherwise undistinguished star in our Milky Way. Its non-corrected infra-red image was recorded by the CONICA detector array and soon appeared on the computer screen. It was already very good by astronomical standards, with a diameter of only 0.50 arcsec (FWHM), cf. PR Photo 33a/01 (left). Then, by another command, the instrument operator switched on the NAOS adaptive optics system, thereby "closing the loop" for the first time on a sky field, by using that ordinary star as a reference light source to measure the atmospheric turbulence. Obediently, the deformable mirror in NAOS began to follow the "orders" that were issued 500 times per second by its powerful control computer. As if by magic, the stellar image on the computer screen pulled itself together! What seconds before had been a jumping, rather blurry patch of light suddenly became a rock-steady, razor-sharp and brilliant spot of light. The entire room burst into applause - there were happy faces and smiles all over, and then the operator announced the measured image diameter - a truly impressive 0.068 arcsec, already at this first try, cf. PR Photo 33a/01 (right)! All the team members who were lucky to be there sent a special thought to the many others who had also put in over four years' hard and dedicated work to make this event a reality. The time of this historical moment was November 25, 2001, 23:00 Chilean time (November 26, 2001, 02:00 am UT). During this and the following nights, more images were made of astronomical objects, opening a new chapter in the long tradition of Adaptive Optics at ESO. More information about the NAOS-CONICA international collaboration, technical details about this instrument and its special advantages are available below.
The first images: the star-forming region around NGC 3603. PR Photo 33c/01 displays a NAOS-CONICA image of the starburst cluster NGC 3603, obtained during the second night of NAOS-CONICA operation. The sky region shown is some 20 arcsec to the North of the centre of the cluster. NAOS was compensating atmospheric disturbances by analyzing light from the central star with its visual wavefront sensor, while CONICA was observing in the K-band. The image is nearly diffraction-limited and has a Full-Width-Half-Maximum (FWHM) diameter of 0.07 arcsec, with a central Strehl ratio of 56% (a measure of the degree of concentration of the light). The exposure lasted 300 seconds. North is up and East is left. The field measures 27 x 27 arcsec. On PR Photo 33d/01, the sky area shown in this NAOS-CONICA high-resolution image is indicated on an earlier image of a much larger area, obtained in 1999 with the ISAAC multi-mode instrument on VLT ANTU (ESO PR 16/99). Among the first images to be obtained of astronomical objects was one of the stellar cluster NGC 3603, located in the Carina spiral arm of the Milky Way at a distance of about 20,000 light-years, cf. PR Photo 33c/01. With its central starburst cluster, it is one of the densest and most massive star-forming regions in our Galaxy. Some of the most massive stars - with masses up to 120 times the mass of our Sun - can be found in this cluster. For a long time astronomers have suspected that the formation of low-mass stars is suppressed by the presence of high-mass stars, but two years ago, stars with masses as low as 10% of the mass of our Sun were detected in NGC 3603 with the ISAAC multi-mode instrument at VLT ANTU, cf. PR Photo 33d/01 and ESO PR 16/99.
The high stellar density in this region, however, prevented the search for objects with still lower masses, so-called brown dwarfs. The new, high-resolution K-band images like PR Photo 33c/01, obtained with NAOS-CONICA at YEPUN, now for the first time facilitate the study of the elusive class of brown dwarfs in such a starburst environment. This will, among other things, offer very valuable insight into the fundamental problem of the total amount of matter that is deposited into stars in star-forming regions. An illustration of the potential of Adaptive Optics: PR Photo 33e/01 was obtained with the WFPC2 camera on the Hubble Space Telescope (HST) in the I-band (800 nm). It is a 400-sec exposure and shows the same sky region as the NAOS-CONICA image shown in PR Photo 33c/01. PR Photo 33f/01 provides a direct comparison of the two images (animated GIF). The HST image was extracted from archival data. HST is operated by NASA and ESA. Normally, the achievable image sharpness of a ground-based telescope is limited by the effect of atmospheric turbulence. However, the Adaptive Optics (AO) technique overcomes this problem, and when the AO instrument is optimized, the telescope produces images that are at the theoretical limit, i.e., as sharp as if it were in space. The theoretical image diameter is inversely proportional to the diameter of the main mirror of the telescope and proportional to the wavelength of the observed light. Thus, the larger the telescope and the shorter the wavelength, the sharper will be the images recorded. To illustrate this, a comparison of the NAOS-CONICA image of NGC 3603 (PR Photo 33c/01) is here made with a near-infrared image obtained earlier by the Hubble Space Telescope (HST) covering the same sky area (PR Photo 33e/01).
Both images are close to the theoretical limit ("diffraction limited"). However, the diameter of the VLT YEPUN mirror (8.2-m) is somewhat more than three times that of HST (2.4-m). This is "compensated" by the fact that the wavelength of the NAOS-CONICA image (2.2 µm) is about two-and-a-half times longer than that of the HST image (0.8 µm). The measured image diameters are therefore not too different, approx. 0.085 arcsec (HST) vs. approx. 0.068 arcsec (VLT). Although the exposure times are similar (300 sec for the VLT image; 400 sec for the HST image), the VLT image shows considerably fainter objects. This is partly due to the larger mirror, and partly because, by observing at a longer wavelength, NAOS-CONICA can detect a host of cool low-mass stars. The Becklin-Neugebauer object and its associated nebulosity: PR Photo 33g/01 is a composite (false-)colour image obtained by NAOS-CONICA of the region around the Becklin-Neugebauer object that is deeply embedded in the Orion Nebula. It is based on two exposures, one in the light of the shock-excited molecular hydrogen line (H2; wavelength 2.12 µm; here rendered as blue) and one in the broader K-band (2.2 µm; red) from ionized hydrogen. A third (green) image was produced as an "average" of the H2 and K-band images. The field-of-view measures 20 x 25 arcsec², cf. the 1 x 1 arcsec² square. North is up and east to the left. PR Photo 33g/01 is a composite image of the region around the Becklin-Neugebauer object (generally referred to as "BN"). With its associated Kleinmann-Low nebula, it is located in the Orion star-forming region at a distance of approx. 1500 light-years. It is the nearest high-mass star-forming complex. The immediate vicinity of BN (the brightest star in the image) is highly dynamic, with outflows and cloudlets glowing in the light of shock-excited molecular hydrogen.
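The scaling argument above (image diameter proportional to wavelength, inversely proportional to mirror diameter) can be checked with the Rayleigh criterion; plugging in the two telescopes' numbers reproduces the quoted figures to within rounding.

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0   # ~206265 arcsec per radian

def diffraction_limit(wavelength_m, mirror_diameter_m):
    """Rayleigh criterion: theta = 1.22 * lambda / D, in arcseconds."""
    return 1.22 * wavelength_m / mirror_diameter_m * RAD_TO_ARCSEC

vlt = diffraction_limit(2.2e-6, 8.2)   # NAOS-CONICA K-band on YEPUN
hst = diffraction_limit(0.8e-6, 2.4)   # HST WFPC2 I-band
print(f"VLT {vlt:.3f} arcsec, HST {hst:.3f} arcsec")
# VLT 0.068 arcsec, HST 0.084 arcsec - matching the measured diameters
```

The factor 1.22 comes from the first zero of the Airy diffraction pattern for a circular aperture.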
While many masers and outflows have been detected, the identification of their driving sources is still lacking. Deep images in the infrared K and H bands, as well as in the light of molecular hydrogen emission, were obtained with NAOS-CONICA at VLT YEPUN during the current tests. The new images reveal fainter and smaller structures in the cloud than ever before, along with more details of the embedded star cluster. These observations were only made possible by the infrared wavefront sensor of NAOS. This sensor is a unique capability of NAOS and makes it possible to perform adaptive optics on highly embedded infrared sources, which are practically invisible at optical wavelengths. Exploring the limits ESO PR Photo 33h/01 ESO PR Photo 33h/01 [Preview - JPEG: 400 x 260 pix - 44k] [Normal - JPEG: 800 x 520 pix - 112k] Caption : PR Photo 33h/01 shows a NAOS-CONICA image of the double star GJ 263 for which the angular distance between the two components is only 0.030 arcsec . The raw image, as directly recorded by CONICA, is shown in the middle, with a computer-processed version (using the ONERA MISTRAL myopic deconvolution algorithm) to the right. The recorded Point-Spread-Function (PSF) is shown to the left. For this, the C50S camera (0.01325 arcsec/pixel) was used, with an FeII filter at the near-infrared wavelength 1.257 µm. The exposure time was 10 seconds. ESO PR Photo 33i/01 ESO PR Photo 33i/01 [Preview - JPEG: 400 x 316 pix - 82k] [Normal - JPEG: 800 x 631 pix - 208k] Caption : PR Photo 33i/01 shows the near-diffraction-limited image of a 17-mag reference star , as recorded with NAOS-CONICA during a 200-second exposure in the K-band under 0.60 arcsec seeing. The 3D-profile is also shown.
ESO PR Photo 33j/01 ESO PR Photo 33j/01 [Preview - JPEG: 342 x 400 pix - 83k] [Normal - JPEG: 684 x 800 pix - 200k] Caption : PR Photo 33j/01 shows the central cluster in the 30 Doradus HII region in the Large Magellanic Cloud (LMC), a satellite of our Milky Way Galaxy. It was obtained by NAOS-CONICA in the infrared K-band during a 600-second exposure. The field shown here measures 15 x 15 arcsec². PR Photos 33h-j/01 provide three examples of images obtained during specific tests in which the observers pushed NAOS-CONICA towards the limits to explore the potential of the new instrument. Although, as expected, these images are not "perfect", they bear clear witness to the impressive performance, already at this early stage of the commissioning programme. The first, PR Photo 33h/01 , shows how diffraction-limited imaging with NAOS-CONICA at a wavelength of 1.257 µm makes it possible to view the individual components of a close double star, here the binary star GJ 263, for which the angular distance between the two stars is only 0.030 arcsec (i.e., the angle subtended by a 1-Euro coin at a distance of 160 km). Spatially resolved observations of binary stars like this one will allow the determination of orbital parameters, and ultimately of the masses of the individual binary components. After a few days of optimisation and calibration, NAOS-CONICA was able to "close the loop" on a reference star as faint as visual magnitude 17 and to provide a fine diffraction-limited K-band image with a Strehl ratio of 19% under 0.6 arcsec seeing. PR Photo 33i/01 provides a view of this image, both as the recorded frame and as a 3D-profile. The exposure time was 200 seconds. The ability to use reference stars as faint as this is an enormous asset for NAOS-CONICA - it will be the first instrument on an 8-10 m class telescope to offer this capability to non-specialist users .
This makes it possible to observe many sky fields with significant AO correction, without having to wait for the artificial laser guide star now being constructed for the VLT, see below. 30 Doradus in the Large Magellanic Cloud (LMC - a satellite of our Galaxy) is the most luminous giant HII region in the Local Group of Galaxies. It is powered by a massive star cluster with more than 100 ultra-luminous stars (of the "Wolf-Rayet"-type and O-stars). The NAOS-CONICA K-band image PR Photo 33j/01 resolves the dense stellar core of high-mass stars at the centre of the cluster, revealing thousands of lower-mass cluster members. Due to the lack of a sufficiently bright, isolated and single reference star in this sky field, the observers instead used the bright central star complex (R136a) to generate the corrective signals to the flexible mirror, needed to compensate for the atmospheric turbulence. However, R136a is not a round object; it is strongly elongated in the "5 hour"-direction. As a result, all star images seen in this photo are slightly elongated in the same direction as R136a. Nevertheless, this is a small penalty to pay for the large improvement obtained over a direct (seeing-limited) image! Adaptive Optics at ESO - a long tradition ESO PR Photo 33k/01 ESO PR Photo 33k/01 [Preview - JPEG: 400 x 320 pix - 144k] [Normal - JPEG: 800 x 639 pix - 344k] [Hi-Res - JPEG: 3000 x 2398 pix - 3.0M] ESO PR Photo 33l/01 ESO PR Photo 33l/01 [Preview - JPEG: 400 x 367 pix - 47k] [Normal - JPEG: 800 x 734 pix - 592k] [Hi-Res - JPEG: 3000 x 2754 pix - 3.9M] Caption : PR Photo 33k/01 is a view of the upper platform at the ESO Paranal Observatory with the four enclosures for the VLT 8.2-m Unit Telescopes and the partly subterranean Interferometric Laboratory (at centre). YEPUN (UT4) is housed in the enclosure to the right.
This photo was obtained in the evening of November 25, 2001, some hours before "First Light" was achieved for the new NAOS-CONICA instrument, mounted at that telescope. PR Photo 33l/01 shows NAOS-CONICA installed on the Nasmyth B platform of the 8.2-m VLT YEPUN Unit Telescope. From left to right: the telescope adapter/rotator (dark blue), NAOS (light blue) and the CONICA cryostat (red). The control electronics is housed in the white cabinet. "Adaptive Optics" is a modern buzzword of astronomy. It embodies the seemingly magic way by which ground-based telescopes can overcome the undesirable blurring effect of atmospheric turbulence that has plagued astronomers for centuries. With "Adaptive Optics", the images of stars and galaxies captured by these instruments are now as sharp as theoretically possible. Or, as the experts like to say, "it is as if a giant ground-based telescope is 'lifted' into space by a magic hand!" . Adaptive Optics works by means of a computer-controlled, flexible mirror that counteracts the image distortion induced by atmospheric turbulence in real time. The concept is not new. As early as 1989, the first Adaptive Optics system ever built for astronomy (aptly named "COME-ON" ) was installed on the 3.6-m telescope at the ESO La Silla Observatory, as the early fruit of a highly successful, continuing collaboration between ESO and French research institutes (ONERA and Observatoire de Paris). Ten years ago, ESO initiated an Adaptive Optics programme to serve the needs of its frontline VLT project. In 1993, the Adaptive Optics facility (ADONIS) was offered to Europe's astronomers, as the first instrument of its kind available to non-specialists. It is still in operation and continues to produce frontline results, cf. ESO PR 22/01. In 1997, ESO launched a collaborative effort with a French Consortium ( see below) for the development of the NAOS Nasmyth Adaptive Optics System .
With its associated CONICA IR high angular resolution camera , developed with a German Consortium ( see below), it provides a full high angular resolution capability on the VLT at Paranal. With the successful "First Light" on November 25, 2001, this project is now about to enter its operational phase. The advantages of NAOS-CONICA NAOS-CONICA belongs to a new generation of sophisticated adaptive optics (AO) devices, with certain advantages over past systems. In particular, NAOS is unique in being equipped with an infrared-sensitive Wavefront Sensor (WFS) that makes it possible to look inside regions that are highly obscured by interstellar dust and therefore unobservable in visible light. With its other WFS for visible light , NAOS should be able to achieve the highest degree of light concentration (the so-called "Strehl ratio") obtained at any existing 8-m class telescope. It also provides partially corrected images using reference stars (see PR Photo 33e/01 ) as faint as visual magnitude 18, fainter than demonstrated so far by any other AO system at a telescope of this size. A major advantage of CONICA is that it offers the large format and very high image quality required to fully match NAOS' performance , as well as a variety of observing modes. Moreover, NAOS-CONICA is the first astronomical AO instrument to be offered with a full end-to-end observing capability. It is completely integrated into the VLT dataflow system , with a seamless process from the preparation of the observations, including optimization of the instrument, to their execution at the telescope and on to automatic data quality assessment and storage in the VLT Archive. Collaboration and Institutes The Nasmyth Adaptive Optics System (NAOS) has been developed, with the support of INSU-CNRS, by a French Consortium in collaboration with ESO.
The French consortium consists of Office National d'Etudes et de Recherches Aérospatiales (ONERA) , Laboratoire d'Astrophysique de Grenoble (LAOG) and Observatoire de Paris (DESPA and DASGAL). The Project Manager is Gérard Rousset (ONERA), the Instrument Responsible is François Lacombe (Observatoire de Paris) and the Project Scientist is Anne-Marie Lagrange (Laboratoire d'Astrophysique de Grenoble). The CONICA Near-Infrared CAmera has been developed by a German Consortium, with an extensive ESO collaboration. The Consortium consists of Max-Planck-Institut für Astronomie (MPIA) (Heidelberg) and the Max-Planck-Institut für Extraterrestrische Physik (MPE) (Garching). The Principal Investigator (PI) is Rainer Lenzen (MPIA), with Reiner Hofmann (MPE) as Co-Investigator. Contacts Norbert Hubin European Southern Observatory Garching, Germany Tel.: +4989-3200-6517 email: nhubin@eso.org Alan Moorwood European Southern Observatory Garching, Germany Tel.: +4989-3200-6294 email: amoorwoo@eso.org Appendix: Technical Information about NAOS and CONICA Once fully tested, NAOS-CONICA will provide adaptive optics assisted imaging, polarimetry and spectroscopy in the 1 - 5 µm waveband. NAOS is an adaptive optics system equipped with both visible and infrared, Shack-Hartmann type, wavefront sensors. Provided a reference source (e.g., a star) with visual magnitude V brighter than 18 or K-magnitude brighter than 13 mag is available within 60 arcsec of the science target, NAOS-CONICA will ultimately offer diffraction limited resolution at the level of 0.030 arcsec at a wavelength of 1 µm, albeit with a large halo around the image core for the faint end of the reference source brightness. This may be compared with VLT median seeing images of 0.65 arcsec at a wavelength of 1 µm and exceptionally good images around 0.30 arcsec. NAOS-CONICA is installed at Nasmyth Focus B at VLT YEPUN (UT4). In about two years' time, this instrument will benefit from a sodium Laser Guide Star (LGS) facility. 
The creation of an artificial guide star is then possible in any sky field of interest, thereby providing much better sky coverage than is possible with natural guide stars alone. NAOS is equipped with two wavefront sensors, one in the visible part of the spectrum (0.45 - 0.95 µm) and one in the infrared part (1 - 2.5 µm); both are based on the Shack-Hartmann principle. The maximum correction frequency is about 500 Hz. There are 185 deformable mirror actuators plus a tip-tilt mirror correction. Together, they should make it possible to obtain a high Strehl ratio in the K-band (2.2 µm), up to 70%, depending on the actual seeing and waveband. Both the visible and IR wavefront sensors (WFS) have been optimized to provide AO correction for faint objects/stars. The visible WFS provides a low-order correction for objects as faint as visual magnitude ~ 18. The IR WFS will provide a low-order correction for objects as faint as K-magnitude 13. CONICA is a high-performance instrument in terms of image quality and detector sensitivity. It has been designed to make optimal use of the AO system. Inherent mechanical flexures are corrected on-line by NAOS through a pointing model. It offers a variety of modes, e.g., direct imaging, polarimetry, slit spectroscopy, coronagraphy and spectro-imaging. The ESO PR Video Clips service provides visitors to the ESO website with "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was ESO PR Video Clip 06/01, about observations of a binary star (8 October 2001). Information about other ESO videos is also available on the web.
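The Strehl ratios quoted above (19% on a faint star, up to 70% in the K-band) can be related to residual wavefront error through the extended Maréchal approximation, S ≈ exp(−(2πσ/λ)²). This relation is standard in adaptive optics but is not stated in the release, so the numbers below are illustrative only:

```python
import math

def strehl_ratio(rms_wavefront_error_nm: float, wavelength_nm: float) -> float:
    """Extended Marechal approximation: S ≈ exp(-(2*pi*sigma/lambda)^2)."""
    phase_rms = 2.0 * math.pi * rms_wavefront_error_nm / wavelength_nm
    return math.exp(-phase_rms ** 2)

# Residual wavefront error implied by S = 0.70 in the K-band (2200 nm):
target = 0.70
sigma = 2200.0 / (2.0 * math.pi) * math.sqrt(-math.log(target))
print(f"S = 0.70 at 2.2 um implies sigma ≈ {sigma:.0f} nm RMS")
print(f"check: S = {strehl_ratio(sigma, 2200.0):.2f}")
```

The same residual error gives a much lower Strehl ratio at shorter wavelengths, which is one reason AO correction is easier in the infrared.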
Fast computational scheme of image compression for 32-bit microprocessors
NASA Technical Reports Server (NTRS)
Kasperovich, Leonid
1994-01-01
This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG International Standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors may compete with DCT-based image compression hardware.
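For reference, the 2-D DCT that the abstract's integer scheme approximates can be written as two passes of a 1-D DCT-II over an 8×8 block. This is a plain floating-point sketch, not the paper's shift-and-add algorithm:

```python
import math

N = 8  # JPEG works on 8x8 blocks

def dct_1d(vector):
    """Orthonormal 1-D DCT-II of a length-N sequence."""
    out = []
    for k in range(N):
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        s = sum(vector[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        out.append(scale * s)
    return out

def dct_2d(block):
    """Separable 2-D DCT: 1-D DCT on each row, then on each column."""
    rows = [dct_1d(row) for row in block]
    cols = [dct_1d([rows[i][j] for i in range(N)]) for j in range(N)]
    return [[cols[j][i] for j in range(N)] for i in range(N)]

# A flat block concentrates all its energy in the DC coefficient:
flat = [[100.0] * N for _ in range(N)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0]))  # DC term: 8 * 100 = 800, all other terms ~0
```

The separable form already cuts the work from O(N⁴) to O(N³) per block; fast integer factorizations such as the one in the paper reduce the constant much further.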
HUBBLE SNAPS 'FAMILY PORTRAIT'
NASA Technical Reports Server (NTRS)
2002-01-01
The Hubble Space Telescope's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) has peered into the Cone Nebula, revealing a stunning image of six baby sun-like stars surrounding their mother, a bright, massive star. Known as NGC 2264 IRS, the massive star triggered the creation of these baby stars by releasing high-speed particles of dust and gas during its formative years. The image on the left, taken in visible light by a ground-based telescope, shows the Cone Nebula, located 2,500 light-years away in the constellation Monoceros. The white box pinpoints the location of the star nursery. The nursery cannot be seen in this image because dust and gas obscure it. The large cone of cold molecular hydrogen and dust rising from the lefthand edge of the image was created by the outflow from NGC 2264 IRS. The NICMOS image on the right shows this massive star - the brightest source in the region - and the stars formed by its outflow. The baby stars are only 0.04 to 0.08 light-years away from their brilliant mother. The rings surrounding the massive star and the spikes emanating from it are not real features of the region. This pattern demonstrates the near-perfect optical performance of NICMOS: a near-perfect optical system bends light from point-like sources, such as NGC 2264 IRS, into diffraction patterns of rings and spikes. This false color image was taken with 1.1-, 1.6-, and 2.2-micron filters on April 28, 1997. Credits: Rodger Thompson, Marcia Rieke and Glenn Schneider (University of Arizona), and NASA. Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from ftp.stsci.edu in /pubinfo.
Volcanoes of the Wrangell Mountains and Cook Inlet region, Alaska: selected photographs
Neal, Christina A.; McGimsey, Robert G.; Diggles, Michael F.
2001-01-01
Alaska is home to more than 40 active volcanoes, many of which have erupted violently and repeatedly in the last 200 years. This CD-ROM contains 97 digitized color 35-mm images which represent a small fraction of thousands of photographs taken by Alaska Volcano Observatory scientists, other researchers, and private citizens. The photographs were selected to portray Alaska's volcanoes, to document recent eruptive activity, and to illustrate the range of volcanic phenomena observed in Alaska. These images are for use by the interested public, multimedia producers, desktop publishers, and the high-end printing industry. The digital images are stored in the 'images' folder and can be read across Macintosh, Windows, DOS, OS/2, SGI, and UNIX platforms with applications that can read JPG (JPEG - Joint Photographic Experts Group format) or PCD (Kodak's PhotoCD (YCC) format) files. Throughout this publication, the image numbers match among the file names, figure captions, thumbnail labels, and other references. Also included on this CD-ROM are Windows and Macintosh viewers and engines for keyword searches (Adobe Acrobat Reader with Search). At the time of this publication, Kodak's policy on the distribution of color-management files is still unresolved, and so none is included on this CD-ROM. However, using the Universal Ektachrome or Universal Kodachrome transforms found in your software will provide excellent color. In addition to PhotoCD (PCD) files, this CD-ROM contains large (14.2'x19.5') and small (4'x6') screen-resolution (72 dots per inch; dpi) images in JPEG format. These undergo downsizing and compression relative to the PhotoCD images.
NASA Astrophysics Data System (ADS)
Yang, Keon Ho; Jung, Haijo; Kang, Won-Suk; Jang, Bong Mun; Kim, Joong Il; Han, Dong Hoon; Yoo, Sun-Kook; Yoo, Hyung-Sik; Kim, Hee-Joung
2006-03-01
The wireless mobile service with a high bit rate using CDMA-1X EVDO is now widely used in Korea, and mobile devices are increasingly replacing conventional communication mechanisms. We have developed a web-based mobile system that communicates patient information and images over CDMA-1X EVDO for emergency diagnosis. It is composed of a mobile web application system using Microsoft Windows Server 2003 and Internet Information Services, together with a mobile web PACS database managing patient information and images, developed using Microsoft Access 2003. The wireless mobile emergency patient information and image communication system was developed using Microsoft Visual Studio .NET, and a JPEG 2000 ActiveX control for the PDA phone was developed using Microsoft Embedded Visual C++. CDMA-1X EVDO is used for connections between the mobile web servers and the PDA phone. This system allows fast access to the patient information database, storing both medical images and patient information, anytime and anywhere. In particular, images were compressed into the JPEG 2000 format and transmitted from a mobile web PACS inside the hospital to a radiologist using a PDA phone outside the hospital. The system also displays radiological images as well as physiological signal data, including blood pressure, vital signs and so on, in the web browser of the PDA phone, so radiologists can diagnose more effectively. Good results were obtained with an RW-6100 PDA phone in the university hospital system of the Sinchon Severance Hospital in Korea.
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding, and optimize the encoder parameters. Our proposed algorithm outperforms existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm achieves better rate-distortion performance than JPEG 2000 at low bit rates.
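The matching pursuit decomposition itself (the stage that produces the atom positions and coefficients whose entropy coding the paper optimizes) is a greedy loop: at each step the dictionary atom most correlated with the current residual is selected and its contribution subtracted. A minimal sketch:

```python
def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit over a list of unit-norm atoms.

    Returns (atom index, coefficient) pairs and the final residual.
    """
    residual = list(signal)
    picks = []
    for _ in range(n_atoms):
        # Correlate every atom with the current residual.
        corrs = [sum(r * a for r, a in zip(residual, atom)) for atom in dictionary]
        # Select the atom with the largest magnitude correlation.
        best = max(range(len(dictionary)), key=lambda i: abs(corrs[i]))
        coef = corrs[best]
        picks.append((best, coef))
        # Remove that atom's contribution from the residual.
        residual = [r - coef * a for r, a in zip(residual, dictionary[best])]
    return picks, residual

# Toy example: a 4-sample signal decomposed against the standard basis.
basis = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
signal = [0.0, 3.0, 0.0, -1.0]
picks, residual = matching_pursuit(signal, basis, 2)
print(picks)  # [(1, 3.0), (3, -1.0)] — largest components first
```

In a real codec the dictionary is redundant (e.g. anisotropic Gabor-like atoms over all image positions), and the bitstream cost of transmitting each (position, coefficient) pair is exactly the tradeoff the paper studies.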
About a method for compressing x-ray computed microtomography data
NASA Astrophysics Data System (ADS)
Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš
2018-04-01
The management of scientific data is of high importance, especially for experimental techniques that produce large data volumes. One such technique is x-ray computed tomography (CT), whose community has introduced advanced data formats that allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), using images acquired from various types of samples. This study covers parallel-beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques, and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
NASA Astrophysics Data System (ADS)
1999-11-01
First Images from FORS2 at VLT KUEYEN on Paranal The first major astronomical instrument to be installed at the ESO Very Large Telescope (VLT) was FORS1 ( FOcal Reducer and Spectrograph) in September 1998. Immediately after being attached to the Cassegrain focus of the first 8.2-m Unit Telescope, ANTU , it produced a series of spectacular images, cf. ESO PR 14/98. Many important observations have since been made with this outstanding facility. Now FORS2 , its powerful twin, has been installed at the second VLT Unit Telescope, KUEYEN . It is the fourth major instrument at the VLT, after FORS1 , ISAAC and UVES. The FORS2 Commissioning Team that is busy installing and testing this large and complex instrument reports that "First Light" was successfully achieved on October 29, 1999, only two days after FORS2 was first mounted at the Cassegrain focus. Since then, various observation modes have been carefully tested, including normal and high-resolution imaging, echelle and multi-object spectroscopy, as well as fast photometry with millisecond time resolution. A number of fine images were obtained during this work, some of which are made available with the present Press Release. The FORS instruments ESO PR Photo 40a/99 ESO PR Photo 40a/99 [Preview - JPEG: 400 x 345 pix - 203k] [Normal - JPEG: 800 x 689 pix - 563kb] [Full-Res - JPEG: 1280 x 1103 pix - 666kb] Caption to PR Photo 40a/99: This digital photo shows the twin instruments, FORS2 at KUEYEN (in the foreground) and FORS1 at ANTU, seen in the background through the open ventilation doors in the two telescope enclosures. Although they look alike, the two instruments have specific functions, as described in the text. FORS1 and FORS2 are the products of one of the most thorough and advanced technological studies ever made of a ground-based astronomical instrument. They have been specifically designed to investigate the faintest and most remote objects in the universe.
They are "multi-mode instruments" that may be used in several different observation modes. FORS2 is largely identical to FORS1 , but there are a number of important differences. For example, it contains a Mask Exchange Unit (MXU) for laser-cut star-plates [1] that may be inserted at the focus, allowing a large number of spectra of different objects, in practice up to about 70, to be taken simultaneously. Highly sophisticated software assigns slits to individual objects in an optimal way, ensuring a high degree of observing efficiency. Instead of the polarimetry optics found in FORS1 , FORS2 has new grisms that allow the use of higher spectral resolutions. The FORS project was carried out under ESO contract by a consortium of three German astronomical institutes: the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. The participating institutes have invested a total of about 180 man-years of work in this unique programme. The photos below demonstrate some of the impressive possibilities of this new instrument. They are based on observations with the FORS2 standard resolution collimator (field size 6.8 x 6.8 arcmin = 2048 x 2048 pixels; 1 pixel = 0.20 arcsec). In addition, observations of the Crab pulsar demonstrate a new observing mode, high-speed photometry. Protostar HH-34 in Orion ESO PR Photo 40b/99 ESO PR Photo 40b/99 [Preview - JPEG: 400 x 444 pix - 220kb] [Normal - JPEG: 800 x 887 pix - 806kb] [Full-Res - JPEG: 2000 x 2217 pix - 3.6Mb] The Area around HH-34 in Orion ESO PR Photo 40c/99 ESO PR Photo 40c/99 [Preview - JPEG: 400 x 494 pix - 262kb] [Full-Res - JPEG: 802 x 991 pix - 760 kb] The HH-34 Superjet in Orion (centre) PR Photo 40b/99 shows a three-colour composite of the young object Herbig-Haro 34 (HH-34) , now in the protostar stage of evolution. It is based on CCD frames obtained with the FORS2 instrument in imaging mode, on November 2 and 6, 1999.
This object has a remarkable, very complicated appearance that includes two opposite jets that ram into the surrounding interstellar matter. This structure is produced by a machine-gun-like blast of "bullets" of dense gas ejected from the star at high velocities (approaching 250 km/sec). This seems to indicate that the star experiences episodic "outbursts" when large chunks of material fall onto it from a surrounding disk. HH-34 is located at a distance of approx. 1,500 light-years, near the famous Orion Nebula , one of the most productive star birth regions. Note also the enigmatic "waterfall" to the upper left, a feature that is still unexplained. PR Photo 40c/99 is an enlargement of a smaller area around the central object. Technical information : Photo 40b/99 is based on a composite of three images taken through three different filters: B (wavelength 429 nm; Full-Width-Half-Maximum (FWHM) 88 nm; exposure time 10 min; here rendered as blue), H-alpha (centered on the hydrogen emission line at wavelength 656 nm; FWHM 6 nm; 30 min; green) and S II (centered on the emission lines of ionized sulphur at wavelength 673 nm; FWHM 6 nm; 30 min; red) during a period of 0.8 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. N 70 Nebula in the Large Magellanic Cloud ESO PR Photo 40d/99 ESO PR Photo 40d/99 [Preview - JPEG: 400 x 444 pix - 360kb] [Normal - JPEG: 800 x 887 pix - 1.0Mb] [Full-Res - JPEG: 1997 x 2213 pix - 3.4Mb] The N 70 Nebula in the LMC ESO PR Photo 40e/99 ESO PR Photo 40e/99 [Preview - JPEG: 400 x 485 pix - 346kb] [Full-Res - JPEG: 986 x 1196 pix - 1.2Mb] The N70 Nebula in the LMC (detail) PR Photo 40d/99 shows a three-colour composite of the N 70 nebula.
It is a "Super Bubble" in the Large Magellanic Cloud (LMC) , a satellite galaxy to the Milky Way system, located in the southern sky at a distance of about 160,000 light-years. This photo is based on CCD frames obtained with the FORS2 instrument in imaging mode in the morning of November 5, 1999. N 70 is a luminous bubble of interstellar gas, measuring about 300 light-years in diameter. It was created by winds from hot, massive stars and supernova explosions, and the interior is filled with tenuous, hot expanding gas. An object like N 70 provides astronomers with an excellent opportunity to explore the connection between the lifecycles of stars and the evolution of galaxies. Very massive stars profoundly affect their environment. They stir and mix the interstellar clouds of gas and dust, and they leave their mark in the compositions and locations of future generations of stars and star systems. PR Photo 40e/99 is an enlargement of a smaller area of this nebula. Technical information : Photo 40d/99 is based on a composite of three images taken through three different filters: B (429 nm; FWHM 88 nm; 3 min; here rendered as blue), V (554 nm; FWHM 111 nm; 3 min; green) and H-alpha (656 nm; FWHM 6 nm; 3 min; red) during a period of 1.0 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left.
The Crab Nebula in Taurus ESO PR Photo 40f/99 ESO PR Photo 40f/99 [Preview - JPEG: 400 x 446 pix - 262k] [Normal - JPEG: 800 x 892 pix - 839 kb] [Full-Res - JPEG: 2036 x 2269 pix - 3.6Mb] The Crab Nebula in Taurus ESO PR Photo 40g/99 ESO PR Photo 40g/99 [Preview - JPEG: 400 x 444 pix - 215kb] [Full-Res - JPEG: 817 x 907 pix - 485 kb] The Crab Nebula in Taurus (detail) PR Photo 40f/99 shows a three-colour composite of the well-known Crab Nebula (also known as "Messier 1" ), as observed with the FORS2 instrument in imaging mode in the morning of November 10, 1999. It is the remnant of a supernova explosion at a distance of about 6,000 light-years, observed almost 1000 years ago, in the year 1054. It contains a neutron star near its center that spins 30 times per second around its axis (see below). PR Photo 40g/99 is an enlargement of a smaller area. More information on the Crab Nebula and its pulsar is available on the web, e.g. at a dedicated website for Messier objects. In this picture, the green light is predominantly produced by hydrogen emission from material ejected by the star that exploded. The blue light is predominantly emitted by very high-energy ("relativistic") electrons that spiral in a large-scale magnetic field (so-called synchrotron emission ). It is believed that these electrons are continuously accelerated and ejected by the rapidly spinning neutron star at the centre of the nebula, which is the remnant core of the exploded star. This pulsar has been identified with the lower/right of the two close stars near the geometric center of the nebula, immediately left of the small arc-like feature, best seen in PR Photo 40g/99 .
Technical information : Photo 40f/99 is based on a composite of three images taken through three different optical filters: B (429 nm; FWHM 88 nm; 5 min; here rendered as blue), R (657 nm; FWHM 150 nm; 1 min; green) and S II (673 nm; FWHM 6 nm; 5 min; red) during periods of 0.65 arcsec (R, S II) and 0.80 (B) seeing, respectively. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. The High Time Resolution mode (HIT) of FORS2 ESO PR Photo 40h/99 ESO PR Photo 40h/99 [Preview - JPEG: 400 x 304 pix - 90kb] [Normal - JPEG: 707 x 538 pix - 217kb] Time Sequence of the Pulsar in the Crab Nebula ESO PR Photo 40i/99 ESO PR Photo 40i/99 [Preview - JPEG: 400 x 324 pix - 42kb] [Normal - JPEG: 800 x 647 pix - 87kb] Lightcurve of the Pulsar in the Crab Nebula In combination with the large light collecting power of the VLT Unit Telescopes, the high time resolution (25 nsec = 0.000000025 sec) of the ESO-developed FIERA CCD-detector controller opens a new observing window for celestial objects that undergo light intensity variations on very short time scales. A first implementation of this type of observing mode was tested with FORS2 during the first commissioning phase, by means of one of the most fascinating astronomical objects, the rapidly spinning neutron star in the Crab Nebula . It is also known as the Crab pulsar and is an exceedingly dense object that represents an extreme state of matter - it weighs as much as the Sun, but measures only about 30 km across. The result presented here was obtained in the so-called trailing mode , during which one of the rectangular openings of the Multi-Object Spectroscopy (MOS) assembly within FORS2 is placed in front of the lower end of the field. In this way, the entire surface of the CCD is covered, except the opening in which the object under investigation is positioned. 
By rotating this opening, some neighbouring objects (e.g. stars for alignment) may be observed simultaneously. As soon as the shutter is opened, the charges on the chip are progressively shifted upwards, one pixel at a time, until those first collected in the bottom row behind the opening have reached the top row. Then the entire CCD is read out and the digital data with the full image is stored in the computer. In this way, successive images (or spectra) of the object are recorded in the same frame, displaying the intensity variation with time during the exposure. For this observation, the total exposure lasted 2.5 seconds. During this time interval the image of the pulsar (and those of some neighbouring stars) was shifted 2048 times over the 2048 rows of the CCD. Each individual exposure therefore lasted about 1.2 msec (0.0012 sec), corresponding to a nominal time-resolution of 2.4 msec (2 pixels). Faster or slower time resolutions are possible by increasing or decreasing the shift and read-out rate [2]. In ESO PR Photo 40h/99 , the continuous lines in the top and bottom half are produced by normal stars of constant brightness, while the series of dots represents the individual pulses of the Crab pulsar, one every 33 milliseconds (i.e. the neutron star rotates around its axis 30 times per second). It is also obvious that these dots are alternately brighter and fainter: they mirror the double-peaked profile of the light pulses, as shown in ESO PR Photo 40i/99 . In this diagram, the time increases along the abscissa axis (1 pixel = 1.2 msec) and the momentary intensity (uncalibrated) is along the ordinate axis. One full revolution of the neutron star corresponds to the distance from one high peak to the next, and the diagram therefore covers six consecutive revolutions (about 200 milliseconds).
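The timing arithmetic in this description can be checked with a short sketch (the figures are taken from the text; the variable names are ours):

```python
# Trailing-mode timing for the FORS2 Crab pulsar observation described above.
# All input figures come from the text; this only verifies the arithmetic.

TOTAL_EXPOSURE_S = 2.5    # total exposure time of the frame
N_ROWS = 2048             # number of CCD rows the image was shifted over
PULSE_PERIOD_S = 0.033    # one Crab pulsar revolution (30 rotations/second)

# Each row is exposed for the total time divided by the number of shifts.
row_exposure_s = TOTAL_EXPOSURE_S / N_ROWS        # ~1.2 ms per row
nominal_resolution_s = 2 * row_exposure_s         # ~2.4 ms (2 pixels)

# How finely each pulse is sampled, and how many pulses fit in the frame.
pixels_per_revolution = PULSE_PERIOD_S / row_exposure_s
revolutions_in_frame = TOTAL_EXPOSURE_S / PULSE_PERIOD_S

print(f"row exposure: {row_exposure_s * 1e3:.2f} ms")
print(f"nominal time resolution: {nominal_resolution_s * 1e3:.2f} ms")
print(f"pixels per pulsar revolution: {pixels_per_revolution:.1f}")
print(f"pulses recorded in one frame: {revolutions_in_frame:.0f}")
```

With these numbers each 33-ms pulse spans about 27 rows, and roughly 75 consecutive pulses are recorded in a single 2.5-second frame.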
Following thorough testing, this new observing mode will make it possible to investigate the brightness variations of this and many other objects in great detail, in order to gain new and fundamental insights into the physical mechanisms that produce the radiation pulses. In addition, high time resolution spectroscopy of rapidly varying phenomena is foreseen. Pushing it to the limits with an 8.2-m telescope like KUEYEN will be a real challenge for the observers and will most certainly lead to great and exciting research projects in various fields of modern astrophysics. Technical information : The frame shown in Photo 40h/99 was obtained during a total exposure time of 2.5 sec without any optical filter. During this time, the charges on the CCD were shifted over 2048 rows; each row was therefore exposed for 1.2 msec. The bright continuous line comes from the star next to the pulsar; the orientation was such that the "observation slit" was placed over two neighbouring stars. Preliminary data reduction: 11 pixels were added across the pulsar image to increase the signal-to-noise ratio and the background light from the Crab Nebula was subtracted for the same reason. Division by a brighter star (also background-subtracted, but not shown in the image) helped to reduce the influence of the Earth's atmosphere. Notes [1] The masks are produced by the Mask Manufacturing Unit (MMU) built by the VIRMOS Consortium for the VIMOS and NIRMOS instruments that will be installed at the VLT MELIPAL and YEPUN telescopes, respectively. [2] The time resolution achieved during the present test was limited by the maximum charge transfer rate of this particular CCD chip; in the future, FORS2 may be equipped with a new chip with a rate that is up to 20 times faster. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
NASA Astrophysics Data System (ADS)
2001-01-01
Last year saw very good progress at ESO's Paranal Observatory , the site of the Very Large Telescope (VLT). The third and fourth 8.2-m Unit Telescopes, MELIPAL and YEPUN, had "First Light" (cf. PR 01/00 and PR 18/00 ), while the first two, ANTU and KUEYEN , were busy collecting first-class data for hundreds of astronomers. Meanwhile, work continued towards the next phase of the VLT project, the combination of the telescopes into the VLT Interferometer. The test instrument, VINCI (cf. PR 22/00 ) is now being installed in the VLTI Laboratory at the centre of the observing platform on the top of Paranal. Below is a new collection of video sequences and photos that illustrate the latest developments at the Paranal Observatory. They were obtained by the EPR Video Team in December 2000. The photos are available in different formats, including "high-resolution" that is suitable for reproduction purposes. A related ESO Video News Reel for professional broadcasters will soon become available and will be announced via the usual channels. Overview Paranal Observatory (Dec. 2000) Video Clip 02a/01 [MPEG - 4.5Mb] ESO PR Video Clip 02a/01 "Paranal Observatory (December 2000)" (4875 frames/3:15 min) [MPEG Video+Audio; 160x120 pix; 4.5Mb] [MPEG Video+Audio; 320x240 pix; 13.5 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02a/01 shows some of the construction activities at the Paranal Observatory in December 2000, beginning with a general view of the site. Then follow views of the Residencia , a building that has been designed by Architects Auer and Weber in Munich - it integrates very well into the desert, creating a welcome recreational site for staff and visitors in this harsh environment. The next scenes focus on the "stations" for the auxiliary telescopes for the VLTI and the installation of two delay lines in the 140-m long underground tunnel.
The following part of the video clip shows the start-up of the excavation work for the 2.6-m VLT Survey Telescope (VST) as well as the location known as the "NTT Peak", now under consideration for the installation of the 4-m VISTA telescope. The last images are of the second 8.2-m Unit Telescope, KUEYEN, that has been in full use by the astronomers with the UVES and FORS2 instruments since April 2000. ESO PR Photo 04a/01 ESO PR Photo 04a/01 [Preview - JPEG: 466 x 400 pix - 58k] [Normal - JPEG: 931 x 800 pix - 688k] [Hires - JPEG: 3000 x 2577 pix - 7.6M] Caption : PR Photo 04a/01 shows an afternoon view from the Paranal summit towards East, with the Base Camp and the new Residencia on the slope to the right, above the valley in the shadow of the mountain. ESO PR Photo 04b/01 ESO PR Photo 04b/01 [Preview - JPEG: 791 x 400 pix - 89k] [Normal - JPEG: 1582 x 800 pix - 1.1M] [Hires - JPEG: 3000 x 1517 pix - 3.6M] PR Photo 04b/01 shows the ramp leading to the main entrance to the partly subterranean Residencia , with the steel skeleton for the dome over the central area in place. ESO PR Photo 04c/01 ESO PR Photo 04c/01 [Preview - JPEG: 498 x 400 pix - 65k] [Normal - JPEG: 995 x 800 pix - 640k] [Hires - JPEG: 3000 x 2411 pix - 6.6M] PR Photo 04c/01 is an indoor view of the reception hall under the dome, looking towards the main entrance. ESO PR Photo 04d/01 ESO PR Photo 04d/01 [Preview - JPEG: 472 x 400 pix - 61k] [Normal - JPEG: 944 x 800 pix - 632k] [Hires - JPEG: 3000 x 2543 pix - 5.8M] PR Photo 04d/01 shows the ramps from the reception area towards the rooms. The VLT Interferometer The Delay Lines constitute a most important element of the VLT Interferometer , cf. PR Photos 26a-e/00. At this moment, two Delay Lines are operational on site. A third system will be integrated early this year. The VLTI Delay Line is located in an underground tunnel that is 168 metres long and 8 metres wide.
This configuration has been designed to accommodate up to eight Delay Lines, including their transfer optics in an ideal environment: stable temperature, high degree of cleanliness, low levels of straylight, low air turbulence. The positions of the Delay Line carriages are computed to adjust the Optical Path Lengths requested for the fringe pattern observation. The positions are controlled in real time by a laser metrology system, specially developed for this purpose. The position precision is about 20 nm (1 nm = 10^-9 m, or one millionth of a millimetre) over a distance of 120 metres. The maximum velocity is 0.50 m/s in positioning mode and 0.05 m/s in operation. The system is designed for 25 years of operation and to survive earthquakes up to magnitude 8.6 on the Richter scale. The VLTI Delay Line is a three-year project, carried out by ESO in collaboration with Dutch Space Holdings (formerly Fokker Space) and TPD-TNO . VLTI Delay Lines (December 2000) - ESO PR Video Clip 02b/01 [MPEG - 3.6Mb] ESO PR Video Clip 02b/01 "VLTI Delay Lines (December 2000)" (2000 frames/1:20 min) [MPEG Video+Audio; 160x120 pix; 3.6Mb] [MPEG Video+Audio; 320x240 pix; 13.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02b/01 shows the Delay Lines of the VLT Interferometer facility at Paranal during tests. One of the carriages is moving on 66-metre long rectified rails, driven by a linear motor. The carriage is equipped with three wheels in order to preserve high guidance accuracy. Another important element is the Cat's Eye that reflects the light from the telescope to the VLT instrumentation. This optical system is made of aluminium (including the mirrors) to avoid thermo-mechanical problems. ESO PR Photo 04e/01 ESO PR Photo 04e/01 [Preview - JPEG: 400 x 402 pix - 62k] [Normal - JPEG: 800 x 804 pix - 544k] [Hires - JPEG: 3000 x 3016 pix - 6.2M] Caption : PR Photo 04e/01 shows one of the 30 "stations" for the movable 1.8-m Auxiliary Telescopes.
When one of these telescopes is positioned ("parked") on top of it, the light will be guided through the hole towards the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04f/01 ESO PR Photo 04f/01 [Preview - JPEG: 568 x 400 pix - 96k] [Normal - JPEG: 1136 x 800 pix - 840k] [Hires - JPEG: 3000 x 2112 pix - 4.6M] PR Photo 04f/01 shows a general view of the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04g/01 ESO PR Photo 04g/01 [Preview - JPEG: 406 x 400 pix - 62k] [Normal - JPEG: 812 x 800 pix - 448k] [Hires - JPEG: 3000 x 2956 pix - 5.5M] PR Photo 04g/01 shows one of the Delay Line carriages in parking position. The "NTT Peak" The "NTT Peak" is a mountain top located about 2 km to the north of Paranal. It received this name when ESO considered moving the 3.58-m New Technology Telescope from La Silla to this peak. The possibility of installing the 4-m VISTA telescope (cf. PR 03/00 ) on this peak is now being discussed. ESO PR Photo 04h/01 ESO PR Photo 04h/01 [Preview - JPEG: 630 x 400 pix - 89k] [Normal - JPEG: 1259 x 800 pix - 1.1M] [Hires - JPEG: 3000 x 1907 pix - 5.2M] PR Photo 04h/01 shows the view from the "NTT Peak" towards south, with the Paranal mountain and the VLT enclosures in the background. ESO PR Photo 04i/01 ESO PR Photo 04i/01 [Preview - JPEG: 516 x 400 pix - 50k] [Normal - JPEG: 1031 x 800 pix - 664k] [Hires - JPEG: 3000 x 2328 pix - 6.0M] PR Photo 04i/01 is a view towards the "NTT Peak" from the top of the Paranal mountain. The access road and the concrete pillar that was used to support a site testing telescope at the top of this peak are seen. This is the caption to ESO PR Photos 04a-i/01 and PR Video Clips 02a-b/01 . They may be reproduced, if credit is given to the European Southern Observatory. The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory.
The most recent clip was: ESO PR Video Clip 01/01 about the Physics On Stage Festival (11 January 2001) . Information is also available on the web about other ESO videos.
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Webb, Edward L.; Evangelista, Arlene
2000-01-01
Studies that utilize astronaut-acquired orbital photographs for visual or digital classification require high-quality data to ensure accuracy. The majority of images available must be digitized from film and electronically transferred to scientific users. This study examined the effect of scanning spatial resolution (1200, 2400 pixels per inch [21.2 and 10.6 microns/pixel]), scanning density range option (Auto, Full) and compression ratio (non-lossy [TIFF], and lossy JPEG 10:1, 46:1, 83:1) on digital classification results of an orbital photograph from the NASA - Johnson Space Center archive. Qualitative results suggested that 1200 ppi was acceptable for visual interpretive uses for major land cover types. Moreover, Auto scanning density range was superior to Full density range. Quantitative assessment of the processing steps indicated that, while 2400 ppi scanning spatial resolution resulted in more classified polygons as well as a substantially greater proportion of polygons < 0.2 ha, overall agreement between 1200 ppi and 2400 ppi was quite high. JPEG compression up to approximately 46:1 also did not appear to have a major impact on quantitative classification characteristics. We conclude that both 1200 and 2400 ppi scanning resolutions are acceptable options for this level of land cover classification, as well as a compression ratio at or below approximately 46:1. Auto range density should always be used during scanning because it acquires more of the information from the film. The particular combination of scanning spatial resolution and compression level will require a case-by-case decision and will depend upon memory capabilities, analytical objectives and the spatial properties of the objects in the image.
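The resolution and compression figures quoted above follow from simple unit arithmetic; a minimal sketch (the function names are ours, not from the study):

```python
# Unit arithmetic behind the study's scanning and compression parameters.
# 1 inch = 25400 microns, so pixel pitch on film = 25400 / ppi.

def microns_per_pixel(ppi: int) -> float:
    """Pixel pitch on the scanned film for a given scanning resolution."""
    return 25400.0 / ppi

def compression_ratio(uncompressed_bytes: int, compressed_bytes: int) -> float:
    """Ratio quoted in the text as e.g. 10:1, 46:1 or 83:1."""
    return uncompressed_bytes / compressed_bytes

print(f"1200 ppi -> {microns_per_pixel(1200):.1f} um/pixel")  # ~21.2
print(f"2400 ppi -> {microns_per_pixel(2400):.1f} um/pixel")  # ~10.6
```

Doubling the scanning resolution from 1200 to 2400 ppi halves the pixel pitch but quadruples the pixel count, which is why the choice is traded off against storage and analysis cost in the study.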
Image editing with Adobe Photoshop 6.0.
Caruso, Ronald D; Postel, Gregory C
2002-01-01
The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002
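The file sizes quoted above follow from one byte per pixel for 8-bit, single-layer grayscale images; a minimal sketch (the example dimensions are illustrative assumptions, and TIFF header overhead and any compression are ignored):

```python
# Why 8-bit conversion and cropping both shrink files: an 8-bit single-layer
# grayscale image stores one byte per pixel, so size scales with pixel count.

def gray8_bytes(width_px: int, height_px: int) -> int:
    """Approximate uncompressed size of an 8-bit single-layer grayscale image."""
    return width_px * height_px  # 1 byte per pixel

# A hypothetical image printed at 3 x 3 inches and 300 pixels per inch
# is 900 x 900 pixels:
print(gray8_bytes(900, 900))  # 810000 bytes, roughly 0.8 MB
```

Sizes of this order sit at the lower end of the 700 KB to 3 MB range quoted for publication-quality images, which is why lossy JPEG at high quality is used to reach the 10-100 KB range for presentations.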
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands
Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.
2000-01-01
The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution images in jpeg format (IMAGES). The folder LOCATIONS contains TIFF images showing exact positions of easily identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
First Digit Law and Its Application to Digital Forensics
NASA Astrophysics Data System (ADS)
Shi, Yun Q.
Digital data forensics, which gathers evidence of data composition, origin, and history, is crucial in our digital world. Although this new research field is still in its infancy, it has started to attract increasing attention from the multimedia-security research community. This lecture addresses the first digit law and its applications to digital forensics. First, the Benford and generalized Benford laws, referred to as the first digit law, are introduced. Then, the application of the first digit law to detection of JPEG compression history for a given BMP image and to detection of double JPEG compression is presented. Finally, applying the first digit law to detection of double MPEG video compression is discussed. It is expected that the first digit law may play an active role in other tasks of digital forensics. The lesson learned is that statistical models play an important role in digital forensics, and for a specific forensic task different models may provide different performance.
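The generalized Benford law referred to above is commonly written as p(d) = N * log10(1 + 1/(s + d^q)); a minimal sketch follows (the default parameters reduce it to the classical Benford law; the values of N, s and q fitted to JPEG DCT coefficients in the forensics literature are not reproduced here):

```python
import math

def generalized_benford(d: int, N: float = 1.0, s: float = 0.0,
                        q: float = 1.0) -> float:
    """Probability of first digit d (1..9) under the generalized Benford law
    p(d) = N * log10(1 + 1/(s + d**q)).
    With N=1, s=0, q=1 this is exactly the classical Benford law."""
    return N * math.log10(1.0 + 1.0 / (s + d ** q))

# Classical first-digit probabilities: digit 1 occurs ~30.1% of the time.
probs = [generalized_benford(d) for d in range(1, 10)]
print([round(p, 3) for p in probs])
```

Forensic detectors of the kind described above fit N, s and q to the first-digit histogram of quantized DCT coefficients; a poor fit to the expected law signals prior compression or double compression.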
A new efficient method for color image compression based on visual attention mechanism
NASA Astrophysics Data System (ADS)
Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang
2010-11-01
One of the key procedures in color image compression is to extract its regions of interest (ROIs) and apply different compression ratios. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper by using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. When the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Experimental results, together with quantitative and qualitative analysis, show excellent performance in comparison with other traditional color image compression approaches.
NASA Astrophysics Data System (ADS)
2000-09-01
VLT YEPUN Joins ANTU, KUEYEN and MELIPAL It was a historic moment last night (September 3 - 4, 2000) in the VLT Control Room at the Paranal Observatory , after nearly 15 years of hard work. Finally, four teams of astronomers and engineers were sitting at the terminals - and each team with access to an 8.2-m telescope! From now on, the powerful "Paranal Quartet" will be observing night after night, with a combined mirror surface of more than 210 m^2. And beginning next year, some of them will be linked to form part of the unique VLT Interferometer with unparalleled sensitivity and image sharpness. YEPUN "First Light" Early in the evening, the fourth 8.2-m Unit Telescope, YEPUN , was pointed to the sky for the first time and successfully achieved "First Light". Following a few technical exposures, a series of "first light" photos was made of several astronomical objects with the VLT Test Camera. This instrument was also used for the three previous "First Light" events for ANTU ( May 1998 ), KUEYEN ( March 1999 ) and MELIPAL ( January 2000 ). These images served to provisionally evaluate the performance of the new telescope, mainly in terms of mechanical and optical quality. The ESO staff were very pleased with the results and pronounced YEPUN fit for the subsequent commissioning phase. When the name YEPUN was first given to the fourth VLT Unit Telescope, it was supposed to mean "Sirius" in the Mapuche language. However, doubts have since arisen about this translation and a detailed investigation now indicates that the correct meaning is "Venus" (as the Evening Star). For a detailed explanation, please consult the essay On the Meaning of "YEPUN" , now available at the ESO website. The first images At 21:39 hrs local time (01:39 UT), YEPUN was turned to point in the direction of a dense Milky Way field, near the border between the constellations Sagitta (The Arrow) and Aquila (The Eagle).
A guide star was acquired and the active optics system quickly optimized the mirror system. At 21:44 hrs (01:44 UT), the Test Camera at the Cassegrain focus within the M1 mirror cell was opened for 30 seconds, with the planetary nebula Hen 2-428 in the field. The resulting "First Light" image was immediately read out and appeared on the computer screen at 21:45:53 hrs (01:45:53 UT). "Not bad!" - "Very nice!" were the first, "business-as-usual"-like comments in the room. The zenith distance during this observation was 44° and the image quality was measured as 0.9 arcsec, exactly the same as that registered by the Seeing Monitoring Telescope outside the telescope building. There was some wind. ESO PR Photo 22a/00 ESO PR Photo 22a/00 [Preview - JPEG: 374 x 400 pix - 128k] [Normal - JPEG: 978 x 1046 pix - 728k] Caption : ESO PR Photo 22a/00 shows a colour composite of some of the first astronomical exposures obtained by YEPUN . The object is the planetary nebula Hen 2-428 that is located at a distance of 6,000-8,000 light-years and seen in a dense sky field, only 2° from the main plane of the Milky Way. Like other planetary nebulae, it is produced by a dying star (the bluish object at the centre) that sheds its outer layers. The image is based on exposures through three optical filters: B(lue) (10 min exposure, seeing 0.9 arcsec; here rendered as blue), V(isual) (5 min; 0.9 arcsec; green) and R(ed) (3 min; 0.9 arcsec; red). The field measures 88 x 78 arcsec^2 (1 pixel = 0.09 arcsec). North is to the lower right and East is to the lower left. The 5-day old Moon was about 90° away in the sky that was accordingly bright. The zenith angle was 44°. The ESO staff then proceeded to take a series of three photos with longer exposures through three different optical filters. They have been combined to produce the image shown in ESO PR Photo 22a/00 .
More astronomical images were obtained in sequence, first of the dwarf galaxy NGC 6822 in the Local Group (see PR Photo 22f/00 below) and then of the spiral galaxy NGC 7793 . All 8.2-m telescopes now in operation at Paranal The ESO Director General, Catherine Cesarsky , who was present on Paranal during this event, congratulated the ESO staff on the great achievement, thereby bringing a major phase of the VLT project to a successful end. She was particularly impressed by the excellent optical quality that was achieved at this early moment of the commissioning tests. A measurement showed that already now, 80% of the light is concentrated within 0.22 arcsec. The manager of the VLT project, Massimo Tarenghi , was very happy to reach this crucial project milestone, after nearly fifteen years of hard work. He also remarked that with the M2 mirror already now "in the active optics loop", the telescope was correctly compensating for the somewhat mediocre atmospheric conditions on this night. The next major step will be the "first light" for the VLT Interferometer (VLTI) , when the light from two Unit Telescopes is combined. This event is expected in the middle of next year. Impressions from the YEPUN "First Light" event First Light for YEPUN - ESO PR VC 06/00 ESO PR Video Clip 06/00 "First Light for YEPUN" (5650 frames/3:46 min) [MPEG Video+Audio; 160x120 pix; 7.7Mb] [MPEG Video+Audio; 320x240 pix; 25.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 06/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV-camera in the evening of September 3 at about 23:00 hrs local time (03:00 UT), i.e., soon after the moment of "First Light" for YEPUN . The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the clip.
It begins at the moment a guide star is acquired to perform an automatic "active optics" correction of the mirrors; the associated explanation is given by Massimo Tarenghi (VLT Project Manager). The first astronomical observation is performed and the first image of the planetary nebula Hen 2-428 is discussed by the ESO Director General, Catherine Cesarsky . The next image, of the nearby dwarf galaxy NGC 6822 , arrives and is shown and commented on by the ESO Director General. Finally, Massimo Tarenghi talks about the next major step of the VLT Project. The combination of the light beams from two 8.2-m Unit Telescopes, planned for the summer of 2001, will mark the beginning of the VLT Interferometer. ESO Press Photo 22b/00 ESO Press Photo 22b/00 [Preview; JPEG: 400 x 300; 88k] [Full size; JPEG: 1600 x 1200; 408k] The enclosure for the fourth VLT 8.2-m Unit Telescope, YEPUN , photographed at sunset on September 3, 2000, immediately before "First Light" was successfully achieved. The upper part of the mostly subterranean Interferometric Laboratory for the VLTI is seen in front. (Digital Photo). ESO Press Photo 22c/00 ESO Press Photo 22c/00 [Preview; JPEG: 400 x 300; 112k] [Full size; JPEG: 1280 x 960; 184k] The initial tuning of the YEPUN optical system took place in the early evening of September 3, 2000, from the "observing hut" on the floor of the telescope enclosure. From left to right: Krister Wirenstrand who is responsible for the VLT Control Software, Jason Spyromilio - Head of the Commissioning Team, and Massimo Tarenghi , VLT Manager. (Digital Photo). ESO Press Photo 22d/00 ESO Press Photo 22d/00 [Preview; JPEG: 400 x 300; 112k] [Full size; JPEG: 1280 x 960; 184k] "Mission Accomplished" - The ESO Director General, Catherine Cesarsky , and the Paranal Director, Roberto Gilmozzi , face the VLT Manager, Massimo Tarenghi at the YEPUN Control Station, right after successful "First Light" for this telescope. (Digital Photo).
An aerial image of YEPUN in its enclosure is available as ESO PR Photo 43a/99. The mechanical structure of YEPUN was first pre-assembled at the Ansaldo factory in Milan (Italy) where it served for tests while the other telescopes were erected at Paranal. An early photo ( ESO PR Photo 37/95 ) is available that was obtained during the visit of the ESO Council to Milan in December 1995, cf. ESO PR 18/95. Paranal at sunset ESO Press Photo 22e/00 ESO Press Photo 22e/00 [Preview; JPEG: 400 x 200; 14kb] [Normal; JPEG: 800 x 400; 84kb] [High-Res; JPEG: 4000 x 2000; 4.0Mb] Wide-angle view of the Paranal Observatory at sunset. The last rays of the sun illuminate the telescope enclosures at the top of the mountain and some of the buildings at the Base Camp. The new "residencia" that will provide living space for the Paranal staff and visitors from next year is being constructed to the left. The "First Light" observations with YEPUN began soon after sunset. This photo was obtained in March 2000. Additional photos (September 6, 2000) ESO PR Photo 22f/00 ESO PR Photo 22f/00 [Preview - JPEG: 400 x 487 pix - 224k] [Normal - JPEG: 992 x 1208 pix - 1.3Mb] Caption : ESO PR Photo 22f/00 shows a colour composite of three exposures of a field in the dwarf galaxy NGC 6822 , a member of the Local Group of Galaxies at a distance of about 2 million light-years. They were obtained by YEPUN and the VLT Test Camera at about 23:00 hrs local time on September 3 (03:00 UT on September 4), 2000. The image is based on exposures through three optical filters: B(lue) (10 min exposure; here rendered as blue), V(isual) (5 min; green) and R(ed) (5 min; red); the seeing was 0.9 - 1.0 arcsec. Individual stars of many different colours (temperatures) are seen. The field measures about 1.5 x 1.5 arcmin^2. Another image of this galaxy was obtained earlier with ANTU and FORS1 , cf. PR Photo 10b/99.
ESO Press Photo 22g/00 ESO Press Photo 22g/00 [Preview; JPEG: 400 x 300; 136k] [Full size; JPEG: 1280 x 960; 224k] Most of the crew that put together YEPUN is here photographed after the installation of the M1 mirror cell at the bottom of the mechanical structure (on July 30, 2000). Back row (left to right): Erich Bugueno (Mechanical Supervisor), Erito Flores (Maintenance Technician); front row (left to right) Peter Gray (Mechanical Engineer), German Ehrenfeld (Mechanical Engineer), Mario Tapia (Mechanical Engineer), Christian Juica (kneeling - Mechanical Technician), Nelson Montano (Maintenance Engineer), Hansel Sepulveda (Mechanical Technician) and Roberto Tamai (Mechanical Engineer). (Digital Photo). ESO PR Photos may be reproduced, if credit is given to the European Southern Observatory. The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 05/00 "Portugal to Accede to ESO" (27 June 2000). Information is also available on the web about other ESO videos.
Digital watermarking algorithm research of color images based on quaternion Fourier transform
NASA Astrophysics Data System (ADS)
An, Mali; Wang, Weijiang; Zhao, Zhen
2013-10-01
A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark is then embedded into components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. Experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness than the traditional QIM algorithm against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering and image enhancement.
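The improved QIM variant with distortion compensation is not specified in the abstract; the baseline it builds on can be sketched as plain dither-modulation QIM, where the embedded bit selects one of two interleaved quantization lattices (the step size below is an illustrative assumption, not the authors' parameter):

```python
DELTA = 8.0  # quantization step size (illustrative assumption)

def qim_embed(coeff, bit, delta=DELTA):
    """Embed one bit by snapping the coefficient to the lattice
    selected by the bit: multiples of delta (bit 0) or multiples
    of delta shifted by delta/2 (bit 1)."""
    dither = bit * delta / 2.0
    return delta * round((coeff - dither) / delta) + dither

def qim_extract(coeff, delta=DELTA):
    """Recover the bit by finding which lattice is closer."""
    return min((abs(coeff - qim_embed(coeff, b, delta)), b)
               for b in (0, 1))[1]
```

Extraction only compares the received coefficient with the two lattices, so it needs no reference image, which is what makes the watermark blind; the embedding distortion is bounded by half the step size.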
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color-space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield a total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
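The scale-and-pool step described above can be sketched as a Minkowski sum over threshold-scaled errors; the exponent beta = 4 is a typical value from the masking literature, assumed here rather than taken from the paper:

```python
def perceptual_error(errors, thresholds, beta=4.0):
    """Pool DCT quantization errors into one perceptual error.

    errors, thresholds: equally sized sequences of per-coefficient
    quantization errors and adjusted visual thresholds.  Each error
    is expressed in threshold units, then combined with a Minkowski
    (beta-norm) sum, so errors near threshold contribute ~1 and the
    largest errors dominate as beta grows."""
    scaled = [abs(e) / t for e, t in zip(errors, thresholds)]
    return sum(s ** beta for s in scaled) ** (1.0 / beta)
```

A single just-visible error (error equal to its threshold) pools to 1.0, which is what makes the measure convenient for bit-rate/quality trade-off searches.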
New Detailed VLT Images of Saturn's Largest Moon
NASA Astrophysics Data System (ADS)
2004-04-01
Optimizing space missions. Titan, the largest moon of Saturn, was discovered by the Dutch astronomer Christiaan Huygens in 1655 and certainly deserves its name. With a diameter of no less than 5,150 km, it is larger than Mercury and twice as large as Pluto. It is unique in having a hazy atmosphere of nitrogen, methane and oily hydrocarbons. Although it was explored in some detail by the NASA Voyager missions, many aspects of its atmosphere and surface still remain unknown. Thus, the existence of seasonal or diurnal phenomena, the presence of clouds, and the surface composition and topography are still under debate. There have even been speculations that some kind of primitive life (now possibly extinct) may be found on Titan. Titan is the main target of the NASA/ESA Cassini/Huygens mission, launched in 1997 and scheduled to arrive at Saturn on July 1, 2004. The ESA Huygens probe is designed to enter the atmosphere of Titan and to descend by parachute to the surface. Ground-based observations are essential to optimize the return of this space mission, because they complement the information gained from space and add confidence to the interpretation of the data. Hence, the advent of the adaptive optics system NAOS-CONICA (NACO) [1] in combination with ESO's Very Large Telescope (VLT) at the Paranal Observatory in Chile now offers a unique opportunity to study the resolved disc of Titan with high sensitivity and increased spatial resolution. Adaptive optics (AO) systems work by means of a computer-controlled deformable mirror that counteracts the image distortion induced by atmospheric turbulence, based on real-time optical corrections computed from image data obtained by a special camera at very high speed, many hundreds of times each second (see e.g.
ESO Press Release 25/01, ESO PR Photos 04a-c/02, ESO PR Photos 19a-c/02, ESO PR Photos 21a-c/02, ESO Press Release 17/02, and ESO Press Release 26/03 for earlier NACO images, and ESO Press Release 11/03 for MACAO-VLTI results.) The southern smile. ESO PR Photo 08a/04, images of Titan on November 20, 25 and 26, 2002, through five filters (VLT YEPUN + NACO): ESO PR Photo 08a/04 shows Titan (apparent visual magnitude 8.05, apparent diameter 0.87 arcsec) as observed with the NAOS/CONICA instrument at VLT Yepun (Paranal Observatory, Chile) on November 20, 25 and 26, 2002, between 6:00 UT and 9:00 UT. The median seeing values were 1.1 arcsec and 1.5 arcsec for the 20th and 25th, respectively. Deconvolved ("sharpened") images of Titan are shown through five different narrow-band filters, which make it possible to probe in some detail structures at different altitudes and on the surface. Depending on the filter, the integration time varies from 10 to 100 seconds. While Titan shows its leading hemisphere (i.e. the one observed when Titan moves towards us) on Nov. 20, the trailing side (i.e. the one we see when Titan moves away from us in its course around Saturn), which displays fainter surface features, is observed on the last two dates. ESO PR Photo 08b/04, Titan observed through nine different filters on November 26, 2002: Images of Titan taken on November 26, 2002, through nine different filters to probe different altitudes, ranging from the stratosphere to the surface. On this night, a stable "seeing" (image quality before adaptive optics correction) of 0.9 arcsec allowed the astronomers to attain the diffraction limit of the telescope (0.032 arcsec resolution).
Due to these good observing conditions, Titan's trailing hemisphere was observed with contrasts of about 40%, allowing the detection of several bright features on this surface region, once thought to be quite dark and featureless. ESO PR Photo 08c/04, Titan surface projections: Titan images obtained with NACO on November 26, 2002. Left: Titan's surface projection on the trailing hemisphere as observed at 1.3 μm, revealing a complex brightness structure thanks to the high image contrast of about 40%. Right: a new, possibly meteorological, phenomenon observed at 2.12 μm in Titan's atmosphere, in the form of a bright feature revolving around the South Pole. A team of French astronomers [2] has recently used the NACO state-of-the-art adaptive optics system on the fourth 8.2-m VLT unit telescope, Yepun, to map the surface of Titan by means of near-infrared images and to search for changes in the dense atmosphere. These extraordinary images have a nominal resolution of 1/30th arcsec and show details of the order of 200 km on the surface of Titan. To provide the best possible views, the raw data from the instrument were subjected to deconvolution (image sharpening). Images of Titan were obtained through nine narrow-band filters, sampling near-infrared wavelengths with large variations in methane opacity. This permits sounding of different altitudes ranging from the stratosphere to the surface. At 1.24 and 2.12 μm Titan harbours a "southern smile", that is, a north-south asymmetry, while the opposite situation is observed with filters probing higher altitudes, such as 1.64, 1.75 and 2.17 μm. A high-contrast bright feature is observed at the South Pole and is apparently caused by a phenomenon in the atmosphere, at an altitude below 140 km or so.
This feature was found to change its location on the images from one side of the south polar axis to the other during the week of observations. Outlook An additional series of NACO observations of Titan is foreseen later this month (April 2004). These will be a great asset in helping optimize the return of the Cassini/Huygens mission. Several of the instruments aboard the spacecraft depend on such ground-based data to better infer the properties of Titan's surface and lower atmosphere. Although the astronomers have yet to model and interpret the physical and geophysical phenomena now observed and to produce a full cartography of the surface, this first analysis provides a clear demonstration of the marvellous capabilities of the NACO imaging system. More examples of the exciting science possible with this facility will be found in a series of five papers published today in the European research journal Astronomy & Astrophysics (Vol. 47, L1 to L24).
Privacy-preserving photo sharing based on a public key infrastructure
NASA Astrophysics Data System (ADS)
Yuan, Lin; McNally, David; Küpçü, Alptekin; Ebrahimi, Touradj
2015-09-01
A significant number of pictures are posted to social media sites or exchanged through instant messaging and cloud-based sharing services. Most social media services offer a range of access control mechanisms to protect users' privacy. As it is not in the best interest of many such services if their users restrict access to their shared pictures, most services keep users' photos unprotected, which makes them available to all insiders. This paper presents an architecture for privacy-preserving photo sharing based on an image scrambling scheme and a public key infrastructure. A secure JPEG scrambling is applied to protect regional visual information in photos. Protected images remain compatible with JPEG coding and can therefore be viewed by anyone on any device; however, only those who are granted secret keys will be able to descramble the photos and view their original versions. The proposed architecture applies attribute-based encryption along with conventional public key cryptography to achieve secure transmission of secret keys and fine-grained control over who may view shared photos. In addition, we demonstrate the practical feasibility of the proposed photo sharing architecture with a prototype mobile application, ProShare, built for the iOS platform.
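The secure JPEG scrambling algorithm itself is not detailed in the abstract; one common approach, used here purely as an assumed illustration, is key-seeded pseudo-random sign flipping of the DCT coefficients inside the protected region, which keeps the file JPEG-decodable while destroying the regional visual content:

```python
import random

def scramble_region(coeffs, key):
    """Pseudo-randomly flip signs of the DCT coefficients of a
    protected region, seeded by the secret key.  Because the same
    key reproduces the same flip pattern, applying this function
    again with the same key restores the original coefficients,
    which is how key holders descramble."""
    rng = random.Random(key)
    return [-c if rng.random() < 0.5 else c for c in coeffs]
```

The scrambled coefficients are still valid JPEG data, so viewers without the key simply see a distorted region instead of a broken file.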
Storage, retrieval, and edit of digital video using Motion JPEG
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Lee, D. H.
1994-04-01
In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, the system can be used for real-time capture and manipulation of continuously fed video. To capture the compressed video on a storage medium, an IBM bus-master SCSI adapter with cache is utilized. The efficiency of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. Experimental results show that the overall system can sustain compressed data rates of about 1.5 MBytes/second, with sporadic peaks of about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats, which in turn permit the creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward and slow-motion playback. The proposed method can be extended to the design of a video compression subsystem for a variety of personal computing systems.
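The "special file formats" mentioned above are not described further; a hypothetical sketch of the underlying idea is a header table of per-frame byte offsets, which lets an editor or player seek directly to any compressed frame for forward, backward, or slow-motion playback without parsing the whole stream:

```python
import struct

def build_index(frame_sizes):
    """Return (offset, size) per frame for JPEG frames stored
    back to back in the file body."""
    index, offset = [], 0
    for size in frame_sizes:
        index.append((offset, size))
        offset += size
    return index

def pack_index(index):
    """Serialize the index as fixed-width little-endian records
    so a player can read it in one pass from the file header."""
    return b"".join(struct.pack("<II", off, size) for off, size in index)
```

Because every record has a fixed width, frame N's location is found by a single seek into the header, which is what makes backward and slow-motion playback cheap.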
Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm
NASA Astrophysics Data System (ADS)
Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting
2010-12-01
We propose a novel steganographic method for JPEG images with high performance. First, we propose an improved adaptive LSB steganography that can achieve high capacity while preserving first-order statistics. Second, to minimize the visual degradation of the stego image, we shuffle the bit order of the message based on a chaotic map whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving histogram characteristics and providing high capacity.
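A minimal sketch of the chaos-based shuffling step, assuming a logistic map as the chaotic source (the actual map and the GA-selected parameters are not given in the abstract): ranking the orbit values yields a key-dependent permutation of the message's bit positions, and the receiver inverts it with the same parameters.

```python
def logistic_permutation(n, x0=0.3456, r=3.99):
    """Derive a permutation of n bit positions from a logistic-map
    orbit x -> r*x*(1-x); x0 and r act as the secret chaos
    parameters (illustrative values, not the paper's)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    # ranking the chaotic values gives the shuffle order
    return sorted(range(n), key=lambda i: xs[i])

def shuffle_bits(bits, perm):
    return [bits[i] for i in perm]

def unshuffle_bits(bits, perm):
    out = [0] * len(bits)
    for j, i in enumerate(perm):
        out[i] = bits[j]
    return out
```

Only a party knowing (x0, r) can regenerate the permutation, so the shuffled message looks like noise to an attacker even if the LSB embedding is detected.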
Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1992-01-01
Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.
ESO and NSF Sign Agreement on ALMA
NASA Astrophysics Data System (ADS)
2003-02-01
Green Light for World's Most Powerful Radio Observatory On February 25, 2003, the European Southern Observatory (ESO) and the US National Science Foundation (NSF) are signing a historic agreement to construct and operate the world's largest and most powerful radio telescope, operating at millimeter and sub-millimeter wavelengths. The Director General of ESO, Dr. Catherine Cesarsky, and the Director of the NSF, Dr. Rita Colwell, act for their respective organizations. Known as the Atacama Large Millimeter Array (ALMA), the future facility will encompass sixty-four interconnected 12-meter antennae at a unique, high-altitude site at Chajnantor in the Atacama region of northern Chile. ALMA is a joint project between Europe and North America. In Europe, ESO is leading on behalf of its ten member countries and Spain. In North America, the NSF also acts for the National Research Council of Canada and executes the project through the National Radio Astronomy Observatory (NRAO) operated by Associated Universities, Inc. (AUI). The conclusion of the ESO-NSF Agreement now gives the final green light for the ALMA project. The total cost of approximately 650 million Euro (or US Dollars) is shared equally between the two partners. Dr. Cesarsky is excited: "This agreement signifies the start of a great project of contemporary astronomy and astrophysics. Representing Europe, and in collaboration with many laboratories and institutes on this continent, we together look forward towards wonderful research projects. With ALMA we may learn what the earliest galaxies in the Universe really looked like, to mention but one of the many eagerly awaited opportunities with this marvellous facility". "With this agreement, we usher in a new age of research in astronomy" says Dr. Colwell.
"By working together in this truly global partnership, the international astronomy community will be able to ensure the research capabilities needed to meet the long-term demands of our scientific enterprise, and that we will be able to study and understand our universe in ways that have previously been beyond our vision". The recent Presidential decree from Chile for AUI and the agreement signed in late 2002 between ESO and the Government of the Republic of Chile (cf. ESO PR 18/02) recognize the interest that the ALMA Project has for Chile, as it will deepen and strengthen the cooperation in scientific and technological matters between the parties. A joint ALMA Board has been established to oversee the realisation of the ALMA project via the management structure. This Board meets for the first time on February 24-25, 2003, at NSF in Washington and will witness this historic event. ALMA: Imaging the Light from Cosmic Dawn. Captions: PR Photo 06a/03 shows an artist's view of the Atacama Large Millimeter Array (ALMA), with 64 12-m antennae. PR Photo 06b/03 is another such view, with the array arranged in a compact configuration at the high-altitude Chajnantor site. The ALMA VertexRSI prototype antenna is shown in PR Photo 06c/03 at the Antenna Test Facility (ATF) at the NRAO Very Large Array (VLA) site near Socorro (New Mexico, USA).
The future ALMA site at Llano de Chajnantor, at 5,000 metres altitude some 40 km east of the village of San Pedro de Atacama (Chile), is seen in PR Photo 06d/03; this view was obtained at 11 hrs in the morning on a crisp and clear autumn day (more views of this site are available at the Chajnantor Photo Gallery). The Atacama Large Millimeter Array (ALMA) will be one of astronomy's most powerful telescopes, providing unprecedented imaging capabilities and sensitivity in the corresponding wavelength range, many orders of magnitude greater than anything of its kind today. ALMA will be an array of 64 antennae that will work together as one telescope to study millimeter and sub-millimeter wavelength radiation from space. This radiation crosses the critical boundary between infrared and microwave radiation and holds the key to understanding such processes as planet and star formation, the formation of early galaxies and galaxy clusters, and the formation of organic and other molecules in space. "ALMA will be one of astronomy's premier tools for studying the universe" says Nobel Laureate Riccardo Giacconi, President of AUI (and former ESO Director General, 1993-1999). "The entire astronomical community is anxious to have the unprecedented power and resolution that ALMA will provide". The President of the ESO Council, Professor Piet van der Kruit, agrees: "ALMA heralds a break-through in sub-millimeter and millimeter astronomy, allowing some of the most penetrating studies of the Universe ever made. It is safe to predict that there will be exciting scientific surprises when ALMA enters into operation". What is millimeter and sub-millimeter wavelength astronomy? Astronomers learn about objects in space by studying the energy emitted by those objects. Our Sun and the other stars throughout the Universe emit visible light. But these objects also emit other kinds of light waves, such as X-rays, infrared radiation, and radio waves.
Some objects emit very little or no visible light, yet are strong sources at other wavelengths in the electromagnetic spectrum. Much of the energy in the Universe is present in the sub-millimeter and millimeter portion of the spectrum. This energy comes from the cold dust mixed with gas in interstellar space. It also comes from distant galaxies that formed many billions of years ago at the edges of the known universe. With ALMA, astronomers will have a uniquely powerful facility with access to this remarkable portion of the spectrum and hence, new and wonderful opportunities to learn more about those objects. Current observatories simply do not have anywhere near the necessary sensitivity and resolution to unlock the secrets that abundant sub-millimeter and millimeter wavelength radiation can reveal. It will take the unparalleled power of ALMA to fully study the cosmic emission at this wavelength and better understand the nature of the universe. Scientists from all over the world will use ALMA. They will compete for observing time by submitting proposals, which will be judged by a group of their peers on the basis of scientific merit. ALMA's unique capabilities ALMA's ability to detect remarkably faint sub-millimeter and millimeter wavelength emission and to create high-resolution images of the source of that emission gives it capabilities not found in any other astronomical instruments. ALMA will therefore be able to study phenomena previously out of reach to astronomers and astrophysicists, such as: * Very young galaxies forming stars at the earliest times in cosmic history; * New planets forming around young stars in our galaxy, the Milky Way; * The birth of new stars in spinning clouds of gas and dust; and * Interstellar clouds of gas and dust that are the nurseries of complex molecules and even organic chemicals that form the building blocks of life. How will ALMA work? 
All of ALMA's 64 antennae will work in concert, taking quick "snapshots" or long-term exposures of astronomical objects. Cosmic radiation from these objects will be reflected from the surface of each antenna and focussed onto highly sensitive receivers cooled to just a few degrees above absolute zero in order to suppress undesired "noise" from the surroundings. There the signals will be amplified many times, digitized, and then sent along underground fiber-optic cables to a large signal processor in the central control building. This specialized computer, called a correlator - running at 16,000 million-million operations per second - will combine all of the data from the 64 antennae to make images of remarkable quality. The extraordinary ALMA site Since atmospheric water vapor absorbs millimeter and (especially) sub-millimeter waves, ALMA must be constructed at a very high altitude in a very dry region of the earth. Extensive tests showed that the sky above the Atacama Desert of Chile has the excellent clarity and stability essential for ALMA. That is why ALMA will be built there, on Llano de Chajnantor at an altitude of 5,000 metres in the Chilean Andes. A series of views of this site, also in high-resolution suitable for reproduction, is available at the Chajnantor Photo Gallery. Timeline for ALMA June 1998: Phase 1 (Research and Development) June 1999: European/American Memorandum of Understanding February 2003: Signature of the bilateral Agreement 2004: Tests of the Prototype System 2007: Initial scientific operation of a partially completed array 2011: End of construction of the array
JPEG2000 vs. full frame wavelet packet compression for smart card medical records.
Leehan, Joaquín Azpirox; Lerallut, Jean-Francois
2006-01-01
This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.
NASA Astrophysics Data System (ADS)
Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.
2017-11-01
Calculating the coordinate parameters recorded as key/value pairs in the FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system, so the general procedure for calculating these parameters is of considerable interest. By combining the CCD-related parameters of the astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), a star pattern recognition algorithm, and WCS (World Coordinate System) theory, the parameters can be calculated effectively. The CCD parameters determine the scope of the star catalogue, so they can be used to build a reference catalogue for the celestial region corresponding to an astronomical image. Star pattern recognition then matches the astronomical image against the reference catalogue, yielding a table that pairs the CCD plane coordinates of a number of stars with their celestial coordinates. Depending on the projection of the sphere onto the plane, WCS provides different transfer functions between these two coordinate systems, and the astronomical position of any image pixel can then be determined from the table prepared before. FITS is the mainstream data format for transmitting and analysing scientific astronomical data, but FITS images can only be viewed, edited, and analysed in professional astronomy software, which limits their use in popular science education in astronomy. Realizing a general image visualization method is therefore significant. First, the FITS image is converted to PNG or JPEG. The coordinate parameters in the FITS header are converted to metadata in the form of AVM (Astronomy Visualization Metadata), and the metadata is then added to the PNG or JPEG header. This method meets amateur astronomers' general need to view and analyse astronomical images on non-astronomical software platforms.
The overall design flow is implemented in Java and tested with SExtractor, WorldWide Telescope, picture viewers, and other software.
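As an illustration of the WCS transfer functions mentioned above, here is a minimal sketch of the common TAN (gnomonic) case, mapping a pixel to sky coordinates from header-style parameters; distortion terms and the sign conventions normally absorbed by the CD matrix are ignored:

```python
import math

def pixel_to_sky(px, py, crpix, crval, cd):
    """Map a pixel to (RA, Dec) in degrees with the FITS TAN
    (gnomonic) projection.  crpix: reference pixel, crval:
    reference sky position in degrees, cd: 2x2 CD matrix in
    degrees per pixel."""
    # linear part: pixel offsets -> intermediate world coords (radians)
    dx, dy = px - crpix[0], py - crpix[1]
    xi = math.radians(cd[0][0] * dx + cd[0][1] * dy)
    eta = math.radians(cd[1][0] * dx + cd[1][1] * dy)
    # spherical deprojection around the tangent point
    ra0, dec0 = math.radians(crval[0]), math.radians(crval[1])
    denom = math.cos(dec0) - eta * math.sin(dec0)
    ra = ra0 + math.atan2(xi, denom)
    dec = math.atan2(math.sin(dec0) + eta * math.cos(dec0),
                     math.hypot(xi, denom))
    return math.degrees(ra), math.degrees(dec)
```

At the reference pixel the function returns CRVAL exactly, and small offsets scale linearly with the CD matrix, which is the behaviour the star-matching table is calibrated against.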
An effective and efficient compression algorithm for ECG signals with irregular periods.
Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son
2006-06-01
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, the algorithm is insensitive to irregular ECG periods. Thus both irregular ECG signals and QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, the algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
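The 1-D-to-2-D conversion described above can be sketched as follows, assuming the R-peak indices come from a separate QRS detector; beats are cut at the peaks, sorted by period, and linearly resampled to a common width so they stack into an image:

```python
def resample(beat, length):
    """Linearly interpolate a beat to a fixed number of samples."""
    if length == 1 or len(beat) == 1:
        return [beat[0]] * length
    out = []
    step = (len(beat) - 1) / (length - 1)
    for i in range(length):
        t = i * step
        j = min(int(t), len(beat) - 2)
        frac = t - j
        out.append(beat[j] * (1 - frac) + beat[j + 1] * frac)
    return out

def ecg_to_image(signal, r_peaks, width=None):
    """Cut the signal at detected R peaks, sort the beats by period
    (length), and equalize their lengths so they stack into a 2-D
    array with one beat per row."""
    beats = [signal[a:b] for a, b in zip(r_peaks, r_peaks[1:])]
    beats.sort(key=len)          # period sorting -> smoother image
    width = width or max(len(b) for b in beats)
    return [resample(b, width) for b in beats]
```

Sorting by period places beats of similar length in adjacent rows, which is what makes the resulting image smooth in the vertical (inter-beat) direction and hence easy for JPEG2000 to compress.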
Digital storage and analysis of color Doppler echocardiograms
NASA Technical Reports Server (NTRS)
Chandra, S.; Thomas, J. D.
1997-01-01
Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as the drawbacks of trying to retrieve velocities from video tape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail, with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of information (mostly redundant); this is critical for digital storage because of the enormous amount of data generated (a 10 minute study could require 18 Gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing the proximal convergence zones and mitral inflow, and some areas of future development, are also discussed in the article.
Multiple-image hiding using super resolution reconstruction in high-frequency domains
NASA Astrophysics Data System (ADS)
Li, Xiao-Wei; Zhao, Wu-Xiang; Wang, Jun; Wang, Qiong-Hua
2017-12-01
In this paper, a robust multiple-image hiding method using computer-generated integral imaging and a modified super-resolution reconstruction algorithm is proposed. In our work, the host image is first transformed into frequency domains by cellular automata (CA); to preserve the quality of the stego-image, the secret images are embedded into the CA high-frequency domains. The proposed method has the following advantages: (1) robustness to geometric attacks because of the memory-distributed property of elemental images, and (2) increased quality of the reconstructed secret images, as the scheme utilizes the modified super-resolution reconstruction algorithm. Simulation results show that the proposed multiple-image hiding method outperforms other similar hiding methods and is robust to attacks such as Gaussian noise and JPEG compression.
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by the millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate HVS with image sparsity coding and supervised machine learning, which are two main features of HVS. A typical HVS captures the scenes by sparsity coding, and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of HVS is studied and analyzed, and the sparse representation of image is accomplished with the model; and then, the mapping correlation between sparse codes and subjective quality scores is trained with the regression technique of least squaresupport vector machine (LS-SVM), which gains the regressor that can predict the image quality; the visual metric of image is predicted with the trained regressor at last. 
We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains the following distortion types: 227 JPEG2000 images, 233 JPEG images, 174 White Noise images, 174 Gaussian Blur images, and 174 Fast Fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess many kinds of distortions, but also exhibits superior accuracy and monotonicity.
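The regression step described above, mapping sparse-code features to quality scores with LS-SVM, reduces to solving one linear system in the dual variables. The following is a minimal sketch, not the authors' code: the RBF kernel width, the regularization parameter, and the synthetic features standing in for real sparse codes and DMOS values are all assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM regressor by solving the (n+1)x(n+1) system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    """Predicted quality score for each row of X_new."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Synthetic stand-in: 5-dim "sparse-code" features vs. quality scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=40)   # made-up "DMOS" values
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
```

Unlike the classical SVM, every training sample receives a nonzero dual weight, which is what makes training a single linear solve.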
Kamauu, Aaron W C; DuVall, Scott L; Wiggins, Richard H; Avrin, David E
2008-09-01
In the creation of interesting radiological cases in a digital teaching file, it is necessary to adjust the window and level settings of an image to effectively display the educational focus. The web-based applet described in this paper presents an effective solution for real-time window and level adjustments without leaving the picture archiving and communications system workstation. Optimized images are created, as user-defined parameters are passed between the applet and a servlet on the Health Insurance Portability and Accountability Act-compliant teaching file server.
Digitizing the KSO white light images
NASA Astrophysics Data System (ADS)
Pötzi, W.
From 1989 up to 2007 the Sun was observed at the Kanzelhöhe Observatory in white light on photographic film material. The images are on transparent sheet films and are currently not available to the scientific community. With a photo scanner for transparent film material, the films are now being scanned and then prepared for scientific use. The post-processing programs are already finished and produce FITS and JPEG files as output. The scanning should be finished by the end of 2011, and the data will then be available via our homepage.
Implementation of image transmission server system using embedded Linux
NASA Astrophysics Data System (ADS)
Park, Jong-Hyun; Jung, Yeon Sung; Nam, Boo Hee
2005-12-01
In this paper, we implemented an image transmission server on an embedded system, which is dedicated to a specific task and easy to install and move. Since the embedded system has lower capability than a PC, we had to reduce the computational load of baseline JPEG image compression and transmission. We used the Red Hat Linux 9.0 OS on the host PC and a target board running embedded Linux. The image sequences are obtained from a camera attached to an FPGA (Field Programmable Gate Array) board with an Altera chip. For efficiency, and to avoid constraints imposed by the vendor's own software, we implemented the device driver as a kernel module.
Black Hole in Search of a Home
NASA Astrophysics Data System (ADS)
2005-09-01
Astronomers Discover Bright Quasar Without Massive Host Galaxy An international team of astronomers [1] used two of the most powerful astronomical facilities available, the ESO Very Large Telescope (VLT) at Cerro Paranal and the Hubble Space Telescope (HST), to conduct a detailed study of 20 low-redshift quasars. For 19 of them, they found, as expected, that these supermassive black holes are surrounded by a host galaxy. But when they studied the bright quasar HE0450-2958, located some 5 billion light-years away, they could not find evidence for an encircling galaxy. This, the astronomers suggest, may indicate a rare case of collision between a seemingly normal spiral galaxy and a much more exotic object harbouring a very massive black hole. With masses up to hundreds of millions of times that of the Sun, "supermassive" black holes are among the most tantalizing objects known. Hiding in the centre of most large galaxies, including our own Milky Way (see ESO PR 26/03), they sometimes manifest themselves by devouring matter from their surroundings. Shining up to the largest distances, they are then called "quasars" or "QSOs" (for "quasi-stellar objects"), as they had initially been confused with stars. Decades of observations of quasars have suggested that they are always associated with massive host galaxies. However, observing the host galaxy of a quasar is a challenging task, because the quasar radiates so energetically that its host galaxy is hard to detect in the glare. ESO PR Photo 28a/05, Two Quasars with their Host Galaxy, shows two examples of quasars from the sample studied by the astronomers, where the host galaxy is obvious. In each case, the quasar is the bright central spot.
The host of HE1239-2426 (left), a z=0.082 quasar, displays large spiral arms, while the host of HE1503+0228 (right), at a redshift of 0.135, is fuzzier and shows only hints of spiral arms. Although these particular objects are rather close to us and therefore constitute easy targets, their hosts would still be perfectly visible at much higher redshift, including at distances as large as that of HE0450-2958 (z=0.285). The observations were done with the ACS camera on the HST. ESO PR Photo 28b/05, The Quasar without a Home: HE0450-2958: (Left) HST image of the z=0.285 quasar HE0450-2958. No obvious host galaxy centred on the quasar is seen; only a strongly disturbed and star-forming companion galaxy is seen near the top of the image. (Right) The same image after applying an efficient image-sharpening method known as MCS deconvolution. In contrast to the usual cases, such as the ones shown in ESO PR Photo 28a/05, the quasar is not situated at the centre of an extended host galaxy, but on the edge of a compact structure, whose spectra (see ESO PR Photo 28c/05) show it to be composed of gas ionised by the quasar radiation. This gas may have been captured through a collision with the star-forming galaxy. The star indicated in the figure is a nearby galactic star seen by chance in the field of view. To overcome this problem, the astronomers devised a new and highly efficient strategy. Using ESO's VLT for spectroscopy and the HST for imaging, they observed their quasars at the same time as a reference star. Simultaneous observation of a star allowed them to measure precisely the shape of the quasar point source on spectra and images, and thus to separate the quasar light from the other contribution, i.e. from the underlying galaxy itself.
This very powerful image- and spectra-sharpening method ("MCS deconvolution") was applied to these data in order to detect the finest details of the host galaxy (see e.g. ESO PR 19/03). Using this efficient technique, the astronomers could detect a host galaxy for all but one of the quasars they studied. No stellar environment was found for HE0450-2958, suggesting that if any host galaxy exists, it must either have a luminosity at least six times fainter than expected a priori from the quasar's observed luminosity, or a radius smaller than about 300 light-years. Typical radii for quasar host galaxies range between 6,000 and 50,000 light-years, i.e. they are at least 20 to 170 times larger. "With the data we managed to secure with the VLT and the HST, we would have been able to detect a normal host galaxy", says Pierre Magain (Université de Liège, Belgium), lead author of the paper reporting the study. "We must therefore conclude that, contrary to our expectations, this bright quasar is not surrounded by a massive galaxy." Instead, the astronomers detected just beside the quasar a bright cloud about 2,500 light-years in size, which they baptized "the blob". The VLT observations show this cloud to be composed only of gas ionised by the intense radiation coming from the quasar. It is probably the gas of this cloud that is feeding the supermassive black hole, allowing it to become a quasar. ESO PR Photo 28c/05, Spectrum of Quasar HE0450-2958, the Blob and the Companion Galaxy (FORS/VLT), presents the spectra of the three objects indicated in ESO PR Photo 28b/05 as obtained with FORS1 on ESO's Very Large Telescope. The spectrum of the companion galaxy, shown in the top panel, reveals strong star formation.
Thanks to the image-sharpening process, it has been possible to separate very well the spectrum of the quasar (centre) from that of the blob (bottom). The spectrum of the blob shows exclusively strong narrow emission lines with properties indicative of ionisation by the quasar light. There is no trace of stellar light, down to very faint levels, in the surroundings of the quasar. A strongly perturbed galaxy, showing all signs of a recent collision, is also seen on the HST images 2 arcseconds away (corresponding to about 50,000 light-years), with the VLT spectra showing it to be presently in a state where it forms stars at a frantic rate. "The absence of a massive host galaxy, combined with the existence of the blob and the star-forming galaxy, leads us to believe that we have uncovered a really exotic quasar," says team member Frédéric Courbin (Ecole Polytechnique Fédérale de Lausanne, Switzerland). "There is little doubt that a burst in the formation of stars in the companion galaxy and the quasar itself have been ignited by a collision that must have taken place about 100 million years ago. What happened to the putative quasar host remains unknown." HE0450-2958 constitutes a challenging case of interpretation. The astronomers propose several possible explanations that will need to be further investigated and confronted with observations. Has the host galaxy been completely disrupted as a result of the collision? It is hard to imagine how that could happen. Has an isolated black hole captured gas while crossing the disc of a spiral galaxy? This would require very special conditions and would probably not have caused such a tremendous perturbation as is observed in the neighbouring galaxy. Another intriguing hypothesis is that the galaxy harbouring the black hole was almost exclusively made of dark matter.
"Whatever the solution of this riddle, the strong observable fact is that the quasar host galaxy, if any, is much too faint", says team member Knud Jahnke (Astrophysikalisches Institut Potsdam, Germany). The report on HE0450-2958 is published in the September 15, 2005 issue of the journal Nature ("Discovery of a bright quasar without a massive host galaxy" by Pierre Magain et al.).
The effects of video compression on acceptability of images for monitoring life sciences experiments
NASA Astrophysics Data System (ADS)
Haines, Richard F.; Chuang, Sherry L.
1992-07-01
Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec.
Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.
The effects of video compression on acceptability of images for monitoring life sciences experiments
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Chuang, Sherry L.
1992-01-01
Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec.
Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.
Robust image obfuscation for privacy protection in Web 2.0 applications
NASA Astrophysics Data System (ADS)
Poller, Andreas; Steinebach, Martin; Liu, Huajian
2012-03-01
We present two approaches to robust image obfuscation based on permutation of image regions and channel intensity modulation. The proposed concept of robust image obfuscation is a step towards end-to-end security in Web 2.0 applications. It helps to protect the privacy of users against threats caused by internet bots and web applications that extract biometric and other features from images for data-linkage purposes. The approaches described in this paper take into account that images uploaded to Web 2.0 applications pass through several transformations, such as scaling and JPEG compression, before the receiver downloads them. In contrast to existing approaches, our focus is on usability; therefore, the primary goal is not a maximum of security but an acceptable trade-off between security and resulting image quality.
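The region-permutation idea can be illustrated with a toy sketch: shuffle fixed-size tiles with a key-seeded permutation so that only holders of the key can restore the image. The tile size and the use of NumPy's seeded generator as the "key" are illustrative assumptions; the paper's actual scheme is additionally designed to survive scaling and JPEG compression, which this sketch does not address.

```python
import numpy as np

def permute_blocks(img, key, block=8):
    """Obfuscate a grayscale image by shuffling non-overlapping
    block x block tiles with a keyed pseudo-random permutation."""
    h, w = img.shape
    assert h % block == 0 and w % block == 0
    tiles = (img.reshape(h // block, block, w // block, block)
                .swapaxes(1, 2)
                .reshape(-1, block, block))
    perm = np.random.default_rng(key).permutation(len(tiles))
    out = tiles[perm].reshape(h // block, w // block, block, block)
    return out.swapaxes(1, 2).reshape(h, w)

def unpermute_blocks(img, key, block=8):
    """Invert permute_blocks given the same key."""
    h, w = img.shape
    tiles = (img.reshape(h // block, block, w // block, block)
                .swapaxes(1, 2)
                .reshape(-1, block, block))
    perm = np.random.default_rng(key).permutation(len(tiles))
    inv = np.argsort(perm)                 # inverse permutation
    out = tiles[inv].reshape(h // block, w // block, block, block)
    return out.swapaxes(1, 2).reshape(h, w)

img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
obf = permute_blocks(img, key=1234)
rec = unpermute_blocks(obf, key=1234)
```

The same keyed generator reproduces the permutation on the receiver side, so its `argsort` inverts the shuffle exactly.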
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital-photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low-bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT [1], along with the correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy in high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image.
The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
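The claim about coefficient sparsity in high-frequency subbands can be illustrated with a one-level Haar transform (a stand-in for the paper's wavelet filters) and a SPIHT-style significance test; the toy image and threshold are assumptions:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns LL and the (LH, HL, HH) details."""
    a = img.astype(float)
    L = (a[:, 0::2] + a[:, 1::2]) / 2.0
    H = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LL = (L[0::2] + L[1::2]) / 2.0
    LH = (L[0::2] - L[1::2]) / 2.0
    HL = (H[0::2] + H[1::2]) / 2.0
    HH = (H[0::2] - H[1::2]) / 2.0
    return LL, (LH, HL, HH)

def significance_map(band, threshold):
    """SPIHT-style significance test: which coefficients reach the threshold."""
    return np.abs(band) >= threshold

# Piecewise-constant toy image: detail coefficients appear only at edges,
# so the high-frequency subbands are almost entirely insignificant.
img = np.zeros((64, 64))
img[17:47, 17:47] = 100.0
LL, details = haar2d(img)
sparsity = [float(np.mean(significance_map(d, 1.0))) for d in details]
```

SPIHT exploits exactly this: whole quadtrees of insignificant detail coefficients are encoded with a single symbol per bit-plane.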
JPEG XS, a new standard for visually lossless low-latency lightweight image compression
NASA Astrophysics Data System (ADS)
Descampe, Antonin; Keinert, Joachim; Richter, Thomas; Fößel, Siegfried; Rouvroy, Gaël.
2017-09-01
JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to Xilinx Artix 7 or 25% of an FPGA similar to Altera Cyclone 5. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.
Parallel efficient rate control methods for JPEG 2000
NASA Astrophysics Data System (ADS)
Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko
2017-09-01
Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split in code blocks, and subsequently, optimally truncate the set of generated bit streams according to the maximum target bit rate constraint. The literature proposes various strategies on how to estimate ahead of time where a block will get truncated in order to stop the execution prematurely and save time. However, none of them have been defined bearing in mind a parallel implementation. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed in GPUs. In order to do that, the design of our GPU-based codec is extended, allowing stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% of speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out, and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% of speedup in those situations where it was really employed.
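The PCRD-Opt idea of truncating each code block's bit stream at the rate-distortion-optimal point can be sketched as a greedy slope-driven allocation. This is a simplification under the assumption that each block's (rate, distortion) truncation points already lie on their convex hull; the data are illustrative, not from any real codec.

```python
import heapq

def pcrd_truncate(blocks, budget):
    """Simplified PCRD-style allocation. Each block is a list of
    (cumulative_rate, distortion) truncation points with non-increasing
    distortion. Repeatedly advance the block offering the steepest
    distortion decrease per extra unit of rate until the budget is hit."""
    chosen = [0] * len(blocks)            # current truncation point per block
    total = sum(b[0][0] for b in blocks)  # rate at the initial points
    heap = []
    for i, b in enumerate(blocks):
        if len(b) > 1:
            dr = b[1][0] - b[0][0]
            dd = b[0][1] - b[1][1]
            heapq.heappush(heap, (-dd / dr, i))   # max-heap on slope
    while heap:
        _, i = heapq.heappop(heap)
        j = chosen[i]
        dr = blocks[i][j + 1][0] - blocks[i][j][0]
        if total + dr > budget:
            continue                      # this step no longer fits
        total += dr
        chosen[i] = j + 1
        if chosen[i] + 1 < len(blocks[i]):
            b, j = blocks[i], chosen[i]
            dr = b[j + 1][0] - b[j][0]
            dd = b[j][1] - b[j + 1][1]
            heapq.heappush(heap, (-dd / dr, i))
    return chosen, total

# Two code blocks with made-up rate/distortion truncation points.
blocks = [[(0, 100.0), (2, 40.0), (5, 10.0)],
          [(0, 80.0), (3, 30.0)]]
chosen, rate = pcrd_truncate(blocks, budget=5)
```

The early-stopping strategies discussed in the paper amount to predicting where this loop will leave each block, so that coding passes beyond that point need never be produced.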
NASA Astrophysics Data System (ADS)
2004-12-01
On December 9-10, 2004, the ESO Paranal Observatory was honoured with an overnight visit by His Excellency the President of the Republic of Chile, Ricardo Lagos, and his wife, Mrs. Luisa Duran de Lagos. The distinguished guests were welcomed by the ESO Director General, Dr. Catherine Cesarsky, ESO's representative in Chile, Mr. Daniel Hofstadt, and Prof. Maria Teresa Ruiz, Head of the Astronomy Department at the Universidad de Chile, as well as numerous ESO staff members working at the VLT site. The visit was characterised as private, and the President spent considerable time in pleasant company with the Paranal staff, talking with and getting explanations from everybody. The distinguished visitors were shown the various high-tech installations at the observatory, including the Interferometric Tunnel with the VLTI delay lines and the first Auxiliary Telescope. Explanations were given by ESO astronomers and engineers, and the President, a keen amateur astronomer, gained a good impression of the wide range of exciting research programmes that are carried out with the VLT. President Lagos showed a deep interest and impressed everyone present with many highly relevant questions. Having enjoyed the spectacular sunset over the Pacific Ocean from the Residence terrace, the President met informally with the Paranal employees who had gathered for this unique occasion. Later, President Lagos visited the VLT Control Room from where the four 8.2-m Unit Telescopes and the VLT Interferometer (VLTI) are operated. Here, the President took part in an observing sequence of the spiral galaxy NGC 1097 (see PR Photo 35d/04) from the console of the MELIPAL telescope. After one more visit to the telescope platform at the top of Paranal, the President and his wife left the Observatory in the morning of December 10, 2004, flying back to Santiago.
ESO PR Photo 35e/04 was obtained during President Lagos' meeting with ESO Staff at the Paranal Residencia. On ESO PR Photo 35f/04, President Lagos and Mrs. Luisa Duran de Lagos are seen at a quiet moment during the visit to the VLT Control Room, together with Prof. Maria Teresa Ruiz (far right), Head of the Astronomy Department at the Universidad de Chile, and the ESO Director General. ESO PR Photo 35g/04 shows President Lagos with some ESO staff members in the Paranal Residencia. VLT obtains a splendid photo of a unique galaxy, NGC 1097. ESO PR Photo 35d/04, Spiral Galaxy NGC 1097 (Melipal + VIMOS), is an almost-true-colour composite based on three images made with the multi-mode VIMOS instrument on the 8.2-m Melipal (Unit Telescope 3) of ESO's Very Large Telescope. They were taken on the night of December 9-10, 2004, in the presence of the President of the Republic of Chile, Ricardo Lagos. Details are available in the Technical Note below. A unique and very beautiful image was obtained with the VIMOS instrument with President Lagos at the control desk.
Located at a distance of about 45 million light-years in the southern constellation Fornax (the Furnace), NGC 1097 is a relatively bright, barred spiral galaxy of type SBb, seen face-on. At magnitude 9.5, and thus just 25 times fainter than the faintest object that can be seen with the unaided eye, it appears in small telescopes as a bright, circular disc. ESO PR Photo 35d/04, taken on the night of December 9 to 10, 2004 with the VIsible Multi-Object Spectrograph (VIMOS), a four-channel multi-object spectrograph and imager attached to the 8.2-m VLT Melipal telescope, shows that the real structure is much more complicated. NGC 1097 is indeed a most interesting object in many respects. As this striking image reveals, NGC 1097 presents a centre that consists of a broken ring of bright knots surrounding the galaxy's nucleus. The sizes of these knots - presumably gigantic bubbles of hydrogen atoms having lost one electron (HII regions) through the intense radiation from luminous massive stars - range from roughly 750 to 2000 light-years. The presence of these knots suggests that an energetic burst of star formation has recently occurred. NGC 1097 is also known as an example of the so-called LINER (Low-Ionization Nuclear Emission Region Galaxies) class. Objects of this type are believed to be low-luminosity examples of Active Galactic Nuclei (AGN), whose emission is thought to arise from matter (gas and stars) falling into oblivion in a central black hole. There is indeed much evidence that a supermassive black hole is located at the very centre of NGC 1097, with a mass of several tens of millions of times the mass of the Sun. This is at least ten times more massive than the central black hole in our own Milky Way. However, NGC 1097 possesses only a comparatively faint nucleus, and the black hole in its centre must be on a very strict "diet": only a small amount of gas and stars is apparently being swallowed by the black hole at any given moment.
A turbulent past As can be clearly seen in the upper part of PR Photo 35d/04, NGC 1097 also has a small galaxy companion; it is designated NGC 1097A and is located about 42,000 light-years away from the centre of NGC 1097. This peculiar elliptical galaxy is 25 times fainter than its big brother and has a "box-like" shape, not unlike NGC 6771, the smallest of the three galaxies that make up the famous Devil's Mask, cf. ESO PR Photo 12/04. There is evidence that NGC 1097 and NGC 1097A have been interacting in the recent past. Another piece of evidence for this galaxy's tumultuous past is the presence of four jets - not visible on this image - discovered in the 1970s on photographic plates. These jets are now believed to be the captured remains of a disrupted dwarf galaxy that passed through the inner part of the disc of NGC 1097. Moreover, another interesting feature of this active galaxy is the fact that no less than two supernovae were detected inside it within a time span of only four years. SN 1999eu was discovered by Japanese amateur Masakatsu Aoki (Toyama, Japan) on November 5, 1999. This 17th-magnitude supernova was a peculiar Type II supernova, the end result of the core collapse of a very massive star. In the night of January 5 to 6, 2003, Reverend Robert Evans (Australia) discovered another Type II supernova of 15th magnitude. Also visible in this very nice image, which was taken during very good sky conditions - the seeing was well below 1 arcsec - are a multitude of background galaxies of different colours and shapes. Given that the total exposure time for this three-colour image was just 11 min, it is a remarkable feat, demonstrating once again the very high efficiency of the VLT.
Aladin Lite: Lightweight sky atlas for browsers
NASA Astrophysics Data System (ADS)
Boch, Thomas
2014-02-01
Aladin Lite is a lightweight version of the Aladin tool, running in the browser and geared towards simple visualization of a sky region. It allows visualization of image surveys (JPEG multi-resolution HEALPix all-sky surveys) and permits superimposing tabular (VOTable) data and footprints (STC-S). Aladin Lite is powered by HTML5 canvas technology, is easily embeddable on any web page, and can also be controlled through a JavaScript API.
NASA Astrophysics Data System (ADS)
Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada
2010-07-01
Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spectral information, and the automatic morphological endmember extraction (AMEE) and spatial-spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the abundance estimates based on the endmembers derived by the different methods is also studied. Experiments are conducted using a hyperspectral data set collected by the NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from the U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
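The effect of compression on extracted endmembers is commonly quantified with the spectral angle between the signature obtained from the original data and the one obtained from the decompressed data; a minimal sketch (with made-up four-band signatures, not Cuprite data) is:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two endmember signatures; a
    standard measure of how much lossy compression perturbs an
    extracted endmember (0 means identical spectral direction)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

original = np.array([0.2, 0.5, 0.9, 0.4])   # made-up reflectance signature
degraded = original + 0.01                  # e.g., after lossy compression
identical = spectral_angle(original, original)
perturbed = spectral_angle(original, degraded)
```

Because the angle ignores overall scaling, it isolates changes in spectral shape, which is what endmember-based unmixing is sensitive to.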
Compression for radiological images
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking artifacts of the DCT except in areas of very high contrast.
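A DCT-based compression step of the kind described, keeping only the largest transform coefficients of a block, can be sketched as follows. The 8x8 block size matches common JPEG practice, while coefficient-magnitude truncation (instead of JPEG's quantization tables) is a simplification for illustration:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def compress_block(block, keep=10):
    """Toy lossy step: 2D DCT of a square block, keep only the `keep`
    largest-magnitude coefficients, then inverse transform."""
    C = dct_matrix(len(block))
    coeffs = C @ block @ C.T          # forward 2D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return C.T @ coeffs @ C           # inverse 2D DCT

# A smooth brightness ramp: its DCT energy sits in very few coefficients.
block = np.add.outer(np.arange(8.0), np.arange(8.0))
rec = compress_block(block, keep=10)
err = float(np.max(np.abs(rec - block)))
```

For smooth content like this ramp, a handful of coefficients reconstructs the block almost exactly, which is why local-brightness-change coding compresses so well.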
Novel Algorithm for Classification of Medical Images
NASA Astrophysics Data System (ADS)
Bhushan, Bharat; Juneja, Monika
2010-11-01
Content-based image retrieval (CBIR) methods in medical image databases have been designed to support specific tasks, such as retrieval of medical images. These methods cannot be transferred to other medical applications, since different imaging modalities require different types of processing. To enable content-based queries in diverse collections of medical images, the retrieval system must be familiar with the current image class prior to query processing. Further, almost all such systems deal only with the DICOM imaging format. In this paper, a novel algorithm for the classification of medical images according to their modalities, based on energy information obtained from the wavelet transform, is described. Two types of wavelets have been used, and it is shown that the energy obtained in either case is quite distinct for each body part. This technique can be successfully applied to different image formats; results are shown for the JPEG imaging format.
NASA Astrophysics Data System (ADS)
2005-09-01
The Atacama Pathfinder Experiment (APEX) project celebrates the inauguration of its outstanding 12-m telescope, located on the 5100-m-high Chajnantor plateau in the Atacama Desert (Chile). The APEX telescope, designed to work at sub-millimetre wavelengths in the 0.2 to 1.5 mm range, successfully passed its Science Verification phase in July and has since been performing regular science observations. This new front-line facility provides access to the "Cold Universe" with unprecedented sensitivity and image quality. After months of careful efforts to set up the telescope to work at the best possible technical level, those involved in the project are looking with satisfaction at the fruit of their labour: APEX is not only fully operational, it has already provided important scientific results. "The superb sensitivity of our detectors together with the excellence of the site allow fantastic observations that would not be possible with any other telescope in the world," said Karl Menten, Director of the group for Millimeter and Sub-Millimeter Astronomy at the Max-Planck-Institute for Radio Astronomy (MPIfR) and Principal Investigator of the APEX project. ESO PR Photo 30/05: Sub-Millimetre Image of a Stellar Cradle. Caption: ESO PR Photo 30/05 is an image of the giant molecular cloud G327 taken with APEX. More than 5000 spectra were taken in the J=3-2 line of the carbon monoxide molecule (CO), one of the best tracers of the molecular clouds in which star formation takes place. The bright peak in the north of the cloud is an evolved star-forming region, where the gas is heated by a cluster of new stars. The most interesting region in the image is totally inconspicuous in CO: the G327 hot core, as seen in methanol contours.
It is a truly exceptional source, one of the richest sources of emission from complex organic molecules in the Galaxy (see spectrum at bottom). Credit: Wyrowski et al. (map), Bisschop et al. (spectrum). Millimetre and sub-millimetre astronomy opens exciting new possibilities in the study of the first galaxies to have formed in the Universe and of the formation processes of stars and planets. In particular, APEX allows astronomers to study the chemistry and physical conditions of molecular clouds, that is, dense regions of gas and dust in which new stars are forming. Among the first studies made with APEX, astronomers took a first glimpse deep into cradles of massive stars, observing for example the molecular cloud G327 and measuring significant emission in carbon monoxide and complex organic molecules (see ESO PR Photo 30/05). The official inauguration of the APEX telescope will start in San Pedro de Atacama on September 25th. The Ambassadors in Chile of some of ESO's member states, the Intendente of the Chilean Region II, the Mayor of San Pedro, the Executive Director of the Chilean Science Agency (CONICYT), the Presidents of the Communities of Sequitor and Toconao, as well as representatives of the Ministry of Foreign Affairs and Universities in Chile, will join ESO's Director General, Dr. Catherine Cesarsky, the Chairman of the APEX Board and MPIfR director, Prof. Karl Menten, and the Director of the Onsala Space Observatory, Prof. Roy Booth, in a celebration that will be held in San Pedro de Atacama. The next day, the delegation will visit the APEX base camp in Sequitor, near San Pedro, from where the telescope is operated, as well as the APEX site on the 5100-m-high Llano de Chajnantor.
Using Purpose-Built Functions and Block Hashes to Enable Small Block and Sub-file Forensics
2010-01-01
We tested precarve using the nps-2009-canon2-gen6 disk image (Garfinkel et al., 2009), which was created with a 32 MB SD card, together with analysis of n-grams in the fragments. [Fig. 1: usage of a 160 GB iPod as reported by iTunes 8.2.1 (6), as reported by the file system, and as computed with random sampling; iTunes usage is actually in GiB, even though the program displays the "GB" label.]
Barisoni, Laura; Troost, Jonathan P; Nast, Cynthia; Bagnasco, Serena; Avila-Casado, Carmen; Hodgin, Jeffrey; Palmer, Matthew; Rosenberg, Avi; Gasim, Adil; Liensziewski, Chrysta; Merlino, Lino; Chien, Hui-Ping; Chang, Anthony; Meehan, Shane M; Gaut, Joseph; Song, Peter; Holzman, Lawrence; Gibson, Debbie; Kretzler, Matthias; Gillespie, Brenda W; Hewitt, Stephen M
2016-07-01
The multicenter Nephrotic Syndrome Study Network (NEPTUNE) digital pathology scoring system employs a novel and comprehensive methodology to document pathologic features from whole-slide images, immunofluorescence and ultrastructural digital images. To estimate inter- and intra-reader concordance of this descriptor-based approach, data from 12 pathologists (eight NEPTUNE and four non-NEPTUNE) with experience from training to 30 years were collected. A descriptor reference manual was generated and a webinar-based protocol for consensus/cross-training implemented. Intra-reader concordance for 51 glomerular descriptors was evaluated on jpeg images by seven NEPTUNE pathologists scoring 131 glomeruli three times (Tests I, II, and III), each test following a consensus webinar review. Inter-reader concordance of glomerular descriptors was evaluated in 315 glomeruli by all pathologists; interstitial fibrosis and tubular atrophy (244 cases, whole-slide images) and four ultrastructural podocyte descriptors (178 cases, jpeg images) were evaluated once by six and five pathologists, respectively. Cohen's kappa for inter-reader concordance for 48/51 glomerular descriptors with sufficient observations was moderate (0.40
Optimal color coding for compression of true color images
NASA Astrophysics Data System (ADS)
Musatenko, Yurij S.; Kurashov, Vitalij N.
1998-11-01
In this paper we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and then to compress the basis functions instead of the planes. To do this, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
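The decorrelation step described above can be sketched in a few lines; this is a generic PCA/KL illustration in numpy, not the authors' fast low-memory basis-construction algorithm:

```python
import numpy as np

# Generic sketch of KL (PCA) decorrelation of color planes; synthetic data,
# not the paper's fast basis-construction algorithm.
rng = np.random.default_rng(0)
h, w = 16, 16
base = rng.normal(size=(h, w))
# three strongly correlated synthetic "color planes"
planes = np.stack([base + 0.1 * rng.normal(size=(h, w)) for _ in range(3)])
X = planes.reshape(3, -1)                    # 3 x N matrix of pixel vectors
Xc = X - X.mean(axis=1, keepdims=True)
C = np.cov(Xc)                               # 3x3 inter-plane covariance
eigvals, eigvecs = np.linalg.eigh(C)         # KL basis = eigenvectors of C
Y = eigvecs.T @ Xc                           # decorrelated KL planes
# the transformed planes are mutually uncorrelated: cov(Y) is diagonal
assert np.allclose(np.cov(Y), np.diag(eigvals))
```

Each row of `Y` can then be compressed independently; because the components are uncorrelated, bit allocation per component is much simpler than for the original correlated planes.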
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable-rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O
2009-01-01
We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
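The correlation-sorting idea above can be sketched as follows: segments of a 1-D signal are stacked as rows of a 2-D "image", then reordered so adjacent rows are maximally similar, which 2-D image coders can exploit. The greedy nearest-neighbour ordering here is an illustrative assumption, not the paper's exact preprocessing:

```python
import numpy as np

# Hedged sketch of correlation sorting for 2-D compression of a 1-D signal.
rng = np.random.default_rng(3)
sig = np.cumsum(rng.normal(size=1024))        # smooth synthetic "EMG-like" signal
rows = sig.reshape(16, 64)                    # 2-D arrangement: 16 segments of 64 samples
order = [0]
remaining = set(range(1, 16))
while remaining:
    last = rows[order[-1]]
    # greedily pick the remaining segment most correlated with the last placed one
    nxt = max(remaining, key=lambda r: np.corrcoef(last, rows[r])[0, 1])
    order.append(nxt)
    remaining.remove(nxt)
sorted_img = rows[order]                      # image handed to JPEG2000 / H.264 intra
assert sorted(order) == list(range(16))       # a permutation: fully invertible
```

The permutation `order` must be transmitted as side information so the decoder can restore the original segment order after image decompression.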
Compression of CCD raw images for digital still cameras
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania
2005-03-01
Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
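The block-adaptive idea can be sketched as follows; the predictor set (left or top neighbour), the 4x4 block size, and the per-block mode flag are illustrative assumptions, not the paper's exact scheme, and the entropy-coding stage is omitted:

```python
import numpy as np

# Hedged sketch of block-adaptive lossless prediction for raw sensor data.
def encode_block(b):
    # residuals under two causal predictors: left neighbour and top neighbour
    res_left = b - np.pad(b, ((0, 0), (1, 0)))[:, :-1]
    res_top = b - np.pad(b, ((1, 0), (0, 0)))[:-1, :]
    # choose the predictor with the smaller residual magnitude; signal 1 bit
    if np.abs(res_left).sum() <= np.abs(res_top).sum():
        return 0, res_left
    return 1, res_top

def decode_block(mode, res):
    b = np.zeros_like(res)
    h, w = res.shape
    for i in range(h):
        for j in range(w):
            pred = (b[i, j - 1] if j else 0) if mode == 0 else (b[i - 1, j] if i else 0)
            b[i, j] = res[i, j] + pred
    return b

rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(4, 4))
mode, res = encode_block(block)
assert np.array_equal(decode_block(mode, res), block)   # perfectly lossless
```

In a real CFA pipeline the same-colour neighbours (two pixels away in a Bayer pattern) would be used as predictors, and the residuals would go to an entropy coder.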
NASA Astrophysics Data System (ADS)
Ma, Long; Zhao, Deping
2011-12-01
Spectral imaging technology has been used mostly in remote sensing, but has recently been extended to new areas requiring high-fidelity color reproduction, such as telemedicine and e-commerce. These spectral imaging systems are important because they offer improved color reproduction quality not only for a standard observer under a particular illumination, but for any other individual exhibiting normal color vision capability under another illumination. A means of browsing the resulting archives is needed. In this paper, the authors present a new spectral image browsing architecture. The architecture for browsing is expressed as follows: (1) the spectral domain of the spectral image is reduced with the PCA transform; as a result of the PCA transform, the eigenvectors and the eigenimages are obtained. (2) The eigenimages are quantized to the original bit depth of the spectral image (e.g., if the spectral image is originally 8-bit, the eigenimages are quantized to 8 bits), and 32-bit floating-point numbers are used for the eigenvectors. (3) The first eigenimage is losslessly compressed by JPEG-LS; the other eigenimages are lossy compressed by the wavelet-based SPIHT algorithm. For experimental evaluation, the following measures were used: PSNR as the measurement of spectral accuracy, and ΔE for the evaluation of color reproducibility, with standard illuminant D65 as the light source. To test the proposed method, we used the FOREST and CORAL spectral image databases, containing 12 and 10 spectral images, respectively. The images were acquired in the range of 403-696 nm. The size of the images was 128x128, the number of bands was 40, and the resolution was 8 bits per sample. Our experiments show that the proposed compression method is suitable for browsing, i.e., for visual purposes.
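Steps (1) and (2) of the architecture above can be sketched as follows; the SVD-based PCA and the uniform 8-bit quantizer are generic illustrations (the JPEG-LS and SPIHT coding stages of step (3) are omitted):

```python
import numpy as np

# Hedged sketch: PCA across the spectral dimension of a (tiny) synthetic cube,
# then 8-bit quantization of the eigenimages, as in steps (1)-(2).
rng = np.random.default_rng(1)
bands, npix = 40, 64                     # e.g. one 8x8 spatial tile, 40 bands
cube = rng.random((bands, npix))
mean = cube.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(cube - mean, full_matrices=False)
k = 5                                    # keep the k most significant components
eigvecs = u[:, :k]                       # kept as 32-bit floats in the paper
eigimgs = eigvecs.T @ (cube - mean)      # k eigenimages
# quantize each eigenimage to the original 8-bit depth
lo, hi = eigimgs.min(), eigimgs.max()
q = np.round((eigimgs - lo) / (hi - lo) * 255).astype(np.uint8)
deq = q.astype(float) / 255 * (hi - lo) + lo
# uniform quantization error is bounded by half a quantization step
assert np.abs(deq - eigimgs).max() <= 0.5 * (hi - lo) / 255 + 1e-12
```

The browser would then reconstruct an approximate cube as `mean + eigvecs @ deq`, trading spectral accuracy for a much smaller payload.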
High-resolution seismic-reflection data offshore of Dana Point, southern California borderland
Sliter, Ray W.; Ryan, Holly F.; Triezenberg, Peter J.
2010-01-01
The U.S. Geological Survey collected high-resolution shallow seismic-reflection profiles in September 2006 in the offshore area between Dana Point and San Mateo Point in southern Orange and northern San Diego Counties, California. Reflection profiles were located to image folds and reverse faults associated with the San Mateo fault zone and high-angle strike-slip faults near the shelf break (the Newport-Inglewood fault zone) and at the base of the slope. Interpretations of these data were used to update the USGS Quaternary fault database and in shaking hazard models for the State of California developed by the Working Group for California Earthquake Probabilities. This cruise was funded by the U.S. Geological Survey Coastal and Marine Catastrophic Hazards project. Seismic-reflection data were acquired aboard the R/V Sea Explorer, which is operated by the Ocean Institute at Dana Point. A SIG ELC820 minisparker seismic source and a SIG single-channel streamer were used. More than 420 km of seismic-reflection data were collected. This report includes maps of the seismic-survey sections, linked to Google Earth software, and digital data files showing images of each transect in SEG-Y, JPEG, and TIFF formats.
NASA Astrophysics Data System (ADS)
Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.
2014-04-01
The aim of the project was to develop software that, on the basis of an image of a greenhouse tomato, allows for the extraction of its characteristics. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program enables processing of pictures in JPEG format, acquisition of statistical information from the picture, and export of this information to an external file. The software is intended to batch-analyze the collected research material, with the obtained information saved as a CSV file. The program analyzes 33 independent parameters that describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but can also be used for the analysis of other fruits and vegetables of a spherical shape.
NASA Astrophysics Data System (ADS)
Aizenberg, Evgeni; Bigio, Irving J.; Rodriguez-Diaz, Eladio
2012-03-01
The Fourier descriptors paradigm is a well-established approach for affine-invariant characterization of shape contours. In the work presented here, we extend this method to images and obtain a 2D Fourier representation that is invariant to image rotation. The proposed technique retains phase uniqueness, and therefore structural image information is not lost. Rotation-invariant phase coefficients were used to train a single multi-valued neuron (MVN) to recognize satellite and human face images rotated by a wide range of angles. Experiments yielded 100% and 96.43% classification rates for the two data sets, respectively. Recognition performance was additionally evaluated under the effects of lossy JPEG compression and additive Gaussian noise. Preliminary results show that the derived rotation-invariant features combined with the MVN provide a promising scheme for efficient recognition of rotated images.
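As background for the paradigm being extended, the classical 1-D contour case can be verified in a few lines: rotating a complex contour z = x + iy multiplies every Fourier coefficient by the same unit-magnitude factor, so the descriptor magnitudes are rotation-invariant. This is the standard result, not the paper's 2-D phase-based construction:

```python
import numpy as np

# Classical 1-D Fourier descriptors: magnitudes are invariant to contour rotation.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = 3 * np.cos(theta) + 1j * 2 * np.sin(theta)   # an ellipse as a complex signal
rotated = contour * np.exp(1j * 0.7)                   # rotate the shape by 0.7 rad
fd = np.abs(np.fft.fft(contour))                       # descriptor magnitudes
fd_rot = np.abs(np.fft.fft(rotated))
assert np.allclose(fd, fd_rot)                         # unchanged by rotation
```

Discarding phase this way loses structural information, which is precisely what motivates the paper's phase-preserving 2-D extension.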
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than H.264/AVC, the state-of-the-art video compression standard.
Next VLT Instrument Ready for the Astronomers
NASA Astrophysics Data System (ADS)
2000-02-01
FORS2 Commissioning Period Successfully Terminated. The commissioning of FORS2, the second FOcal Reducer/low dispersion Spectrograph, a multi-mode astronomical instrument at KUEYEN, the second unit telescope of the ESO Very Large Telescope, was successfully finished today. This important work - which may be likened to the test driving of a new car model - took place during two periods, from October 22 to November 21, 1999, and January 22 to February 8, 2000. The overall goal was to thoroughly test the functioning of the new instrument, verify its conformity to specifications, and optimize its operation at the telescope. FORS2 is now ready to be handed over to the astronomers on April 1, 2000. Observing time for a six-month period until October 1 has already been allocated to a large number of research programmes. Two of the images that were obtained with FORS2 during the commissioning period are shown here. An early report about this instrument is available as ESO PR 17/99. The many modes of FORS2: The FORS Commissioning Team carried out a comprehensive test programme for all observing modes. These tests were done with "observation blocks (OBs)" that describe the set-up of the instrument and telescope for each exposure in all details, e.g., the position in the sky of the object to be observed, filters, exposure time, etc. Whenever an OB is "activated" from the control console, the corresponding observation is automatically performed. Additional information about the VLT Data Flow System is available in ESO PR 10/99. The FORS2 observing modes include direct imaging, long-slit and multi-object spectroscopy, exactly as in its twin, FORS1 at ANTU. In addition, FORS2 contains the "Mask Exchange Unit", a motorized magazine that holds 10 masks made of thin metal plates into which the slits are cut by means of a laser.
The advantage of this particular observing method is that more spectra (of more objects) can be taken with a single exposure (up to approximately 80) and that the shape of the slits can be adapted to the shape of the objects, thus increasing the scientific return. Results obtained so far look very promising. To further increase the scientific power of the FORS2 instrument in the spectroscopic mode, a number of new optical dispersion elements ("grisms", i.e., a combination of a grating and a glass prism) have been added. They give the scientists a greater choice of spectral resolution and wavelength range. Another mode that is new to FORS2 is the high time resolution mode. It was demonstrated with the Crab pulsar, cf. ESO PR 17/99, and promises very interesting scientific returns. Images from the FORS2 Commissioning Phase: The two composite images shown below were obtained during the FORS2 commissioning work. They are based on three exposures through different optical broadband filters (B: 429 nm central wavelength, 88 nm FWHM (Full Width at Half Maximum); V: 554/111 nm; R: 655/165 nm). All were taken with the 2048 x 2048 pixel CCD detector with a field of view of 6.8 x 6.8 arcmin²; each pixel measures 24 µm square. They were flatfield-corrected and bias-subtracted, scaled in intensity, and some cosmetic cleaning was performed, e.g. removal of bad columns on the CCD. North is up and East is left. Tarantula Nebula in the Large Magellanic Cloud (ESO Press Photo 05a/00): The Tarantula Nebula in the Large Magellanic Cloud, as obtained with FORS2 at KUEYEN during the recent Commissioning period. It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (30 sec exposure, image quality 0.75 arcsec; here rendered in blue colour), V (15 sec, 0.70 arcsec; green) and R (10 sec, 0.60 arcsec; red).
The full-resolution version of this photo retains the original pixels. 30 Doradus, also known as the Tarantula Nebula, or NGC 2070, is located in the Large Magellanic Cloud (LMC), some 170,000 light-years away. It is one of the largest known star-forming regions in the Local Group of Galaxies. It was first catalogued as a star, but then recognized to be a nebula by the French astronomer A. Lacaille in 1751-52. The Tarantula Nebula is the only extra-galactic nebula which can be seen with the unaided eye. It contains in its centre the open stellar cluster R 136 with many of the largest, hottest, and most massive stars known. Radio Galaxy Centaurus A (ESO Press Photo 05b/00): The radio galaxy Centaurus A, as obtained with FORS2 at KUEYEN during the recent Commissioning period. It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (300 sec exposure, image quality 0.60 arcsec; here rendered in blue colour), V (240 sec, 0.60 arcsec; green) and R (240 sec, 0.55 arcsec; red). The full-resolution version of this photo retains the original pixels. ESO Press Photo 05c/00 shows an area north-west of the centre of Centaurus A, with a detailed view of the dust lane and clusters of luminous blue stars. The normal version of this photo retains the original pixels. The new FORS2 image of Centaurus A, also known as NGC 5128, is an example of how frontier science can be combined with esthetic aspects. This galaxy is a most interesting object for the present attempts to understand active galaxies. It is being investigated by means of observations in all spectral regions, from radio via infrared and optical wavelengths to X- and gamma-rays. It is one of the most extensively studied objects in the southern sky.
FORS2, with its large field-of-view and excellent optical resolution, makes it possible to study the global context of the active region in Centaurus A in great detail. Note for instance the great number of massive and luminous blue stars that are well resolved individually, in the upper right and lower left in PR Photo 05b/00. Centaurus A is one of the foremost examples of a radio-loud active galactic nucleus (AGN). On images obtained at optical wavelengths, thick dust layers almost completely obscure the galaxy's centre. This structure was first reported by Sir John Herschel in 1847. Until 1949, NGC 5128 was thought to be a strange object in the Milky Way, but it was then identified as a powerful radio galaxy and designated Centaurus A. The distance is about 10-13 million light-years (3-4 Mpc) and the apparent visual magnitude is about 8, or 5 times too faint to be seen with the unaided eye. There is strong evidence that Centaurus A is a merger of an elliptical with a spiral galaxy, since elliptical galaxies would not have had enough dust and gas to form the young, blue stars seen along the edges of the dust lane. The core of Centaurus A is the smallest known extragalactic radio source, only 10 light-days across. A jet of high-energy particles from this centre is observed in radio and X-ray images. The core probably contains a supermassive black hole with a mass of about 100 million solar masses. This is the caption to ESO PR Photos 05a-c/00. They may be reproduced if credit is given to the European Southern Observatory.
Morgan, Karen L.M.; Krohn, M. Dennis; Doran, Kara; Guy, Kristy K.
2013-01-01
The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On February 7, 2012, the USGS conducted an oblique aerial photographic survey from Pensacola, Fla., to Breton Islands, La., aboard a Piper Navajo Chieftain at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The photographs provided here are Joint Photographic Experts Group (JPEG) images. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of the feature in the images (see the Navigation Data page). These photos document the configuration of the barrier islands and other coastal features at the time of the survey. The header of each photo is populated with time of collection, Global Positioning System (GPS) position (latitude and longitude), keywords, credit, artist (photographer), caption, copyright, and contact information using EXIFtools (Subino and others, 2012). Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the assigned location, name, date, and time the photograph was taken, along with links to the photograph. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files (see the Photos and Maps page).
Lu, Hao; Papathomas, Thomas G; van Zessen, David; Palli, Ivo; de Krijger, Ronald R; van der Spek, Peter J; Dinjens, Winand N M; Stubbs, Andrew P
2014-11-25
In the prognosis and therapeutics of adrenal cortical carcinoma (ACC), the selection of the most proliferatively active areas (hotspots) within a slide and objective quantification of the immunohistochemical Ki67 Labelling Index (LI) are of critical importance. In addition to intratumoral heterogeneity in proliferative rate, i.e. levels of Ki67 expression within a given ACC, lack of uniformity and reproducibility in the method of quantification of Ki67 LI may confound an accurate assessment of Ki67 LI. We have implemented an open-source toolset, Automated Selection of Hotspots (ASH), for automated hotspot detection and quantification of Ki67 LI. ASH utilizes the NanoZoomer Digital Pathology Image (NDPI) splitter to convert the NDPI-format digital slide scanned on the Hamamatsu instrument into a conventional TIFF or JPEG image, followed by automated segmentation and an adaptive step-finding hotspot detection algorithm. Quantitative hotspot ranking is provided by functionality from the open-source application ImmunoRatio as part of the ASH protocol. The output is a ranked set of hotspots with concomitant quantitative values based on whole-slide ranking. We have thus implemented open-source automated detection and quantitative ranking of hotspots to support histopathologists in selecting the 'hottest' hotspot areas in adrenocortical carcinoma. To provide the wider community with easy access to ASH, we implemented a Galaxy virtual machine (VM) of ASH, which is available from http://bioinformatics.erasmusmc.nl/wiki/Automated_Selection_of_Hotspots . The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_216.
A two-factor error model for quantitative steganalysis
NASA Astrophysics Data System (ADS)
Böhme, Rainer; Ker, Andrew D.
2006-02-01
Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
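The two-factor structure argued for above can be illustrated with a small simulation: each message-length estimation error is modeled as an image-specific bias plus within-image noise, and the classical variance decomposition separates the two factors exactly. The chosen distributions (Laplace bias, Gaussian noise) are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Hedged illustration of a two-factor error model for quantitative steganalysis:
# err[i, j] = b[i] + w[i, j] (between-image bias + within-image noise).
rng = np.random.default_rng(4)
n_img, n_rep = 200, 50
b = rng.laplace(scale=1.0, size=n_img)            # between-image error component
w = rng.normal(scale=0.5, size=(n_img, n_rep))    # within-image error component
err = b[:, None] + w
grand = err.mean()
ss_total = ((err - grand) ** 2).sum()
ss_between = n_rep * ((err.mean(axis=1) - grand) ** 2).sum()
ss_within = ((err - err.mean(axis=1, keepdims=True)) ** 2).sum()
# one-way ANOVA identity: total variation splits exactly into the two factors
assert np.isclose(ss_total, ss_between + ss_within)
```

Comparing `ss_between` with `ss_within` is one way to estimate the relative weight of the two error sources, which is what makes a relative-bias model testable on benchmark data.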
"Why Take Physics" poster (normal- and high-resolution JPEG downloads; Spanish version available); Recruiting Physics Students in High School (FED newsletter article).
First-Ever Census of Variable Mira-Type Stars in Galaxy Outside the Local Group
NASA Astrophysics Data System (ADS)
2003-05-01
First-Ever Census of Variable Mira-Type Stars in Galaxy Outside the Local Group. Summary: An international team led by ESO astronomer Marina Rejkuba [1] has discovered more than 1000 luminous red variable stars in the nearby elliptical galaxy Centaurus A (NGC 5128). Brightness changes and periods of these stars were measured accurately and reveal that they are mostly cool long-period variable stars of the so-called "Mira-type". The observed variability is caused by stellar pulsation. This is the first time a detailed census of variable stars has been accomplished for a galaxy outside the Local Group of Galaxies (of which the Milky Way galaxy in which we live is a member). It also opens an entirely new window towards the detailed study of the stellar content and evolution of giant elliptical galaxies. These massive objects are presumed to play a major role in the gravitational assembly of galaxy clusters in the Universe (especially during the early phases). This unprecedented research project is based on near-infrared observations obtained over more than three years with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal Observatory. PR Photo 14a/03: Colour image of the peculiar galaxy Centaurus A. PR Photo 14b/03: Location of the fields in Centaurus A now studied. PR Photo 14c/03: "Field 1" in Centaurus A (visual light; FORS1). PR Photo 14d/03: "Field 2" in Centaurus A (visual light; FORS1). PR Photo 14e/03: "Field 1" in Centaurus A (near-infrared; ISAAC). PR Photo 14f/03: "Field 2" in Centaurus A (near-infrared; ISAAC). PR Photo 14g/03: Light variation of six variable stars in Centaurus A. PR Photo 14h/03: Light variation of stars in Centaurus A (animated GIF). PR Photo 14i/03: Light curves of four variable stars in Centaurus A.
Mira-type variable stars Among the stars that are visible in the sky to the unaided eye, roughly one out of three hundred (0.3%) displays brightness variations and is referred to by astronomers as a "variable star". The percentage is much higher among large, cool stars ("red giants") - in fact, almost all luminous stars of that type are variable. Such stars are known as Mira-variables ; the name comes from the most prominent member of this class, Omicron Ceti in the constellation Cetus (The Whale), also known as "Stella Mira" (The Wonderful Star). Its brightness changes with a period of 332 days and it is about 1500 times brighter at maximum (visible magnitude 2 and one of the fifty brightest stars in the sky) than at minimum (magnitude 10 and only visible in small telescopes) [2]. Stars like Omicron Ceti are nearing the end of their life. They are very large and have sizes from a few hundred to about a thousand times that of the Sun. The brightness variation is due to pulsations during which the star's temperature and size change dramatically. In the following evolutionary phase, Mira-variables will shed their outer layers into surrounding space and become visible as planetary nebulae with a hot and compact star (a "white dwarf") at the middle of a nebula of gas and dust (cf. the "Dumbbell Nebula" - ESO PR Photo 38a-b/98 ). Several thousand Mira-type stars are currently known in the Milky Way galaxy and a few hundred have been found in other nearby galaxies, including the Magellanic Clouds. 
The peculiar galaxy Centaurus A ESO PR Photo 14a/03 [Preview - JPEG: 400 x 451 pix - 53k] [Normal - JPEG: 800 x 903 pix - 528k] [Hi-Res - JPEG: 3612 x 4075 pix - 8.4M] ESO PR Photo 14b/03 [Preview - JPEG: 570 x 400 pix - 52k] [Normal - JPEG: 1140 x 800 pix - 392k] ESO PR Photo 14c/03 [Preview - JPEG: 400 x 451 pix - 61k] [Normal - JPEG: 800 x 903 pix - 768k] ESO PR Photo 14d/03 [Preview - JPEG: 400 x 451 pix - 56k] [Normal - JPEG: 800 x 903 pix - 760k] Captions: PR Photo 14a/03 is a colour composite photo of the peculiar galaxy Centaurus A (NGC 5128), obtained with the Wide-Field Imager (WFI) camera at the ESO/MPG 2.2-m telescope on La Silla. It is based on a total of nine 3-min exposures made on March 25, 1999, through different broad-band optical filters (B(lue) - total exposure time 9 min - central wavelength 456 nm - here rendered as blue; V(isual) - 540 nm - 9 min - green; I(nfrared) - 784 nm - 9 min - red); it was prepared from files in the ESO Science Data Archive by ESO astronomer Benoît Vandame. The elliptical shape and the central dust band, the imprint of a galaxy collision, are well visible. PR Photo 14b/03 identifies the two regions of Centaurus A (the rectangles in the upper left and lower right inserts) in which a search for variable stars was made during the present research project: "Field 1" is located in an area north-east of the centre in which many young stars are present. This is also the direction in which an outflow ("jet") is seen on deep optical and radio images. "Field 2" is positioned in the galaxy's halo, south of the centre. High-resolution, very deep colour photos of these two fields and their immediate surroundings are shown in PR Photos 14c-d/03. They were produced by means of CCD-frames obtained in July 1999 through U- and V-band optical filters with the VLT FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope on Paranal.
Note the great variety of object types and colours, including many background galaxies which are seen through these less dense regions of Centaurus A. The total exposure time was 30 min in each filter and the seeing was excellent, 0.5 arcsec. The original pixel size is 0.196 arcsec and the fields measure 6.7 x 6.7 arcmin² (2048 x 2048 pix²). North is up and East is left on all photos. Centaurus A (NGC 5128) is the nearest giant galaxy, at a distance of about 13 million light-years. It is located outside the Local Group of Galaxies to which our own galaxy, the Milky Way, and its satellite galaxies, the Magellanic Clouds, belong. Centaurus A is seen in the direction of the southern constellation Centaurus. It is of elliptical shape and is currently merging with a companion galaxy, making it one of the most spectacular objects in the sky, cf. PR Photo 14a/03. It possesses a very heavy black hole at its centre (see ESO PR 04/01) and is a source of strong radio and X-ray emission. During the present research programme, two regions in Centaurus A were searched for stars of variable brightness; they are located in the periphery of this peculiar galaxy, cf. PR Photos 14b-d/03. An outer field ("Field 1") coincides with a stellar shell with many blue and luminous stars produced by the on-going galaxy merger; it lies at a distance of 57,000 light-years from the centre. The inner field ("Field 2") is more crowded and is situated at a projected distance of about 30,000 light-years from the centre.
Three years of VLT observations ESO PR Photo 14e/03 [Preview - JPEG: 400 x 447 pix - 120k] [Normal - JPEG: 800 x 894 pix - 992k] ESO PR Photo 14f/03 [Preview - JPEG: 400 x 450 pix - 96k] [Normal - JPEG: 800 x 899 pix - 912k] Caption: PR Photos 14e-f/03 are colour composites of two small fields ("Field 1" and "Field 2") in the peculiar galaxy Centaurus A (NGC 5128), based on exposures through three near-infrared filters (the J-, H- and K-bands at wavelengths 1.2, 1.6 and 2.2 µm, respectively) with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal observatory. The corresponding areas are outlined within the two inserts in PR Photo 14b/03 and may be compared with the visual images from FORS1 (PR Photos 14c-d/03). These ISAAC photos are the deepest near-infrared images ever obtained in this galaxy and show thousands of its stars of different colours. In the present colour-coding, the redder an image, the cooler is the star. The original pixel size is 0.15 arcsec and both fields measure 2.5 x 2.5 arcmin². North is up and East is left. Under normal circumstances, any team of professional astronomers will have access to the largest telescopes in the world for only a very limited number of consecutive nights each year. However, extensive searches for variable stars like the present one require repeated observations lasting minutes-to-hours over periods of months-to-years. It is thus not feasible to perform such observations in the classical way in which the astronomers travel to the telescope each time. Fortunately, the operational system of the VLT at the ESO Paranal Observatory (Chile) is also geared to encompass this kind of long-term programme. Between April 1999 and July 2002, the 8.2-m VLT ANTU telescope on Cerro Paranal in Chile was operated in service mode on many occasions to obtain K-band images of the two fields in Centaurus A by means of the near-infrared ISAAC multi-mode instrument.
Each field was observed over 20 times in the course of this three-year period; some of the images were obtained during exceptional seeing conditions of 0.30 arcsec. One set of complementary optical images was obtained with the FORS1 multi-mode instrument (also on VLT ANTU) in July 1999. Each image from the ISAAC instrument covers a sky field measuring 2.5 x 2.5 arcmin². The combined images, encompassing a total exposure of 20 hours, are indeed the deepest infrared images ever made of the halo of any galaxy as distant as Centaurus A, about 13 million light-years. Discovering one thousand Mira variables ESO PR Photo 14g/03 [Preview - JPEG: 400 x 480 pix - 61k] [Normal - JPEG: 800 x 961 pix - 808k] ESO PR Photo 14h/03 [Animated GIF: 263 x 267 pix - 56k] ESO PR Photo 14i/03 [Preview - JPEG: 480 x 400 pix - 33k] [Normal - JPEG: 959 x 800 pix - 152k] Captions: PR Photo 14g/03 shows a zoomed-in area within "Field 2" in Centaurus A, from the ISAAC colour image shown in PR Photo 14e/03. Nearly all red stars in this area are of the variable Mira-type. The brightness variation of some stars (labelled A-D) is demonstrated in the animated-GIF image PR Photo 14h/03. The corresponding light curves (brightness over the pulsation period) are shown in PR Photo 14i/03. Here the abscissa indicates the pulsation phase (one full period corresponds to the interval from 0 to 1) and the ordinate unit is near-infrared Ks-magnitude. One magnitude corresponds to a difference in brightness of a factor of about 2.5. Once the lengthy observations were completed, two further steps were needed to identify the variable stars in Centaurus A. First, each ISAAC frame was individually processed to identify the thousands and thousands of faint point-like images (stars) visible in these fields.
Next, all images were compared using a special software package ("DAOPHOT") to measure the brightness of all these stars in the different frames, i.e., as a function of time. While most stars in these fields, as expected, were found to have constant brightness, more than 1000 stars displayed variations in brightness with time; this is by far the largest number of variable stars ever discovered in a galaxy outside the Local Group of Galaxies. The detailed analysis of this enormous dataset took more than a year. Most of the variable stars were found to be of the Mira-type and their light curves (brightness over the pulsation period) were measured, cf. PR Photo 14i/03. For each of them, values of the characterising parameters (the period in days and the brightness amplitude in magnitudes) were determined. A catalogue of the newly discovered variable stars in Centaurus A has now been made available to the astronomical community via the European research journal Astronomy & Astrophysics. Marina Rejkuba is pleased and thankful: "We are really very fortunate to have carried out this ambitious project so successfully. It all depended critically on different factors: the repeated granting of crucial observing time by the ESO Observing Programmes Committee over different observing periods in the face of rigorous international competition, the stability and reliability of the telescope and the ISAAC instrument over a period of more than three years and, not least, the excellent quality of the service mode observations, so efficiently performed by the staff at the Paranal Observatory." What have we learned about Centaurus A? The present study of variable stars in this giant elliptical galaxy is the first-ever of its kind. Although the evaluation of the very large observational data material is still not finished, it has already led to a number of very useful scientific results.
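The light curves in PR Photo 14i/03 are plotted against pulsation phase rather than time. Once a period has been found, folding sparse multi-year observations onto a single 0-to-1 phase axis is the standard step; below is a minimal sketch in Python, with an invented sampling and period for illustration (this is not the team's actual pipeline):

```python
import math

def fold_light_curve(times_days, mags, period_days):
    """Map observation epochs onto pulsation phase in [0, 1) and sort by phase."""
    phases = [(t % period_days) / period_days for t in times_days]
    pairs = sorted(zip(phases, mags))
    return [p for p, _ in pairs], [m for _, m in pairs]

# Toy Mira-like light curve: ~20 epochs spread over ~3 years, 300-day period.
t = [i * 57.9 for i in range(20)]
m = [19.0 + 0.4 * math.cos(2 * math.pi * ti / 300.0) for ti in t]  # toy Ks mags
phase, mag = fold_light_curve(t, m, 300.0)  # ready to plot against phase 0..1
```

Plotting `mag` against `phase` recovers a smooth pulsation curve from observations taken years apart, which is exactly how widely spaced service-mode epochs become usable light curves.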
Confirmation of the presence of an intermediate-age population Based on earlier research (optical and near-IR colour-magnitude diagrams of the stars in the fields), the present team of astronomers had previously detected the presence of intermediate-age and young stellar populations in the halo of this galaxy. The youngest stars appear to be aligned with the powerful jet produced by the massive black hole at the centre. Some of the very luminous red variable stars now discovered confirm the presence of a population of intermediate-age stars in the halo of this galaxy. This also contributes to our understanding of how giant elliptical galaxies form. New measurement of the distance to Centaurus A The pulsation of Mira-type variable stars obeys a period-luminosity relation: the longer its period, the more luminous a Mira-type star is. This fact makes it possible to use Mira-type stars as "standard candles" (objects of known intrinsic luminosity) for distance determinations. They have in fact often been used in this way to measure accurate distances to more nearby objects, e.g., to individual clusters of stars and to the centre of our Milky Way galaxy, and also to galaxies in the Local Group, in particular the Magellanic Clouds. This method works particularly well with infrared measurements and the astronomers were now able to measure the distance to Centaurus A in this new way. They found 13.7 ± 1.9 million light-years, in general agreement with, and thus confirming, distances obtained by other methods. Study of stellar population gradients in the halo of a giant elliptical galaxy The two fields studied here contain different populations of stars. A clear dependence on the location (a "gradient") within the galaxy is observed, which can be due to differences in chemical composition or age, or to a combination of both. Understanding the cause of this gradient will provide additional clues to how Centaurus A (and indeed all giant elliptical galaxies) was formed and has since evolved.
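The distance estimate rests on the distance modulus m − M = 5 log10(d / 10 pc), with the absolute magnitude M taken from a Mira period-luminosity relation. The sketch below uses placeholder P-L coefficients and an invented example star, purely to show the mechanics; it is not the calibration used in the study:

```python
import math

# Placeholder K-band P-L relation M_K = a * (log10 P - 2.38) + b.
# These coefficients are illustrative only, not the team's calibration.
A_PL, B_PL = -3.5, -7.25
LY_PER_PC = 3.2616  # light-years per parsec

def mira_distance_ly(period_days: float, apparent_k: float) -> float:
    """Distance in light-years from one Mira's period and mean K magnitude."""
    absolute_k = A_PL * (math.log10(period_days) - 2.38) + B_PL
    distance_pc = 10 ** ((apparent_k - absolute_k + 5.0) / 5.0)
    return distance_pc * LY_PER_PC

# A hypothetical 400-day Mira observed at K ~ 20.1 lands in the 13-14 million
# light-year range, the order of the distance reported for Centaurus A.
d = mira_distance_ly(400.0, 20.08)
```

Averaging such estimates over many Miras beats down the scatter of the P-L relation, which is why a census of 1000 variables yields a competitive distance.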
Comparison with other well-known nearby galaxies Past searches have discovered Mira-type variable stars throughout the Milky Way, our home galaxy, and in other nearby galaxies in the Local Group. However, there are no giant elliptical galaxies like Centaurus A in the Local Group and this is the first time it has been possible to identify this kind of star in that type of galaxy. The present investigation now opens a new window towards studies of the stellar constituents of such galaxies.
Method for measuring anterior chamber volume by image analysis
NASA Astrophysics Data System (ADS)
Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli
2007-12-01
Anterior chamber volume (ACV) is very important for an oculist making a pathological diagnosis for patients with ocular diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files that have been converted from medical images acquired with the anterior-chamber optical coherence tomographer (AC-OCT) and its corresponding image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. This shows that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual preprocessing of the images.
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
Adapting the ISO 20462 softcopy ruler method for online image quality studies
NASA Astrophysics Data System (ADS)
Burns, Peter D.; Phillips, Jonathan B.; Williams, Don
2013-01-01
In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics are not able to measure distortions with the same performance across their possible range and across different image contents. The crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first strategy is based on grouping the images according to their spatial complexity. The second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.
SINFONI Opens with Upbeat Chords
NASA Astrophysics Data System (ADS)
2004-08-01
First Observations with New VLT Instrument Hold Great Promise [1] Summary The European Southern Observatory, the Max-Planck-Institute for Extraterrestrial Physics (Garching, Germany) and the Nederlandse Onderzoekschool Voor Astronomie (Leiden, The Netherlands), and with them all European astronomers, are celebrating the successful accomplishment of "First Light" for the Adaptive Optics (AO) assisted SINFONI ("Spectrograph for INtegral Field Observation in the Near-Infrared") instrument, just installed on ESO's Very Large Telescope at the Paranal Observatory (Chile). This is the first facility of its type ever installed on an 8-m class telescope, now providing exceptional observing capabilities for the imaging and spectroscopic studies of very complex sky regions, e.g. stellar nurseries and black-hole environments, also in distant galaxies. Following smooth assembly at the 8.2-m VLT Yepun telescope of SINFONI's two parts, the Adaptive Optics Module that feeds the SPIFFI spectrograph, the "First Light" spectrum of a bright star was recorded with SINFONI in the early evening of July 9, 2004. The following thirteen nights served to evaluate the performance of the new instrument and to explore its capabilities by test observations on a selection of exciting astronomical targets. They included the Galactic Centre region, already imaged with the NACO AO-instrument on the same telescope. Unprecedented high-angular resolution spectra and images were obtained of stars in the immediate vicinity of the massive central black hole. During the night of July 15 - 16, SINFONI recorded a flare from this black hole in great detail. Other interesting objects observed during this period include galaxies with active nuclei (e.g., the Circinus Galaxy and NGC 7469), a merging galaxy system (NGC 6240) and a young starforming galaxy pair at redshift 2 (BX 404/405). 
These first results were greeted with enthusiasm by the team of astronomers and engineers [2] from the consortium of German and Dutch Institutes and ESO who have worked on the development of SINFONI for nearly 7 years. The work on SINFONI at Paranal included successful commissioning in June 2004 of the Adaptive Optics Module built by ESO, during which exceptional test images were obtained of the main-belt asteroid (22) Kalliope and its moon. Moreover, the ability was demonstrated to correct the atmospheric turbulence by means of even very faint "guide" objects (magnitude 17.5), crucial for the observation of astronomical objects in many parts of the sky. SPIFFI - SPectrometer for Infrared Faint Field Imaging - was developed at the Max Planck Institute for Extraterrestrische Physik (MPE) in Garching (Germany), in a collaboration with the Nederlandse Onderzoekschool Voor Astronomie (NOVA) in Leiden and the Netherlands Foundation for Research in Astronomy (ASTRON), and ESO. PR Photo 24a/04: SINFONI Adaptive Optics Module at VLT Yepun (June 2004) PR Photo 24b/04: SINFONI at VLT Yepun, now fully assembled (July 2004) PR Photo 24c/04: "First Light" image from the SINFONI Adaptive Optics Module PR Photo 24d/04: AO-corrected Image of a 17.5-magnitude Star PR Photo 24e/04: SINFONI undergoing Balancing and Flexure Tests at VLT Yepun PR Photo 24f/04: SINFONI "First Light" Spectrum of HD 130163 PR Photo 24g/04: Members of the SINFONI Adaptive Optics Module Commissioning Team PR Photo 24h/04: Members of the SPIFFI Commissioning Team PR Photo 24i/04: The Principle of Integral Field Spectroscopy (IFS) PR Photo 24j/04: The Orbital Motion of Linus around (22) Kalliope PR Photo 24k/04: SINFONI Observations of the Galactic Centre Region PR Photo 24l/04: SINFONI Observations of the Circinus Galaxy PR Photo 24m/04: SINFONI Observations of the AGN Galaxy NGC 7469 PR Photo 24n/04: SINFONI Observations of NGC 6240 PR Photo 24o/04: SINFONI Observations of the Young Starforming Galaxies BX 
404/405 PR Video Clip 07/04: The Orbital Motion of Linus around (22) Kalliope SINFONI: A powerful and complex instrument ESO PR Photo 24a/04 The SINFONI Adaptive Optics Module Commissioning Setup [Preview - JPEG: 427 x 400 pix - 230k] [Normal - JPEG: 854 x 800 pix - 551k] ESO PR Photo 24b/04 SINFONI at the VLT Yepun Cassegrain Focus [Preview - JPEG: 414 x 400 pix - 222k] [Normal - JPEG: 827 x 800 pix - 574k] Captions: ESO PR Photo 24a/04 shows the SINFONI Adaptive Optics Module, installed at the 8.2-m VLT YEPUN telescope during the first tests in June 2004. At this time, SPIFFI was not yet installed. The blue ring is the Adaptive Optics Module. The yellow parts, with a weight of 800 kg, simulate SPIFFI. The IR Test Imager is located inside the yellow ring. On ESO PR Photo 24b/04, the Near-Infrared Spectrograph SPIFFI in its cryogenic aluminium cylinder has now been attached. A new and very powerful astronomical instrument, a world-leader in its field, has been installed on the Very Large Telescope at the Paranal Observatory (Chile), cf. PR Photos 24a-b/04. Known as SINFONI ("Spectrograph for INtegral Field Observation in the Near-Infrared"), it was mounted in two steps at the Cassegrain focus of the 8.2-m VLT YEPUN telescope. First Light of the completed instrument was achieved on July 9, 2004 and various test observations during the subsequent commissioning phase were carried out with great success. SINFONI has two parts, the Near Infrared Integral Field Spectrograph, also known as SPIFFI (SPectrometer for Infrared Faint Field Imaging), and the Adaptive Optics Module. SPIFFI was developed at the Max Planck Institute for Extraterrestrische Physik (MPE) (Garching, Germany), in a collaboration with the Nederlandse Onderzoekschool Voor Astronomie (NOVA) in Leiden, the Netherlands Foundation for Research in Astronomy (ASTRON) (The Netherlands), and the European Southern Observatory (ESO) (Garching, Germany).
The Adaptive Optics (AO) Module was developed by ESO. Once fully commissioned, SINFONI will provide adaptive-optics assisted Integral Field Spectroscopy in the near-infrared 1.1 - 2.45 µm waveband. This advanced technique provides simultaneous spectra of numerous adjacent regions in a small sky field, e.g., of an interstellar nebula, the stars in a dense stellar cluster or a galaxy. Astronomers refer to these data as "3D-spectra" or "data cubes" (i.e., one spectrum for each small area in the two-dimensional sky field), cf. Appendix A. The SINFONI Adaptive Optics Module is based on a 60-element curvature system, similar to the Multi Application Curvature Adaptive Optics devices (MACAO), developed by the ESO Adaptive Optics Department and of which three have already been installed at the VLT (ESO PR 11/03); the last one in August 2004. Provided a sufficiently bright reference source ("guide star") is available within 60 arcsec of the observed field, the SINFONI AO module will ultimately offer diffraction-limited images (resolution 0.050 arcsec) at a wavelength of 2 µm. At the centre of the field, partial correction can be performed with guide stars as faint as magnitude 17.5. In about 6-months' time, it will benefit from a sodium Laser Guide Star, achieving a much better sky coverage than what is now possible. SPIFFI is a fully cryogenic near-infrared integral field spectrograph allowing observers to obtain simultaneously spectra of 2048 pixels within a 64 x 32 pixel field-of-view. In conjunction with the AO Module, it performs spectroscopy with slit-width sampling at the diffraction limit of an 8-m class telescope. For observations of very faint, extended celestial objects, the spatial resolution can be degraded so that both sensitivity and field-of-view are increased. SPIFFI works in the near-infrared wavelength range (1.1 - 2.45 µm) with a moderate spectral resolving power (R = 1500 to 4500). 
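The angular resolutions quoted for SINFONI can be sanity-checked against the telescope's diffraction limit; for an 8.2-m aperture at 2.2 µm the Rayleigh criterion gives roughly 0.07 arcsec, of the same order as the ~0.05-0.06 arcsec figures in the text. A quick sketch (our own helper function, not ESO software):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # ~206265 arcsec per radian

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: theta = 1.22 * lambda / D, converted to arcseconds."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

print(round(diffraction_limit_arcsec(2.2e-6, 8.2), 3))  # -> 0.068 arcsec
```

The slightly smaller 0.050 arcsec value quoted at 2 µm corresponds to a λ/D-style estimate; either way, AO correction is what lets an 8-m telescope actually reach this regime from the ground.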
More information about the way SPIFFI functions will be found in Appendix A. "First Light" with SINFONI's Adaptive Optics Module ESO PR Photo 24c/04 SINFONI AO "First Light" Image [Preview - JPEG: 400 x 482 pix - 106k] [Normal - JPEG: 800 x 963 pix - 256k] ESO PR Photo 24d/04 AO-corrected image of 17.5-magnitude Star [Preview - JPEG: 509 x 400 pix - 80k] [Normal - JPEG: 1018 x 800 pix - 182k] Captions: ESO PR Photo 24c/04 shows the "First Light" image obtained with the SINFONI AO Module and a high-angular-resolution near-infrared Test Camera during the night of May 31 - June 1, 2004. The magnitude of the observed star is 11 and the seeing conditions were median. The diffraction limit at wavelength 2.2 µm of the 8.2-m telescope (FWHM 0.06 arcsec) was reached and is indicated by the bar. ESO PR Photo 24d/04: Image of a very faint guide star (visual magnitude 17.5), obtained with the SINFONI AO Module. To the right, the seeing-limited K-band image (FWHM 0.38 arcsec). To the left, the AO-corrected image (FWHM 0.145 arcsec). The ability to perform AO corrections on very faint guide objects is essential for SINFONI in order to observe very faint extragalactic objects. Because of the complexity of SINFONI, with its two modules, it was decided to perform the installation on the 8.2-m VLT Yepun telescope in two steps. The Adaptive Optics module was completely dismounted at ESO-Garching (Germany) and the corresponding 6 tons of equipment was air-freighted from Frankfurt to Santiago de Chile. The subsequent transport by road arrived at the Paranal Observatory on April 21, 2004. After 6 weeks of reintegration and testing in the Integration Hall, the AO Module was mounted on Yepun on May 30 - 31, together with a high-angular-resolution near-infrared Test Camera, cf. PR Photo 24a/04. Technical "First Light" with this system was achieved around midnight on May 31st by observing an 11th-magnitude star, cf.
PR Photo 24c/04, reaching right away the theoretical diffraction limit of the 8.2-m telescope (0.06 arcsec) at this wavelength (2.2 µm). Following this early success, the ESO AO team continued the full on-sky tuning and testing of the AO Module until June 8, setting in particular a new world record by reaching a limiting guide-star magnitude of 17.5, two-and-a-half magnitudes (a factor of 10) fainter than ever achieved with any telescope! The ability to perform AO corrections on very faint guide objects is essential for SINFONI in order to observe very faint extragalactic objects. During this commissioning period, test observations were performed of the binary asteroid (22) Kalliope and its moon Linus. They were made by the ESO AO team and served to demonstrate the high performance of this ESO-built Adaptive Optics (AO) system at near-infrared wavelengths. More information about these observations, including a movie of the orbital motion of Linus, is available in Appendix B. "First Light" with SINFONI ESO PR Photo 24e/04 SINFONI Undergoing Balancing and Flexure Tests at VLT Yepun [Preview - JPEG: 427 x 400 pix - 269k] [Normal - JPEG: 854 x 800 pix - 730k] ESO PR Photo 24f/04 SINFONI "First Light" Spectrum [Preview - JPEG: 427 x 400 pix - 94k] [Normal - JPEG: 854 x 800 pix - 222k] Captions: ESO PR Photo 24e/04 shows SINFONI attached to the Cassegrain focus of the 8.2-m VLT Yepun telescope during balancing and flexure tests. ESO PR Photo 24f/04: "First Light" "data cube" spectrum obtained with SINFONI on the bright star HD 130163 on July 9, 2004, as seen on the science data computer screen. This 7th-magnitude A0 V star was observed in the near-infrared H-band with a moderate seeing of 0.8 arcsec. The width of the slitlets in this image is 0.25 arcsec. The exposure time was 1 second. The fully integrated SPIFFI module was air-freighted from Frankfurt to Santiago de Chile and arrived at Paranal on June 5, 2004.
The subsequent cool-down to -195 °C was done and an extensive test programme was carried out during the next two weeks. Meanwhile, the AO Module was removed from the telescope and the "wedding" with SPIFFI was celebrated on June 20 in the Paranal Integration Hall. All went well and the first AO-corrected test spectra were obtained immediately thereafter. The extensive tests of SINFONI continued at this site until July 7, 2004, when the instrument was declared fit for work at the telescope. The installation at the 8.2-m VLT Yepun telescope was then accomplished on July 8 - 9, cf. PR Photos 24b/04 and 24e/04. "First Light" was achieved in the early evening of July 9, 2004, only 30 min after the telescope enclosure was opened. At 19:30 local time, SINFONI recorded the first AO-corrected "data cube" with spectra of HD 130163, cf. PR Photo 24f/04. This 7th-magnitude star was observed in the near-infrared H-band with a moderate seeing of 0.8 arcsec. Test Observations with SINFONI ESO PR Photo 24k/04 SINFONI Observations of the Galactic Centre [Preview - JPEG: 427 x 400 pix - 213k] [Normal - JPEG: 854 x 800 pix - 511k] ESO PR Photo 24o/04 SINFONI Observations of the Distant Galaxy Pair BX 404/405 [Preview - JPEG: 481 x 400 pix - 86k] [Normal - JPEG: 962 x 800 pix - 251k] Captions: ESO PR Photo 24k/04: The coloured image (background) shows a three-band composite image (H, K, and L-bands) obtained with the AO imager NACO on the 8.2-m VLT Yepun telescope. On July 15, 2004, the new SINFONI instrument, mounted at the Cassegrain focus of the same telescope, observed the innermost region (the central 1 x 1 arcsec) of the Milky Way Galaxy in the combined H+K band (1.45 - 2.45 µm) during a total of 110 min "on-source". The insert (upper left) shows the immediate neighbourhood of the central black hole as seen with SINFONI. The position of the black hole is marked with a yellow circle.
Later in the night (03:37 UT on July 16), a flare from the black hole occurred (a zoom-in is shown in the insert at the lower left) and the first-ever infrared spectrum of this phenomenon was observed. It was also possible to register for the first time in great detail the near-infrared spectra of young massive stars orbiting the black hole; some of these are shown in the inserts at the upper right; stars are identified by their "S"-designations. The lower right inserts show the spectra of stars in "IRS 13 E", a very compact cluster of very young and massive stars, located about 3.5 arcsec to the south-west of the black hole. The wavefront reference ("guide") star employed for these AO observations is comparatively faint (red magnitude approx. 15), and it is located about 20 arcsec away from the field centre. The seeing during these observations was about 0.6 arcsec. The width of the slitlets was 0.025 arcsec. See Appendix G for more detail. ESO PR Photo 24o/04 shows the distant galaxy pair BX 404/405, as recorded in the K-band (wavelength 2 µm, centered on the redshifted H-alpha line), without AO-correction because of the lack of a nearby, sufficiently bright "guide" star. The width of each slitlet was 0.25 arcsec and the seeing about 0.6 arcsec. The integration time on the galaxy was 2 hours "on-source". The image shown has been reconstructed by combining all of the spectral elements around the H-alpha spectral line. The spectrum of BX 405 (upper right) clearly reveals signs of a velocity shear while that of BX 404 does not. This may be a sign of rotation, a possible signature of a young disc in this galaxy. More information can be found in Appendix C. Until July 22, test observations on a number of celestial objects were performed in order to tune the instrument, to evaluate the performance and to demonstrate its astronomical capabilities. In particular, spectra were obtained of various highly interesting celestial objects and sky regions.
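The BX 404/405 image described above was reconstructed by summing the data cube's spectral channels around the redshifted H-alpha line. With a cube ordered (wavelength, y, x), that collapse is a short slice-and-sum; the wavelength grid, line position, and random cube below are assumptions for illustration, not SINFONI pipeline code:

```python
import numpy as np

def line_image(cube, wavelengths, center, width):
    """Sum cube channels whose wavelength lies within +/- width/2 of center."""
    sel = np.abs(wavelengths - center) < width / 2.0
    return cube[sel].sum(axis=0)

# Toy cube: 2048 spectral channels over an assumed 1.95-2.45 um K-band grid,
# on a 64 x 32 spaxel field (the SPIFFI format quoted in the text).
rng = np.random.default_rng(0)
cube = rng.random((2048, 64, 32))
wavelengths = np.linspace(1.95, 2.45, 2048)

# H-alpha (0.6563 um rest wavelength) redshifted to z = 2 lands near 1.97 um.
img = line_image(cube, wavelengths, center=1.97, width=0.01)  # shape (64, 32)
spectrum = cube[:, 30, 15]  # conversely, one spaxel's full spectrum
```

The same cube thus yields both a narrow-band image (revealing morphology in one emission line) and a spectrum per spaxel (revealing the velocity shear mentioned for BX 405), which is the point of integral field spectroscopy.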
Details about these observations (and some images obtained with the AO Module alone) are available in the Appendices to this Press Release: * a video of the motion of the moon Linus around the main-belt asteroid (22) Kalliope, providing the best view of this binary system obtained so far (Appendix B), * images and first-ever detailed spectra of many of the stars that move near the massive black hole at the Galactic Centre, with crucial information on the nature of the individual stars and their motions (Appendix C), * images and spectra of the heavily dust-obscured, active centre of the Circinus galaxy, one of the closest active galaxies, showing ordered rotation in this area and distinct broad and narrow components of the spectral line of Ca7+-ions (Appendix D), * images and spectra of the less obscured central area of NGC 7469, a more distant active galaxy, with spectral lines of molecular hydrogen and carbon monoxide showing a very different distribution of these species (Appendix E), * images and spectra of the Infrared Luminous Galaxy (ULIRG) NGC 6240, a typical galaxy merger, displaying important differences between the two nuclei (Appendix F), and * images and spectra of the young starforming galaxies BX 404/405, casting more light on the formation of disks in spiral galaxies (Appendix G) The SINFONI Teams ESO PR Photo 24g/04 Members of the SINFONI Adaptive Optics Commissioning Team [Preview - JPEG: 646 x 400 pix - 198k] [Normal - JPEG: 1291 x 800 pix - 618k] ESO PR Photo 24h/04 Members of the SPIFFI Commissioning Team [Preview - JPEG: 491 x 400 pix - 193k] [Normal - JPEG: 982 x 800 pix - 482k] Captions: ESO PR Photo 24g/04: Members of the SINFONI Adaptive Optics Commissioning Team in the VLT Control Room in the night between June 7 - 8, 2004.
From left to right and top to bottom: Thomas Szeifert, Sebastien Tordo, Stefan Stroebele, Jerome Paufique, Chris Lidman, Robert Donaldson, Enrico Fedrigo, Markus Kissler-Patig, Norbert Hubin, Henri Bonnet. ESO PR Photo 24h/04: Members of the SPIFFI Commissioning Team on August 17. From left to right, Roberto Abuter, Frank Eisenhauer, Andrea Gilbert and Matthew Horrobin. The first SINFONI results have been greeted with enthusiasm, in particular by the team of astronomers and engineers from the consortium of German and Dutch institutes and ESO who worked on the development of SINFONI for nearly 7 years. Some of the members of the Commissioning Teams are depicted in PR Photos 24g/04 and 24h/04; in addition to the SPIFFI team members present on the second photo, Walter Bornemann, Reinhard Genzel, Hans Gemperlein and Stefan Huber have also been working on the reintegration/commissioning at Paranal. Notes [1] This press release is issued in coordination between ESO, the Max-Planck-Institute for Extraterrestrial Physics (MPE) in Garching, Germany, and the Nederlandse Onderzoekschool Voor Astronomie in Leiden, The Netherlands. A German version is available at http://www.mpg.de/bilderBerichteDokumente/dokumentation/pressemitteilungen/2004/pressemitteilung20040824/index.html and a Dutch version at http://www.astronomy.nl/inhoud/pers/persberichten/30_08_04.html. 
[2] The SINFONI team consists of Roberto Abuter, Andrew Baker, Walter Bornemann, Ric Davies, Frank Eisenhauer (SPIFFI Principal Investigator), Hans Gemperlein, Reinhard Genzel (MPE Director), Andrea Gilbert, Armin Goldbrunner, Matthew Horrobin, Stefan Huber, Christof Iserlohe, Matthew Lehnert, Werner Lieb, Dieter Lutz, Nicole Nesvadba, Claudia Röhrle, Jürgen Schreiber, Linda Tacconi, Matthias Tecza, Niranjan Thatte, Harald Weisz (Max-Planck-Institut für Extraterrestrische Physik, Garching, Germany), Anthony Brown, Paul van der Werf (NOVA, Leiden, The Netherlands), Eddy Elswijk, Johan Pragt, Jan Kragt, Gabby Kroes, Ton Schoenmaker, Rik ter Horst (ASTRON, Dwingeloo, The Netherlands), Henri Bonnet (SINFONI Project Manager), Roberto Castillo, Ralf Conzelmann, Romuald Damster, Bernard Delabre, Christophe Dupuy, Robert Donaldson, Christophe Dumas, Enrico Fedrigo, Gert Finger, Gordon Gillet, Norbert Hubin (Head of Adaptive Optics Dept.), Andreas Kaufer, Franz Koch, Johann Kolb, Andrea Modigliani, Guy Monnet (Head of Telescope Systems Division), Chris Lidman, Jochen Liske, Jean Louis Lizon, Markus Kissler-Patig (SINFONI Instrument Scientist), Jerome Paufique, Juha Reunanen, Silvio Rossi, Riccardo Schmutzer, Armin Silber, Stefan Ströbele (SINFONI System Engineer), Thomas Szeifert, Sebastien Tordo, Leander Mehrgan, Joerg Stegmeier, Reinhold Dorn (European Southern Observatory). 
Contacts Frank Eisenhauer Max-Planck-Institut für Extraterrestrische Physik (MPE) Garching, Germany Phone: +49-89-30000-3563 Email: eisenhau@mpe.mpg.de Paul van der Werf Leiden Observatory Leiden, The Netherlands Phone: +31-71-5275883 Email: pvdwerf@strw.leidenuniv.nl Henri Bonnet European Southern Observatory (ESO) Email: hbonnet@eso.org Reinhard Genzel Max-Planck-Institut für Extraterrestrische Physik (MPE) Garching, Germany Phone: +49-89-30000-3280 Email: Norbert Hubin European Southern Observatory (ESO) Email: nhubin@eso.org Appendix A: Integral Field Spectroscopy as a Powerful Discovery Tool ESO PR Photo 24i/04 ESO PR Photo 24i/04 How Integral Field Spectroscopy Works [Preview - JPEG: 400 x 425 pix - 127k] [Normal - JPEG: 800 x 850 pix - 366k] Caption: ESO PR Photo 24i/04 shows the principle of Integral Field Spectroscopy (IFS). The detailed explanation is found in the text. How does SINFONI work? What is Integral Field Spectroscopy (IFS)? The idea of IFS is to obtain a spectrum of each defined spatial element ("spaxel") in the field-of-view. Several techniques to do this are available - in SINFONI, the slicer principle is applied. This means (PR Photo 24i/04) that * the two-dimensional field-of-view is cut into slices, the so-called slitlets (short slits, in contrast to normal long-slit spectroscopy), * the slitlets are then arranged next to each other to form a pseudo-long-slit, * a grating is used to disperse the light, and * the photons are detected with a near-infrared detector. Following data reduction, the set of generated spectra can be re-arranged in the computer to form a 3-dimensional "data cube" with two spatial dimensions and one wavelength dimension. Thus the term "3D-Spectroscopy" is sometimes used for IFS. 
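The re-arrangement of slitlet spectra into a data cube described above can be sketched in a few lines of NumPy. The array sizes here are hypothetical, chosen only for illustration; they are not the actual SPIFFI detector dimensions:

```python
import numpy as np

# hypothetical detector frame: 32 slitlets, each 64 spaxels long, stacked
# end-to-end into a pseudo-long-slit and dispersed over 2048 wavelength channels
n_slitlets, n_spaxels, n_lambda = 32, 64, 2048
detector = np.zeros((n_slitlets * n_spaxels, n_lambda))

# regroup the stacked slitlet spectra into a (y, x, wavelength) data cube
cube = detector.reshape(n_slitlets, n_spaxels, n_lambda)

# a wavelength slice of the cube is a monochromatic image of the field;
# a single spaxel of the cube is a complete spectrum
image = cube[:, :, 1024]
spectrum = cube[10, 20, :]
```

In this picture the "data cube" of the text is literally a 3-D array: two spatial axes and one wavelength axis, from which either images or spectra can be extracted at will.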
Appendix B: Linus' orbital motion around Kalliope ESO PR Photo 24j/04 ESO PR Photo 24j/04 Asteroid Kalliope and its Moon Linus [Preview - JPEG: 400 x 427 pix - 50k] [Normal - JPEG: 800 x 854 pix - 136k] ESO PR Video 07/04 ESO PR Video 07/04 The Motion of Linus around Kalliope [MPG: 800 x 800 pix - 128k] [AVI : 800 x 800 pix - 176k] [Animated GIF : 800 x 800 pix - 592k] Caption: ESO PR Photo 24j/04 and Video Clip 07/04 show the best-ever images of the moon Linus orbiting Asteroid (22) Kalliope. They were obtained with the SINFONI Adaptive Optics Module and a high-angular-resolution near-infrared Test Camera during commissioning in June 2004. At minimum separation, the satellite approaches Kalliope to 0.33 arcsec, i.e., the angle under which a 1 Euro coin is seen at a distance of 15 kilometers. At maximum separation, the angular distance is nearly twice as large. For clarity, the brightness of the asteroid has been artificially decreased by a factor of 15, to the level of the moon. This image processing technique also makes it possible to perceive the variation of the asteroid's shape as Kalliope spins around its own axis with a period of 4.15 hours. The asteroid, with an angular diameter of 0.11 arcsec, is barely resolved in these VLT images (resolution 0.06 arcsec at wavelength 2.2 µm). The satellite measures about 50 km across and orbits Kalliope at a distance of about 1000 kilometers. ESO Video Clip 07/04 shows the 3.6-day orbital motion of the satellite (moon) Linus around the main-belt asteroid (22) Kalliope. Kalliope orbits the Sun between Mars and Jupiter; it measures about 180 km across and the diameter of its moon is 50 km. This system was observed with the SINFONI AO Module for short periods over four consecutive nights. Linus moves around Kalliope in a circular orbit, at a distance of 1000 km and with a direction of motion similar to the rotation of Kalliope (prograde rotation); the orbital plane of the moon was seen at a 60° angle with respect to the line-of-sight. 
The unobserved parts of this orbit are indicated by a dotted line. A hypothetical observer on the surface of Kalliope would live in a strange world: the days would be just over 4 hours long, and the sky would be filled by a moon five times bigger than our own! The brightness changes of the Linus images are due to variations in the sky conditions at the time of the observations. Rapid changes in the atmosphere result in variations in the sharpness of the corrected images. During the first two nights, seeing conditions were very good, but less so during the last two nights; this can be seen as a slight loss of sharpness of the corresponding satellite images. The discovery of this asteroid satellite, named Linus after the son of Kalliope, the Greek muse of heroic poetry, was first reported in September 2001 by a group of astronomers using the Canada-France-Hawaii Telescope on Mauna Kea (Hawaii, USA). Although Kalliope was previously believed to consist of metal-rich material, the discovery of Linus allowed the scientists to determine the mean density of Kalliope as ~2 g/cm3, a rather low value and not consistent with a metal-rich object. Kalliope is now believed to be a "rubble-pile" stony asteroid. Its porous interior is due to a catastrophic collision with another, smaller asteroid early in its history, which also gave birth to Linus. Other references related to Kalliope can be found in the International Astronomical Union Circular (IAUC) 7703 (2001) and a research article "A low density M-type asteroid in the main-belt" by Margot and Brown (Science 300, 193, 2003). Appendix C: Stars at the Galactic Centre and a Flare from the Black Hole ESO PR Photo 24k/04 ESO PR Photo 24k/04 SINFONI Observations of the Galactic Centre [Preview - JPEG: 427 x 400 pix - 213k] [Normal - JPEG: 854 x 800 pix - 511k] Caption: ESO PR Photo 24k/04: The coloured image (background) shows a three-band composite image (H, K, and L-bands) obtained with the AO imager NACO on the 8.2-m VLT Yepun telescope. 
On July 15, 2004, the new SINFONI instrument, mounted at the Cassegrain focus of the same telescope, observed the innermost region (the central 1 x 1 arcsec) of the Milky Way Galaxy in the combined H+K band (1.45 - 2.45 µm) during a total of 110 min "on-source". The insert (upper left) shows the immediate neighbourhood of the central black hole as seen with SINFONI. The position of the black hole is marked with a yellow circle. Later in the night (03:37 UT on July 16), a flare from the black hole occurred (a zoom-in is shown in the insert at the lower left) and the first-ever infrared spectrum of this phenomenon was observed. It was also possible to register for the first time in great detail the near-infrared spectra of young massive stars orbiting the black hole; some of these are shown in the inserts at the upper right; stars are identified by their "S"-designations. The lower right inserts show the spectra of stars in "IRS 13 E", a very compact cluster of very young and massive stars, located about 3.5 arcsec to the south-west of the black hole. The wavefront reference ("guide") star employed for these AO observations is comparatively faint (red magnitude approx. 15), and it is located about 20 arcsec away from the field centre. The seeing during these observations was about 0.6 arcsec. The width of the slitlets was 0.025 arcsec. The Milky Way Centre is a unique laboratory for studying physical processes that are thought to be common in galactic nuclei. The Galactic Centre is not only the best studied case of a supermassive black hole, but the region also hosts the largest population of high-mass stars in the Galaxy. Diffraction-limited near-IR integral field spectroscopy offers a unique opportunity for exploring in detail the physical phenomena responsible for the active phases of this supermassive black hole, and for studying the dynamics and evolution of the star cluster in its immediate vicinity. 
Earlier observations with the VLT have been described in ESO PR 17/02 and ESO PR 26/03. With the new SINFONI observations, some of which are displayed in PR Photo 24k/04, it was possible to obtain for the first time very detailed near-infrared spectra of several young and massive stars orbiting the black hole at the centre of our Galaxy. The presence of spectral signatures from ionised hydrogen (the Brackett-gamma line) and helium clearly classifies these stars as young, massive early-type stars. They are comparatively short-lived, and the large fraction of such stars in the immediate vicinity of a supermassive black hole is a mystery. The first SINFONI observations of the stellar populations in the innermost Galactic Centre region will now help to explain the origin and formation process of those stars. Moreover, the observed spectral features make it possible to measure their motions along the line-of-sight (the "radial velocities"). Combining them with the motions in the sky (the "proper motions") obtained from previous observations with the NACO instrument (ESO PR 17/02), it is now possible to determine all orbital parameters for the "S"-stars. This in turn makes it possible to measure directly the mass and the distance of the supermassive black hole at the centre of our Galaxy. But not only this! Even more exciting, it became possible to register for the first time the infrared spectrum of a flare from the Galactic Centre black hole (cf. ESO PR 26/03). From the earlier imaging observations, it is known that such outbursts occur approximately once every 4 hours, giving us a uniquely detailed glimpse of a black hole feeding on left-over gas in its close surroundings. It is only the innovative technique of SINFONI - providing spectra for every pixel in a diffraction-limited image - that made it possible to capture the infrared spectrum of such a flare. Such spectra from SINFONI will soon lead to a better understanding of the physics and mechanisms involved in the flare emission. 
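The final step, turning a complete set of orbital parameters into a black-hole mass, is simply Kepler's third law. A minimal sketch, using illustrative order-of-magnitude values for the star S2 (both numbers are assumptions for illustration, not SINFONI-derived results):

```python
# Kepler's third law in solar-system units:
# M [solar masses] = a^3 [AU] / P^2 [years]
a_au = 980.0   # assumed semi-major axis of an S-star orbit (illustrative)
p_yr = 16.0    # assumed orbital period (illustrative)

m_bh = a_au**3 / p_yr**2  # enclosed mass in solar masses
print(f"enclosed mass ~ {m_bh:.1e} solar masses")
```

With values of this order the enclosed mass comes out at a few million solar masses, which is why orbital parameters of individual S-stars weigh the black hole directly.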
Appendix D: The Active Circinus Galaxy ESO PR Photo 24l/04 ESO PR Photo 24l/04 SINFONI Observations of the Circinus Galaxy [Preview - JPEG: 824 x 400 pix - 324k] [Normal - JPEG: 412 x 800 pix - 131k] Caption: ESO PR Photo 24l/04: The Circinus galaxy - one of the nearest galaxies with an active centre (AGN) - was observed in the K-band (wavelength 2 µm) using the nucleus to guide the SINFONI AO Module. The seeing was 0.5 arcsec and the width of each slitlet 0.025 arcsec; the total integration time on the galaxy was 40 min. At the top is a K-band image of the central arcsec of the galaxy (left insert) and a K-band spectrum of the nucleus (right). In the lower half are images (left) in the light of ionised hydrogen (the Brackett-gamma line) and molecular hydrogen lines (H2), together with their combined rotation curve (middle), as well as images of the broad and narrow components of the high excitation [Ca VIII] spectral line (right). The false-colours in the images represent regions of different surface brightness. At a distance of about 13 million light-years, the Circinus galaxy is one of the nearest galaxies with a very active black hole at the centre. It is seen behind a highly obscured sky field, only 3° from the Milky Way main plane in the southern constellation of this name ("The Pair of Compasses"). Using the nucleus of this galaxy to guide the AO Module, SINFONI was able to zoom in on the central arcsec region - only 60 light-years across - and to map the immediate environment of the black hole at the centre, cf. PR Photo 24l/04. The K-band (wavelength 2 µm) image (insert at the upper left) displays a very compact structure; the emission recorded at this wavelength comes from hot dust heated by radiation from the accretion disc around the black hole. However, as may be seen in the two inserts below, both the emission from ionized hydrogen (the Brackett-gamma line) and molecular hydrogen (H2) are more extended, up to about 30 light-years. 
As these spectral lines (cf. the spectral tracing at the upper right) are quite narrow and show ordered rotation up to ±40 km/s, it is likely that they arise from star formation in a disk around the central black hole. A surprise from the SINFONI observations is that the spectral line of Ca7+-ions (seven-times-ionised calcium atoms, or [Ca VIII], which are produced by the ionizing effect of very energetic ultraviolet radiation) in this area appears to have distinct broad and narrow components (images at the lower right). The broad component is centred on the region around the black hole, and probably arises in the so-called "Broad-Line Region". The narrow component is displaced to the north-west and most likely indicates a region where there is a direct line-of-sight from the black hole to some gas clouds. Appendix E: The Active Nucleus in NGC 7469 ESO PR Photo 24m/04 ESO PR Photo 24m/04 SINFONI Observations of NGC 7469 [Preview - JPEG: 470 x 400 pix - 116k] [Normal - JPEG: 939 x 800 pix - 324k] Caption: ESO PR Photo 24m/04: NGC 7469 was observed in the K-band (wavelength 2 µm) using the nucleus to guide the adaptive optics. The width of each slitlet was 0.025 arcsec and the seeing was 1.1 arcsec. The total integration time on the galaxy was 70 min "on-source". To the upper left is a K-band image (2 µm) of the central arcsec of NGC 7469 and to the upper right, the spectrum of the nucleus. To the lower left is an image of the molecular hydrogen line, together with its rotation curve. There is an image in the light of ionized hydrogen (Brackett-gamma line) at the lower middle and an image of the CO 2-0 absorption bandhead which traces young stars (lower right). The galaxy NGC 7469 (seen north of the celestial equator in the constellation Pegasus) also hosts an active galactic nucleus, but contrary to the Circinus galaxy, it is relatively unobscured. 
Since NGC 7469 is at a much larger distance, about 225 million light-years, the 0.15 arcsec resolution achieved by SINFONI here corresponds to about 165 light-years. The K-band image (PR Photo 24m/04) shows the bright, compact nucleus of this galaxy, and the spectrum displays very broad lines of ionized hydrogen (the Brackett-gamma line) and helium. This emission arises in the "Broad-Line" region which is still unresolved, as shown by the Brackett-gamma image. On the other hand, the molecular hydrogen extends up to 650 light-years from the centre and shows an ordered rotation. In contrast, the image obtained in the light of CO-molecules - which directly traces late-type stars typical for starbursts - appears very compact. These results confirm those obtained by means of earlier AO observations, but with the new SINFONI data corresponding to various spectral lines, the detailed, two-dimensional structure and motions close to the central black hole are now clearly revealed for the first time. Appendix F: The Galaxy Merger NGC 6240 ESO PR Photo 24n/04 ESO PR Photo 24n/04 SINFONI Observations of NGC 6240 [Preview - JPEG: 506 x 400 pix - 96k] [Normal - JPEG: 1011 x 800 pix - 277k] Caption: ESO PR Photo 24n/04: The galaxy merger system NGC 6240 was observed with SINFONI in the K-band (wavelength 2 µm). This object has two nuclei; the image of the southern one is also shown enlarged, together with the corresponding spectrum. The width of each slitlet was 0.025 arcsec and the seeing was 0.8 arcsec. The total integration time on the galaxy was 80 min. The false-colours in the images represent regions of different surface brightness. The infrared-luminous galaxy NGC 6240 in the constellation Ophiuchus (The Serpent-holder) is in many ways the prototype of a gas-rich, infrared-(ultra-)luminous galaxy merger. This system has two rapidly rotating, massive bulges/nuclei at a projected angular separation of 1.6 arcsec. 
Each of them contains a powerful starburst region and a luminous, highly obscured, X-ray-emitting supermassive black hole. As such, NGC 6240 is probably a nearby example of dust and gas-rich galaxy merger systems seen at larger distances. NGC 6240 is also the most luminous, nearby source of molecular hydrogen emission. It was observed in the K-band (wavelength 2 µm), using a faint star at a distance of about 35 arcsec as the AO "guide" star. The starburst activity is traced by the ionized gas and occurs mostly at the two nuclei in regions around 650 light-years across. The distribution of the molecular gas is very different. It follows a complex spatial and dynamical pattern with several extended streamers. The high-resolution SINFONI data now make it possible - for the first time - to investigate the distribution and motion of the molecular gas, as well as the stellar population in this galaxy with a "resolution" of about 80 light-years. Appendix G: Motions in the Young Star-Forming Galaxies BX 404/405 ESO PR Photo 24o/04 ESO PR Photo 24o/04 SINFONI Observations of the Distant Galaxy Pair BX 404/405 [Preview - JPEG: 481 x 400 pix - 86k] [Normal - JPEG: 962 x 800 pix - 251k] Caption: ESO PR Photo 24o/04 shows the distant galaxy pair BX 404/405, as recorded in the K-band (wavelength 2 µm, centered on the redshifted H-alpha line), without AO-correction because of the lack of a nearby, sufficiently bright "guide" star. The width of each slitlet was 0.25 arcsec and the seeing about 0.6 arcsec. The integration time on the galaxy was 2 hours "on-source". The image shown has been reconstructed by combining all of the spectral elements around the H-alpha spectral line. The spectrum of BX 405 (upper right) clearly reveals signs of a velocity shear while that of BX 404 does not. This may be a sign of rotation, a possible signature of a young disc in this galaxy. How and when did the discs in spiral galaxies like the Milky Way form? 
This is one of the longest-standing puzzles in modern cosmology. Two general models presently describe how disk galaxies may form. One is based on a scenario in which there is a gentle collapse of gas clouds that collide and lose momentum. They sink towards a "centre", thereby producing a disc of gas in which stars are formed. The other implies that galaxies grow through repeated mergers of smaller gas-rich galaxies. Together they first produce a spherical mass distribution at the centre and any remaining gas then settles into a disk. Recent studies of stars in the Milky Way system and nearby spiral galaxies suggest that the discs now present in these systems formed about 10,000 million years ago. This corresponds to the epoch when we observe galaxies at redshifts of about 1.5 - 2.5. Interestingly, studies of galaxies at these distances seem consistent with current ideas about when disks may have formed, and there is some evidence that most of the mass in the galaxies was also assembled at that time. In any case, the most direct way to verify such a connection is to observe galaxies at redshifts 1.5-2.5, in order to elucidate whether their observed properties are consistent with velocity patterns of rotating disks of gas and stars. This would be visible as a "velocity shear", i.e., a significant difference in velocity of neighbouring regions. In addition, such observations may provide a good test of the above-mentioned hypotheses for how discs may have formed. Various groups of astrophysicists in the US and Europe have developed observational selection criteria which may be used to identify galaxies with properties similar to those expected for young disc galaxies. Observations with SINFONI were made of one of these objects, the galaxy pair BX 404/405 discovered by a group of astronomers at Caltech (USA). For BX 405, clear signs were found of a "velocity shear" like that expected for rotation of a forming disk, but the other object does not show this. 
It may thus be that the properties of star-forming galaxies at this epoch are quite complex and that only some of them have young disks.
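The "velocity shear" measurement described above amounts to mapping the Doppler shift of the H-alpha line centroid across an IFS data cube. A minimal sketch in NumPy (the function and variable names are our own, not SINFONI pipeline code, and the flux-weighted centroid is the simplest of several possible line-fitting choices):

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def velocity_map(cube, wavelengths):
    """Line-of-sight velocity in each spaxel, from the flux-weighted
    centroid of an emission line.

    cube: (ny, nx, n_lambda) fluxes around the redshifted H-alpha line
    wavelengths: (n_lambda,) observed wavelengths (any consistent unit)
    """
    # flux-weighted centroid wavelength in every spaxel
    centroid = (cube * wavelengths).sum(axis=2) / cube.sum(axis=2)
    # Doppler velocity relative to the galaxy's mean redshift
    mean_lambda = centroid.mean()
    return C_KMS * (centroid - mean_lambda) / mean_lambda
```

A rotating disc shows up as a smooth gradient across this map; the shear seen in BX 405 is exactly such a spatial velocity gradient.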
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
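The idea can be illustrated with a toy version of the scheme: iterate a one-level 2-D Haar smoothing and reset the known samples after every pass, so the masked samples relax toward a smooth fill. This is a sketch under simplifying assumptions; the paper's actual realization uses general tensor-product filter banks, not the single Haar level used here:

```python
import numpy as np

def haar_smooth(x):
    # keep only the one-level 2-D Haar approximation band (details zeroed);
    # inverting that transform replaces each 2x2 block by its mean
    a = (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0
    return np.kron(a, np.ones((2, 2)))

def wavelet_fill(data, mask, iters=20):
    """Iteratively interpolate masked samples by wavelet smoothing.

    data: 2-D float array with even side lengths
    mask: boolean array, True where samples are valid
    """
    filled = np.where(mask, data, data[mask].mean())  # start from the mean
    for _ in range(iters):
        smoothed = haar_smooth(filled)
        filled = np.where(mask, data, smoothed)  # known samples stay fixed
    return filled
```

Known samples are never altered, only the holes are repainted, which is what makes the filled field a good input to a JPEG 2000 encoder.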
López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín
2008-01-01
This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from the TIFF-format images were compared with those from the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997
Journal of Chemical Education on CD-ROM, 1999
NASA Astrophysics Data System (ADS)
1999-12-01
The Journal of Chemical Education on CD-ROM contains the text and graphics for all the articles, features, and reviews published in the Journal of Chemical Education. This 1999 issue of the JCE CD series includes all twelve issues of 1999, as well as all twelve issues from 1998 and from 1997, and the September-December issues from 1996. Journal of Chemical Education on CD-ROM is formatted so that all articles on the CD retain as much as possible of their original appearance. Each article file begins with an abstract/keyword page followed by the article pages. All pages of the Journal that contain editorial content, including the front covers, table of contents, letters, and reviews, are included. Also included are abstracts (when available), keywords for all articles, and supplementary materials. The Journal of Chemical Education on CD-ROM has proven to be a useful tool for chemical educators. Like the Computerized Index to the Journal of Chemical Education (1) it will help you to locate articles on a particular topic or written by a particular author. In addition, having the complete article on the CD-ROM provides added convenience. It is no longer necessary to go to the library, locate the Journal issue, and read it while sitting in an uncomfortable chair. With a few clicks of the mouse, you can scan an article on your computer monitor, print it if it proves interesting, and read it in any setting you choose. Searching and Linking JCE CD is fully searchable for any word, partial word, or phrase. Successful searches produce a listing of articles that contain the requested text. Individual articles can be quickly accessed from this list. The Table of Contents of each issue is linked to individual articles listed. There are also links from the articles to any supplementary materials. 
References in the Chemical Education Today section (found in the front of each issue) to articles elsewhere in the issue are also linked to the article, as are WWW addresses and email addresses. If you have Internet access and a WWW browser and email utility, you can go directly to the Web site or prepare to send a message with a single mouse click.
Full-text searching of the entire CD enables you to find the articles you want. Price and Ordering An order form is inserted in this issue that provides prices and other ordering information. If this insert is not available or if you need additional information, contact: JCE Software, University of Wisconsin-Madison, 1101 University Avenue, Madison, WI 53706-1396; phone: 608/262-5153 or 800/991-5534; fax: 608/265-8094; email: jcesoft@chem.wisc.edu. Information about all our publications (including abstracts, descriptions, updates) is available from our World Wide Web site at: http://jchemed.chem.wisc.edu/JCESoft/. Hardware and Software Requirements Hardware and software requirements for JCE CD 1999 are listed in the table below:
Literature Cited 1. Schatz, P. F. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-M. Schatz, P. F.; Jacobsen, J. J. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-W.
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
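A simple stand-in for slice-wise bit allocation is greedy marginal analysis under the classical high-rate model D_i(b) = σ_i² · 2^(−2b). This sketch is only illustrative: the paper's optimal and mixed-model algorithms work on measured rate-distortion curves, not on this textbook model:

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Greedy marginal-analysis bit allocation across slices.

    Assumes D_i(b) = sigma_i^2 * 2^(-2b): each additional bit quarters a
    slice's distortion, so every bit goes to the slice whose distortion
    would drop the most.
    """
    bits = np.zeros(len(variances), dtype=int)
    dist = np.asarray(variances, dtype=float)
    for _ in range(total_bits):
        gain = dist - dist / 4.0        # distortion reduction per extra bit
        i = int(np.argmax(gain))
        bits[i] += 1
        dist[i] /= 4.0
    return bits
```

Because each step spends a bit where the marginal distortion reduction is largest, high-variance slices naturally receive more of the rate budget, which is the qualitative behaviour any optimal allocator must reproduce.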
High-fidelity data embedding for image annotation.
He, Shan; Kirovski, Darko; Wu, Min
2009-02-01
High fidelity is a demanding requirement for data hiding, especially for images with artistic or medical value. This correspondence proposes a high-fidelity image watermarking method for annotation with robustness to moderate distortion. To achieve the high fidelity of the embedded image, we introduce a visual perception model that aims at quantifying the local tolerance to noise for arbitrary imagery. Based on this model, we embed two kinds of watermarks: a pilot watermark that indicates the existence of the watermark and an information watermark that conveys a payload of several dozen bits. The objective is to embed 32 bits of metadata into a single image in such a way that it is robust to JPEG compression and cropping. We demonstrate the effectiveness of the visual model and the application of the proposed annotation technology using a database of challenging photographic and medical images that contain a large amount of smooth regions.
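The perceptually weighted embedding described above can be caricatured with additive spread-spectrum watermarking, using local standard deviation as a crude stand-in for the paper's visual perception model. All names, parameters, and the block-std heuristic here are illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

def local_tolerance(image, k=4):
    # crude stand-in for a perceptual model: per-block standard deviation,
    # so busy regions absorb more watermark energy than flat ones
    h, w = image.shape
    blocks = image.reshape(h // k, k, w // k, k)
    std = blocks.std(axis=(1, 3))
    return np.kron(std, np.ones((k, k)))

def embed(image, key, strength=0.5):
    # add a key-derived pseudo-random +/-1 pattern, scaled by local tolerance
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return image + strength * local_tolerance(image) * pattern

def detect(image, key):
    # correlation detector: large positive statistic means "watermark present"
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return float((image * pattern).mean())
```

The pilot/information split in the paper would amount to embedding two such patterns: one fixed pattern to signal presence, and one modulated by the payload bits.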
HUBBLE SHOWS EXPANSION OF ETA CARINAE DEBRIS
NASA Technical Reports Server (NTRS)
2002-01-01
The furious expansion of a huge, billowing pair of gas and dust clouds is captured in this NASA Hubble Space Telescope comparison image of the supermassive star Eta Carinae. To create the picture, astronomers aligned and subtracted two images of Eta Carinae taken 17 months apart (April 1994, September 1995). Black represents where the material was located in the older image, and white represents the more recent location. (The light and dark streaks that make an 'X' pattern are instrumental artifacts caused by the extreme brightness of the central star. The bright white region at the center of the image results from the star and its immediate surroundings being 'saturated' in one of the images.) Photo Credit: Jon Morse (University of Colorado), Kris Davidson (University of Minnesota), and NASA. Image files in GIF and JPEG format and captions may be accessed on Internet via anonymous ftp from oposite.stsci.edu in /pubinfo.
HOT WHITE DWARF SHINES IN YOUNG STAR CLUSTER
NASA Technical Reports Server (NTRS)
2002-01-01
A dazzling 'jewel-box' collection of over 20,000 stars can be seen in crystal clarity in this NASA Hubble Space Telescope image, taken with the Wide Field and Planetary Camera 2. The young (40-million-year-old) cluster, called NGC 1818, is 164,000 light-years away in the Large Magellanic Cloud (LMC), a satellite galaxy of our Milky Way. The LMC, a site of vigorous current star formation, is an ideal nearby laboratory for studying stellar evolution. In the cluster, astronomers have found a young white dwarf star, which has only very recently formed following the burnout of a red giant. Based on this observation, astronomers conclude that the red giant progenitor star was 7.6 times the mass of our Sun. Previously, astronomers had estimated that stars anywhere from 6 to 10 solar masses would not just quietly fade away as white dwarfs but abruptly self-destruct in torrential explosions. Hubble can easily resolve the star in the crowded cluster and detect its intense blue-white glow from a sizzling surface temperature of 50,000 degrees Fahrenheit. IMAGE DATA Date taken: December 1995 Wavelength: natural color reconstruction from three filters (I,B,U) Field of view: 100 light-years, 2.2 arc minutes TARGET DATA Name: NGC 1818 Distance: 164,000 light-years Constellation: Dorado Age: 40 million years Class: Rich star cluster Apparent magnitude: 9.7 Apparent diameter: 7 arc minutes Credit: Rebecca Elson and Richard Sword, Cambridge UK, and NASA (Original WFPC2 image courtesy J. Westphal, Caltech) Image files are available electronically via the World Wide Web at: http://oposite.stsci.edu/pubinfo/1998/16 and via links in http://oposite.stsci.edu/pubinfo/latest.html or http://oposite.stsci.edu/pubinfo/pictures.html. GIF and JPEG images are available via anonymous ftp to oposite.stsci.edu in /pubinfo/GIF/9816.GIF and /pubinfo/JPEG/9816.jpg.
Morgan, Karen L.M.; Westphal, Karen A.
2016-04-28
The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 9-10, 2008, the USGS conducted an oblique aerial photographic survey from Calcasieu Lake, Louisiana, to Brownsville, Texas, aboard a Cessna C-210 (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes of the beach and nearshore area, and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. The KML file can be found in the kml folder.
Morgan, Karen L.M.; Krohn, M. Dennis
2014-01-01
The U.S. Geological Survey (USGS) conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On November 4-6, 2012, approximately one week after the landfall of Hurricane Sandy, the USGS conducted an oblique aerial photographic survey from Cape Lookout, N.C., to Montauk, N.Y., aboard a Piper Navajo Chieftain (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect post-Hurricane Sandy data for assessing incremental changes in the beach and nearshore area since the last survey in 2009. The data can be used in the assessment of future coastal change. The photographs provided here are Joint Photographic Experts Group (JPEG) images. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of the feature in the images. These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, image name, date, and time each of the 9,481 photographs was taken, along with links to each photograph. The photographs are organized in segments, also referred to as contact sheets, and represent approximately 5 minutes of flight time. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.
Successful "First Light" for the Mid-Infrared VISIR Instrument on the VLT
NASA Astrophysics Data System (ADS)
2004-05-01
Successful "First Light" for the Mid-Infrared VISIR Instrument on the VLT Summary Close to midnight on April 30, 2004, intriguing thermal infrared images of dust and gas heated by invisible stars in a distant region of our Milky Way appeared on a computer screen in the control room of the ESO Very Large Telescope (VLT). These images mark the successful "First Light" of the VLT Imager and Spectrometer in the InfraRed (VISIR), the latest instrument to be installed on this powerful telescope facility at the ESO Paranal Observatory in Chile. The event was greeted with a mixture of delight, satisfaction and some relief by the team of astronomers and engineers from the consortium of French and Dutch Institutes and ESO who have worked on the development of VISIR for around 10 years [1]. Pierre-Olivier Lagage (CEA, France), the Principal Investigator, is content : "This is a wonderful day! A result of many years of dedication by a team of engineers and technicians, who can today be proud of their work. With VISIR, astronomers will have at their disposal a great instrument on a marvellous telescope. And the gain is enormous; 20 minutes of observing with VISIR is equivalent to a whole night of observing on a 3-4m class telescope." Dutch astronomer and co-PI Jan-Willem Pel (Groningen, The Netherlands) adds: "What's more, VISIR features a unique observing mode in the mid-infrared: spectroscopy at a very high spectral resolution. This will open up new possibilities such as the study of warm molecular hydrogen most likely to be an important component of our galaxy." 
PR Photo 16a/04: VISIR under the Cassegrain focus of the Melipal telescope PR Photo 16b/04: VISIR mounted behind the mirror of the Melipal telescope PR Photo 16c/04: Colour composite of the star forming region G333.6-0.2 PR Photo 16d/04: Colour composite of the Galactic Centre PR Photo 16e/04: The Ant Planetary Nebula at 12.8 μm PR Photo 16f/04: The starburst galaxy He2-10 at 11.3μm PR Photo 16g/04: High-resolution spectrum of G333.6-0.2 around 12.8μm PR Photo 16h/04: High-resolution spectrum of the Ant Planetary Nebula around 12.8μm From cometary tails to centres of galaxies The mid-infrared spectral region extends from a few to a few tens of microns in wavelength and provides a unique view of our Universe. Optical astronomy, that is astronomy at wavelengths to which our eyes are sensitive, is mostly directed towards light emitted by gas, be it in stars, nebulae or galaxies. Mid-Infrared astronomy, however, allows us to also detect solid dust particles at temperatures of -200 to +300 °C. Dust is very abundant in the universe in many different environments, ranging from cometary tails to the centres of galaxies. This dust also often totally absorbs and hence blocks the visible light reaching us from such objects. Red light, and especially infrared light, can propagate much better in dust clouds. Many important astrophysical processes occur in regions of high obscuration by dust, most notably star formation and the late stages of their evolution, when stars that have burnt nearly all their fuel shed much of their outer layers and dust grains form in their "stellar wind". Stars are born in so-called molecular clouds. The proto-stars feed from these clouds and are shielded from the outside by them. Infrared is a tool - very much as ultrasound is for medical inspections - for looking into those otherwise hidden regions to study the stellar "embryos". It is thus crucial to also observe the Universe in the infrared and mid-infrared. 
Unfortunately, there are also infrared-emitting molecules in the Earth's atmosphere, e.g. water vapour, nitric oxides, ozone and methane. Because of these gases, the atmosphere is completely opaque at certain wavelengths, except in a few "windows" where it is transparent. Even in these windows, however, the sky and telescope emit radiation in the infrared to an extent that observing in the mid-infrared at night is comparable to trying to do optical astronomy in daytime. Ground-based infrared astronomers have thus become extremely adept at developing special techniques called "chopping" and "nodding" for detecting the extremely faint astronomical signals against this unwanted bright background [3]. VISIR: an extremely complex instrument VISIR - the VLT Imager and Spectrometer in the InfraRed - is a complex multi-mode instrument designed to operate in the 10 and 20 μm atmospheric windows, i.e. at wavelengths up to about 40 times longer than visible light, and to provide images as well as spectra at a wide range of resolving powers up to ~30,000. It can sample images down to the diffraction limit of the 8.2-m Melipal telescope (0.27 arcsec at 10 μm wavelength, corresponding to a resolution of 500 m on the Moon), which is expected to be reached routinely due to the excellent seeing conditions experienced for a large fraction of the time at the VLT [2]. Because at room temperature the metal and glass of VISIR would emit strongly at exactly the same wavelengths and would swamp any faint mid-infrared astronomical signals, the whole VISIR instrument is cooled to a temperature close to -250 °C and its two panoramic 256x256 pixel array detectors to even lower temperatures, only a few degrees above absolute zero. It is also kept in a vacuum tank to prevent the condensation of water and icing that would otherwise occur.
The complete instrument is mounted on the telescope and must remain rigid to within a few thousandths of a millimetre as the telescope moves to acquire and then track objects anywhere in the sky. Needless to say, this makes for an extremely complex instrument and explains the many years needed to develop it and bring it to the telescope on the top of Paranal. VISIR also includes a number of important technological innovations, most notably its unique cryogenic motor drive systems comprising integrated stepper motors, gears and clutches whose shape is similar to that of the box of the famous French Camembert cheese. VISIR is mounted on Melipal ESO PR Photo 16a/04 VISIR under the Cassegrain focus of the Melipal telescope [Preview - JPEG: 400 x 476 pix - 271k] [Normal - JPEG: 800 x 951 pix - 600k] ESO PR Photo 16b/04 VISIR mounted behind the mirror of the Melipal telescope [Preview - JPEG: 400 x 603 pix - 366k] [Normal - JPEG: 800 x 1206 pix - 945k] Caption: ESO PR Photo 16a/04 shows VISIR about to be attached at the Cassegrain focus of the Melipal telescope. In ESO PR Photo 16b/04, VISIR appears much smaller once mounted behind the enormous 8.2-m diameter mirror of the Melipal telescope. The fully integrated VISIR plus all the associated equipment (amounting to a total of around 8 tons) was air-freighted from Paris to Santiago de Chile and arrived at the Paranal Observatory on 25th March after a subsequent 1500 km journey by road. Following tests to confirm that nothing had been damaged, VISIR was mounted on the third VLT telescope "Melipal" on April 27th. PR Photos 16a/04 and 16b/04 show the approximately 1.6 tons of VISIR being mounted at the Cassegrain focus, below the 8.2-m main mirror. First technical light on a star was achieved on April 29th, shortly after VISIR had been cooled down to its operating temperature. This allowed the team to proceed with the first basic operations and tests, including focusing the telescope.
While telescope focusing was one of the difficult and frequent tasks faced by astronomers in the past, this is no longer so with the active optics feature of the VLT telescopes, which, in principle, have to be focused only once, after which they are automatically kept in perfect focus. First images and spectra from VISIR ESO PR Photo 16c/04 Colour composite of the star forming region G333.6-0.2 [Preview - JPEG: 400 x 477 pix - 78k] [Normal - JPEG: 800 x 954 pix - 191k] ESO PR Photo 16d/04 Colour composite of the Galactic Centre [Preview - JPEG: 400 x 478 pix - 159k] [Normal - JPEG: 800 x 955 pix - 348k] Caption: ESO PR Photo 16c/04 is a colour composite image of the visually obscured G333.6-0.2 star-forming region at a distance of nearly 10,000 light-years in our Milky Way galaxy. This image was made by combining three digital images of the intensity of the infrared emission at wavelengths of 11.3 μm (one of the Polycyclic Aromatic Hydrocarbon features, coded blue), 12.8 μm (an emission line of [NeII], coded green) and 19 μm (warm dust emission, coded red). Each pixel subtends 0.127 arcsec and the total field is ~33 x 33 arcsec with North at the top and East to the left. The total integration times were 13 seconds at the shortest and 35 seconds at the longer wavelengths. The brighter spots locate regions where the dust, which obscures all the visible light, has been heated by recently formed stars. ESO PR Photo 16d/04 shows another colour composite, this time of the Galactic Centre at a distance of about 30,000 light-years. It was made by combining images in filters centred at 8.6 μm (Polycyclic Aromatic Hydrocarbon molecular feature - coded blue), 12.8 μm ([NeII] - coded green) and 19.5 μm (coded red). Each pixel subtends 0.127 arcsec and the total field is ~33 x 33 arcsec with North at the top and East to the left. Total integration times were 300, 160 and 300 s for the 3 filters, respectively.
This region is very rich, full of stars, dust, and ionised and molecular gas. One of the scientific goals will be to detect and monitor the signal from the black hole at the centre of our galaxy. ESO PR Photo 16e/04 The Ant Planetary Nebula at 12.8 μm [Preview - JPEG: 400 x 477 pix - 77k] [Normal - JPEG: 800 x 954 pix - 182k] Caption: ESO PR Photo 16e/04 is an image of the "Ant" Planetary Nebula (Mz3) in the narrow-band filter centred at wavelength 12.8 μm. The scale is 0.127 arcsec/pixel and the total field-of-view is 33 x 33 arcsec, with North at the top and East to the left. The total integration time was 200 seconds. Note the diffraction rings around the central star, which confirm that the maximum spatial resolution possible with the 8.2-m telescope is being achieved. ESO PR Photo 16f/04 The starburst galaxy He2-10 at 11.3 μm [Preview - JPEG: 400 x 477 pix - 69k] [Normal - JPEG: 800 x 954 pix - 172k] Caption: ESO PR Photo 16f/04 is an image at wavelength 11.3 μm of the "nearby" (distance about 30 million light-years) blue compact galaxy He2-10, which is actively forming stars. The scale is 0.127 arcsec per pixel and the full field covers 15 x 15 arcsec with North at the top and East to the left. The total integration time for this observation was one hour. Several star-forming regions are detected, as well as a diffuse emission which was unknown until these VISIR observations. The star-forming regions on the left of the image are not visible in optical images. ESO PR Photo 16g/04 High-resolution spectrum of G333.6-0.2 around 12.8 μm [Preview - JPEG: 652 x 400 pix - 123k] [Normal - JPEG: 1303 x 800 pix - 277k] Caption: ESO PR Photo 16g/04 is a reproduction of a high-resolution spectrum of the Ne II line (ionised Neon) at 12.8135 μm of the star-forming region G333.6-0.2 shown in ESO PR Photo 16c/04. This spectrum reveals the complex motions of the ionised gas in this region.
The images are 256 x 256 frames of 50 x 50 micron pixels. The "field" direction is horizontal, with a total slit length of 32.5 arcsec; North is to the left and South is to the right. The dispersion direction is vertical, with the wavelength increasing downward. The total integration time was 80 sec. ESO PR Photo 16h/04 High-resolution spectrum of the Ant nebula around 12.8 μm [Preview - JPEG: 610 x 400 pix - 354k] [Normal - JPEG: 1219 x 800 pix - 901k] Caption: ESO PR Photo 16h/04 is a reproduction of a high-resolution spectrum of the Ne II line (ionised Neon) at 12.8135 microns of the Ant Planetary Nebula, also known as Mz-3, shown in ESO PR Photo 16e/04. The technical details are similar to those of ESO PR Photo 16g/04. The total integration time was 120 sec. The photos above resulted from some of the first observational tests with VISIR. PR Photo 16c/04 shows the scientific "First Light" image, obtained one day later on April 30th, of a visually obscured star-forming region nearly 10,000 light-years away in our galaxy, the Milky Way. The picture shown here is a false-colour image made by combining three digital images of the intensity of the infrared emission from this region at wavelengths of 11.3 μm (one of the Polycyclic Aromatic Hydrocarbon - PAH - features), 12.8 μm (an emission line of ionised neon) and 19 μm (warm dust emission). Ten times sharper Until now, an elegant way to avoid the problems caused by the emission and absorption of the atmosphere was to fly infrared telescopes on satellites, as was done in the highly successful IRAS and ISO missions and currently with the Spitzer observatory. For both technical and cost reasons, however, such telescopes have so far been limited to only 60-85 cm in diameter. While very sensitive, therefore, the spatial resolution (sharpness) delivered by these telescopes is 10 times worse than that of the 8.2-m diameter VLT telescopes.
They have also not been equipped with the very high spectral resolution capability that is a feature of the VISIR instrument, which is thus expected to remain the instrument of choice for a wide range of studies for many years to come, despite the competition from space. More information [1]: The consortium of institutes responsible for building the VISIR instrument under contract to ESO comprises the CEA/DSM/DAPNIA, Saclay, France - led by the Principal Investigator (PI), Pierre-Olivier Lagage - and the Netherlands Foundation for Research in Astronomy/ASTRON (Dwingeloo, The Netherlands), with Jan-Willem Pel from Groningen University as Co-PI for the spectrometer. [2]: Stellar radiation on its way to the observer is also affected by the turbulence of the Earth's atmosphere. This is the effect which makes the stars twinkle to the human eye. While the general public enjoys this phenomenon as something that makes the night sky interesting and entertaining, the twinkling is a major concern for amateur and professional astronomers, as it smears out the optical images. Infrared radiation is less affected by this effect. Therefore an instrument like VISIR can make full use of the extremely high optical quality of modern telescopes like the VLT. [3]: Observations from the ground at wavelengths of 10 to 20 μm are particularly difficult because this is the wavelength region in which both the telescope and the atmosphere emit most strongly. To minimize this background, the images shown here were made by tilting the telescope secondary mirror every few seconds (chopping) and moving the whole telescope every minute (nodding) so that the unwanted telescope and sky background emission could be measured and subtracted from the science images faster than it varies.
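The chop-and-nod technique in note [3] boils down to a double difference. The numbers below are invented for illustration, but the arithmetic shows how a faint source survives while a bright, slowly drifting background cancels:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.zeros((8, 8))
truth[4, 4] = 5.0                      # faint point source (arbitrary units)

def frame(on_source, background):
    """One exposure: bright background, optional source, detector noise."""
    img = np.full((8, 8), background) + (truth if on_source else 0.0)
    return img + rng.normal(0.0, 0.1, (8, 8))

# Chopping: the secondary mirror flips between source (A) and blank sky (B)
# every few seconds, removing most of the bright background.
A, B = frame(True, 1.0e4), frame(False, 1.0e4)
# Nodding: the whole telescope moves and the chop pair is repeated (C, D)
# after the background has drifted, cancelling residual offsets.
C, D = frame(True, 1.0e4 + 2.0), frame(False, 1.0e4 + 2.0)

signal = ((A - B) + (C - D)) / 2.0     # background-free average
```

Real chop-nod patterns place the source in different chop beams so that the double difference also cancels optical offsets between beams; the simplified average above captures only the core background-subtraction idea.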
Surprise Discovery of Highly Developed Structure in the Young Universe
NASA Astrophysics Data System (ADS)
2005-03-01
ESO-VLT and ESA XMM-Newton Together Discover Earliest Massive Cluster of Galaxies Known Summary Combining observations with ESO's Very Large Telescope and ESA's XMM-Newton X-ray observatory, astronomers have discovered the most distant, very massive structure in the Universe known so far. It is a remote cluster of galaxies that is found to weigh as much as several thousand galaxies like our own Milky Way and is located no less than 9,000 million light-years away. The VLT images reveal that it contains reddish and elliptical, i.e. old, galaxies. Interestingly, the cluster itself appears to be in a very advanced state of development. It must therefore have formed when the Universe was less than one third of its present age. The discovery of such a complex and mature structure so early in the history of the Universe is highly surprising. Indeed, until recently it would even have been deemed impossible. PR Photo 05a/05: Discovery X-Ray Image of the Distant Cluster (ESA XMM-Newton) PR Photo 05b/05: False Colour Image of XMMU J2235.3-2557 (FORS/VLT and ESA XMM-Newton) Serendipitous discovery ESO PR Photo 05a/05 Discovery X-Ray Image of the Distant Cluster (ESA XMM-Newton) [Preview - JPEG: 400 x 421 pix - 106k] [Normal - JPEG: 800 x 842 pix - 843k] [Full Res - JPEG: 2149 x 2262 pix - 2.5M] Caption: ESO PR Photo 05a/05 is a reproduction of the XMM-Newton observations of the nearby active galaxy NGC7314 (bright object in the centre) from which the newly found distant cluster (white box) was serendipitously identified. The circular field-of-view of XMM-Newton is half-a-degree in diameter, or about the same angular size as the Full Moon. The inset shows the diffuse X-ray emission from the distant cluster XMMU J2235.3-2557. Clusters of galaxies are gigantic structures containing hundreds to thousands of galaxies.
They are the fundamental building blocks of the Universe and their study thus provides unique information about the underlying architecture of the Universe as a whole. About one-fifth of the optically invisible mass of a cluster is in the form of a diffuse, very hot gas with a temperature of several tens of millions of degrees. This gas emits powerful X-ray radiation, and clusters of galaxies are therefore best discovered by means of X-ray satellites (cf. ESO PR 18/03 and 15/04). It is for this reason that a team of astronomers [1] has initiated a search for distant, X-ray luminous clusters "lying dormant" in archive data from ESA's XMM-Newton satellite observatory. Studying XMM-Newton observations targeted at the nearby active galaxy NGC 7314, the astronomers found evidence of a galaxy cluster in the background, far out in space. This source, now named XMMU J2235.3-2557, appeared extended and very faint: no more than 280 X-ray photons were detected over the entire 12-hour-long observations. A Mature Cluster at Redshift 1.4 ESO PR Photo 05b/05 False Colour Image of XMMU J2235.3-2557 (FORS/VLT and ESA XMM-Newton) [Preview - JPEG: 400 x 455 pix - 50k] [Normal - JPEG: 800 x 909 pix - 564k] [Full Res - JPEG: 1599 x 1816 pix - 1.5M] Caption: ESO PR Photo 05b/05 is a false colour image of the XMMU J2235.3-2557 cluster of galaxies, overlaid with the X-ray intensity contours derived from the ESA XMM-Newton data. The red channel is a VLT-ISAAC image (exposure time: 1 hour) obtained in the near-infrared Ks-band (at wavelength 2.2 microns); the green channel is a VLT-FORS2 z-band image (910 nm; 480 sec); the blue channel is a VLT-FORS2 R-band image (657 nm; 1140 sec). The VLT reveals 12 reddish galaxies, of elliptical types, as members of the cluster. Knowing where to look, the astronomers then used the European Southern Observatory's Very Large Telescope (VLT) at Paranal (Chile) to obtain images in the visible wavelength region.
They confirmed the nature of this cluster, and it was possible to identify 12 comparatively bright member galaxies on the images (see ESO PR Photo 05b/05). The galaxies appear reddish and are of the elliptical type. They are full of old, red stars. All of this indicates that these galaxies are already several thousand million years old. Moreover, the cluster itself has a largely spherical shape, also a sign that it is already a very mature structure. In order to determine the distance of the cluster - and hence its age - Christopher Mullis, former European Southern Observatory post-doctoral fellow and now at the University of Michigan in the USA, and his colleagues again used the VLT, now in spectroscopic mode. By means of one of the FORS multi-mode instruments, the astronomers zoomed in on the individual galaxies in the field, taking spectral measurements that reveal their overall characteristics, in particular their redshift and hence distance [2]. The FORS instruments are among the most efficient and versatile available anywhere for this delicate work, obtaining on average quite detailed spectra of 30 or more galaxies at a time. The VLT data measured the redshift of this cluster as 1.4, indicating a distance of 9,000 million light-years, 500 million light-years farther out than the previous record-holding cluster. This means that the present cluster must have formed when the Universe was less than one third of its present age. The Universe is now believed to be 13,700 million years old. "We are quite surprised to see that a fully-fledged structure like this could exist at such an early epoch," says Christopher Mullis. "We see an entire network of stars and galaxies in place, just a few thousand million years after the Big Bang." "We seem to have underestimated how quickly the early Universe matured into its present-day state," adds Piero Rosati of ESO, another member of the team. "The Universe did grow up fast!"
Towards a Larger Sample This discovery was relatively easy to make, once the space-based XMM and the ground-based VLT observations were combined. As an impressive result of the present pilot programme that is specifically focused on the identification of very distant galaxy clusters, it makes the astronomers very optimistic about their future searches. The team is now carrying out detailed follow-up observations from both ground- and space-based observatories. They hope to find many more exceedingly distant clusters, which would then allow them to test competing theories of the formation and evolution of such large structures. "This discovery encourages us to search for additional distant clusters by means of this very efficient technique," says Axel Schwope, team leader at the Astrophysical Institute Potsdam (Germany) and responsible for the source detection from the XMM-Newton archival data. Hans Böhringer of the Max Planck Institute for Extraterrestrial Physics (MPE) in Garching, another member of the team, adds: "Our result also confirms the great promise inherent in other facilities to come, such as APEX (Atacama Pathfinder Experiment) at Chajnantor, the site of the future Atacama Large Millimeter Array. These intense searches will ultimately place strong constraints on some of the most fundamental properties of the Universe." More information This finding is presented today by Christopher Mullis at a scientific meeting in Kona, Hawaii, entitled "The Future of Cosmology with Clusters of Galaxies". It will also soon appear in The Astrophysical Journal ("Discovery of an X-ray Luminous Galaxy Cluster at z=1.4", by C. R. Mullis et al.). More images and information are available on Christopher Mullis' dedicated web page at http://www.astro.lsa.umich.edu/~cmullis/research/xmmuj2235/.
A German version of the press release is issued by the Max Planck Society and is available at http://www.mpg.de/bilderBerichteDokumente/dokumentation/pressemitteilungen/2005/pressemitteilung20050228/presselogin/ .
An FPGA-Based People Detection System
NASA Astrophysics Data System (ADS)
Nair, Vinod; Laprise, Pierre-Olivier; Clark, James J.
2005-12-01
This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about [InlineEquation not available: see fulltext.] frames per second on a Virtex-II 2V1000 using a MicroBlaze processor running at [InlineEquation not available: see fulltext.], communicating with dedicated hardware over FSL links.
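The abstract does not spell out the partial inverse DCT. A common form of the idea, sketched below as an assumption rather than the authors' exact scheme, is to reconstruct only the DC coefficient of each 8x8 JPEG block, which yields a 1/8-scale image with essentially no arithmetic per pixel:

```python
import numpy as np

def idct2(block):
    """Full 8x8 orthonormal inverse DCT, shown for comparison."""
    n = np.arange(8)
    C = np.cos((2 * n[:, None] + 1) * n[None, :] * np.pi / 16.0)
    a = np.full(8, np.sqrt(2.0 / 8.0))
    a[0] = np.sqrt(1.0 / 8.0)
    M = C * a[None, :]                 # pixels-by-coefficients basis matrix
    return M @ block @ M.T

def partial_idct_dc(coeffs):
    """Partial inverse DCT keeping only the DC term of each 8x8 block:
    every block collapses to its mean, giving a 1/8-scale image almost
    for free (for an orthonormal 2D DCT, DC = 8 * block mean)."""
    return coeffs[::8, ::8] / 8.0
```

A uniform block of pixel value 100 has a single nonzero coefficient, DC = 800; both paths recover 100, but the partial path touches one coefficient instead of all 64, which is why it needs far less chip area than a full IDCT core.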
Lim, Eugene Y; Lee, Chiang; Cai, Weidong; Feng, Dagan; Fulham, Michael
2007-01-01
Medical practice is characterized by a high degree of heterogeneity in collaborative and cooperative patient care. Fast and effective communication between medical practitioners can improve patient care. In medical imaging, the fast delivery of medical reports to referring medical practitioners is a major component of cooperative patient care. Recently, mobile phones have been actively deployed in telemedicine applications. The mobile phone is an ideal medium for achieving faster delivery of reports to referring medical practitioners. In this study, we developed an electronic medical report delivery system from a medical imaging department to the mobile phones of referring doctors. The system extracts a text summary of the medical report and a screen capture of the diagnostic medical image in JPEG format, which are transmitted to 3G GSM mobile phones.
Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G
2000-08-01
Original software FISHMet has been developed and tested to improve the efficiency of diagnosis of hereditary diseases caused by chromosome aberrations and for chromosome mapping by the fluorescence in situ hybridization (FISH) method. The program runs under Windows 95 and allows the creation and analysis of pseudocolor chromosome images and hybridization signals. It supports computer analysis and editing of the results of pseudocolor hybridization in situ, including successive superposition of the initial black-and-white images acquired through fluorescent filters (blue, green, and red), and editing of each image individually or of the combined pseudocolor image in BMP, TIFF, and JPEG formats. Components of the image analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescence microscopes; COHU 4910 and Sanyo VCB-3512P CCD cameras; Miro-Video, Scion LG-3, and VG-5 image capture boards; and Pentium 100 and Pentium 200 computers) and specialized software for image capture and visualization (Scion Image PC and Video-Cup) were used with good results in the study.
Compression strategies for LiDAR waveform cube
NASA Astrophysics Data System (ADS)
Jóźków, Grzegorz; Toth, Charles; Quirk, Mihaela; Grejner-Brzezinska, Dorota
2015-01-01
Full-waveform LiDAR data (FWD) provide a wealth of information about the shape and materials of the surveyed areas. Unlike discrete-return data that retain only a few strong returns, FWD generally keep the whole signal, at all times, regardless of the signal intensity. Hence, FWD will have an increasingly well-deserved role in mapping and beyond, in the much desired classification in the raw data format. Full-waveform systems currently perform only the recording of the waveform data at the acquisition stage; the return extraction is mostly deferred to post-processing. Although the full waveform preserves most of the details of the real data, it presents a serious practical challenge for wide use: much larger datasets compared to those from the classical discrete-return systems. Beyond the need for more storage space, acquisition speed may also limit the pulse rate, since most systems cannot store data fast enough, which reduces the perceived system performance. This work introduces a waveform cube model to compress waveforms in selected subsets of the cube, aimed at achieving decreased storage while maintaining the maximum pulse rate of FWD systems. In our experiments, the waveform cube is compressed using classical methods for 2D imagery, which are then tested to assess the feasibility of the proposed solution. The spatial distribution of airborne waveform data is irregular; however, the manner of FWD acquisition allows the organization of the waveforms in a regular 3D structure similar to familiar multi-component imagery, such as hyperspectral cubes or 3D volumetric tomography scans. This study presents a performance analysis of several lossy compression methods applied to the LiDAR waveform cube, including JPEG-1, JPEG-2000, and PCA-based techniques. A wide range of tests performed on real airborne datasets has demonstrated the benefits of the JPEG-2000 Standard, where high compression rates incur fairly small data degradation.
In addition, a JPEG-2000 Standard-compliant compression implementation can be fast and thus usable in real-time systems, as compressed data sequences can be formed progressively during waveform data collection. We conclude from our experiments that 2D image compression strategies are feasible and efficient approaches, and thus might be applied on board during FWD acquisition.
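Among the methods the study compares, the PCA-based family is the easiest to sketch: treat each waveform as a vector, keep only the leading principal components, and store the component basis plus per-waveform scores instead of the full cube. A minimal numpy sketch on a synthetic cube (the pulse model and dimensions are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic waveform cube: rows x cols of waveforms, each n samples long,
# flattened to (rows*cols, n). Real FWD returns are smooth pulse trains;
# a few shared Gaussian pulse shapes plus noise mimic that.
rows, cols, n = 32, 32, 120
t = np.arange(n)
basis = np.stack([np.exp(-0.5 * ((t - c) / 6.0) ** 2) for c in (30, 60, 90)])
weights = rng.random((rows * cols, 3))
cube = weights @ basis + 0.01 * rng.standard_normal((rows * cols, n))

def pca_compress(waveforms, k):
    """Keep only the k strongest principal components along the waveform axis."""
    mean = waveforms.mean(axis=0)
    centered = waveforms - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                  # k x n basis waveforms
    scores = centered @ components.T     # (rows*cols) x k coefficients
    return mean, components, scores

def pca_reconstruct(mean, components, scores):
    return scores @ components + mean

mean, comps, scores = pca_compress(cube, k=3)
recon = pca_reconstruct(mean, comps, scores)

# Stored floats (mean + basis + scores) versus the full cube
stored = mean.size + comps.size + scores.size
ratio = cube.size / stored
rmse = np.sqrt(np.mean((cube - recon) ** 2))
```

For this synthetic cube the stored representation is over twenty times smaller while the reconstruction error stays at the noise floor; real waveform cubes need more components, which is where the JPEG-2000 comparison in the paper becomes relevant.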
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first class of errors is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the third is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets (POCS). Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering introduces undesired side effects that offset its advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
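The two alternating projections can be sketched with numpy, using a single-level Haar transform standing in for the full JPEG2000 wavelet and a fixed 3x3 box filter in place of the paper's adaptive, edge-map-driven mask; everything below is an illustrative simplification of the scheme, not the authors' implementation:

```python
import numpy as np

def haar2d(x):
    """Single-level orthonormal 2-D Haar transform -> (LL, LH, HL, HH)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    n = 2 * ll.shape[0]
    x = np.empty((n, n))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def box_lowpass(img):
    """3x3 box filter with wrap-around borders: the 'smooth images' projection."""
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += np.roll(np.roll(img, di, axis=0), dj, axis=1)
    return out / 9.0

# A smooth test image and its wavelet coefficients
n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 200.0)
ll, lh, hl, hh = haar2d(img)

# Simulate a damaged high-frequency code-block: a patch of HH becomes garbage
rng = np.random.default_rng(1)
mask = np.zeros_like(hh, dtype=bool)
mask[8:24, 8:24] = True
hh_bad = hh.copy()
hh_bad[mask] = rng.standard_normal(int(mask.sum()))

def pocs_conceal(ll, lh, hl, hh_damaged, mask, iters=20):
    """Alternate the two projections: spatial low-pass filtering, then
    restoration of every coefficient known to be uncorrupted."""
    hh_est = hh_damaged.copy()
    for _ in range(iters):
        smooth = box_lowpass(ihaar2d(ll, lh, hl, hh_est))
        _, _, _, hh_smooth = haar2d(smooth)
        hh_est[mask] = hh_smooth[mask]  # only damaged coefficients are replaced
    return ihaar2d(ll, lh, hl, hh_est)

restored = pocs_conceal(ll, lh, hl, hh_bad, mask)
mse_bad = np.mean((ihaar2d(ll, lh, hl, hh_bad) - img) ** 2)
mse_fix = np.mean((restored - img) ** 2)
```

Because the uncorrupted coefficients are re-imposed at every step, the low-pass projection only ever fills in the damaged region, which is exactly what keeps the iteration inside the intersection of the two convex sets.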
Sliter, Ray W.; Triezenberg, Peter J.; Hart, Patrick E.; Watt, Janet T.; Johnson, Samuel Y.; Scheirer, Daniel S.
2009-01-01
The U.S. Geological Survey (USGS) collected high-resolution shallow seismic-reflection and marine magnetic data in June 2008 in the offshore areas between the towns of Cayucos and Pismo Beach, Calif., from the nearshore (~6-m depth) to just west of the Hosgri Fault Zone (~200-m depth). These data are in support of the California State Waters Mapping Program and the Cooperative Research and Development Agreement (CRADA) between the Pacific Gas & Electric Co. and the U.S. Geological Survey. Seismic-reflection and marine magnetic data were acquired aboard the R/V Parke Snavely, using a SIG 2Mille minisparker seismic source and a Geometrics G882 cesium-vapor marine magnetometer. More than 550 km of seismic and marine magnetic data was collected simultaneously along shore-perpendicular transects spaced 800 m apart, with an additional 220 km of marine magnetometer data collected across the Hosgri Fault Zone, resulting in spacing locally as small as 400 m. This report includes maps of the seismic-survey sections, linked to Google Earth software, and digital data files showing images of each transect in SEG-Y, JPEG, and TIFF formats, as well as preliminary gridded marine-magnetic-anomaly and residual-magnetic-anomaly (shallow magnetic source) maps.
Southern Fireworks above ESO Telescopes
NASA Astrophysics Data System (ADS)
1999-05-01
New Insights from Observations of Mysterious Gamma-Ray Burst International teams of astronomers are now busy working on new and exciting data obtained during the last week with telescopes at the European Southern Observatory (ESO). Their object of study is the remnant of a mysterious cosmic explosion far out in space, first detected as a gigantic outburst of gamma rays on May 10. Gamma-Ray Bursters (GRBs) are brief flashes of very energetic radiation - they represent by far the most powerful type of explosion known in the Universe and their afterglow in optical light can be 10 million times brighter than the brightest supernovae [1]. The May 10 event ranks among the brightest one hundred of the over 2500 GRBs detected in the last decade. The new observations include detailed images and spectra from the VLT 8.2-m ANTU (UT1) telescope at Paranal, obtained at short notice during a special Target of Opportunity programme. This happened just over one month after that powerful telescope entered into regular service and demonstrates its great potential for exciting science. In particular, in an observational first, the VLT measured linear polarization of the light from the optical counterpart, indicating for the first time that synchrotron radiation is involved. It also determined a staggering distance of more than 7,000 million light-years to this GRB. The astronomers are optimistic that the extensive observations will help them to better understand the true nature of such a dramatic event and thus bring them nearer to the solution of one of the greatest riddles of modern astrophysics. A prime example of international collaboration The present story is about important new results at the front-line of current research. At the same time, it is also a fine illustration of a successful collaboration among several international teams of astronomers and the very effective way modern science functions.
It began on May 10, at 08:49 hrs Universal Time (UT), when the Burst And Transient Source Experiment (BATSE) onboard NASA's Compton Gamma-Ray Observatory (CGRO), high in orbit around the Earth, suddenly registered an intense burst of gamma-ray radiation from a direction less than 10° from the celestial south pole. Independently, the Gamma-Ray Burst Monitor (GRBM) on board the Italian-Dutch BeppoSAX satellite also detected the event (see GCN GRB Observation Report 304 [2]). Following the BATSE alert, the BeppoSAX Wide-Field Cameras (WFC) quickly localized the sky position of the burst within a circle of 3 arcmin radius in the southern constellation Chamaeleon. It was also detected by other satellites, including the ESA/NASA Ulysses spacecraft, which has been in a wide orbit around the Sun for some years. The event was designated GRB 990510 and the measured position was immediately distributed by BeppoSAX Mission Scientist Luigi Piro to a network of astronomers. It was also published on Circular No. 7160 of the International Astronomical Union (IAU). From Amsterdam (The Netherlands), Paul Vreeswijk, Titus Galama, and Evert Rol of the Amsterdam/Huntsville GRB follow-up team (led by Jan van Paradijs) immediately contacted astronomers at the 1-meter telescope of the South African Astronomical Observatory (SAAO) (Sutherland, South Africa) of the PLANET network microlensing team, an international network led by Penny Sackett in Groningen (The Netherlands). There, John Menzies of SAAO and Karen Pollard (University of Canterbury, New Zealand) were about to begin the last of their 14 nights of observations, part of a continuous world-wide monitoring program looking for evidence of planets around other stars. Other PLANET sites in Australia and Tasmania where it was still nighttime were unfortunately clouded out (some observations were in fact made that night at the Mount Stromlo observatory in Australia, but they were only announced one day later).
As soon as possible - immediately after sundown and less than 9 hours after the initial burst was recorded - the PLANET observers turned their telescope and quickly obtained a series of CCD images in visual light of the sky region where the gamma-ray burst was detected, then shipped them off electronically to their Dutch colleagues [3]. Comparing the new photos with earlier ones in the digital sky archive, Vreeswijk, Galama and Rol almost immediately discovered a new, relatively bright visual source in the region of the gamma-ray burst, which they proposed as the optical counterpart of the burst, cf. their dedicated webpage at http://www.astro.uva.nl/~titus/grb990510/. The team then placed a message on the international Gamma-Ray Burster web-noteboard (GCN Circular 310), thereby alerting their colleagues all over the world. One hour later, the narrow-field instruments on BeppoSAX identified a new X-ray source at the same location (GCN Circular 311), thus confirming the optical identification. All in all, a remarkable synergy of human and satellite resources! Observations of GRB 990510 at ESO Vreeswijk, Galama and Rol, in collaboration with Nicola Masetti, Eliana Palazzi and Elena Pian of the BeppoSAX GRB optical follow-up team (led by Filippo Frontera) and the Huntsville optical follow-up team (led by Chryssa Kouveliotou), also contacted the European Southern Observatory (ESO). Astronomers at this Organization's observatories in Chile were quick to exploit this opportunity and crucial data were soon obtained with several of the main telescopes at La Silla and Paranal, less than 14 hours after the first detection of this event by the satellite.
ESO PR Photo 22a/99 [Preview - JPEG: 211 x 400 pix - 72k] [Normal - JPEG: 422 x 800 pix - 212k] [High-Res - JPEG: 1582 x 3000 pix - 2.6M] ESO PR Photo 22b/99 [Preview - JPEG: 400 x 437 pix - 297k] [Normal - JPEG: 800 x 873 pix - 1.1M] [High-Res - JPEG: 2300 x 2509 pix - 5.9M] Caption to PR Photo 22a/99: This wide-field photo was obtained with the Wide-Field Imager (WFI) at the MPG/ESO 2.2-m telescope at La Silla on May 11, 1999, at 08:42 UT, under inferior observing conditions (seeing = 1.9 arcsec). The exposure time was 450 sec in a B(lue) filter. The optical image of the afterglow of GRB 990510 is indicated with an arrow in the upper part of the field that measures about 8 x 16 arcmin^2. The original scale is 0.24 arcsec/pix and there are 2k x 4k pixels in the original frame. North is up and East is left. Caption to PR Photo 22b/99: This is a (false-)colour composite of the area around the optical image of the afterglow of GRB 990510, based on three near-infrared exposures with the SOFI multi-mode instrument at the 3.58-m ESO New Technology Telescope (NTT) at La Silla, obtained on May 10, 1999, between 23:15 and 23:45 UT. The exposure times were 10 min each in the J- (1.2 µm; here rendered in blue), H- (1.6 µm; green) and K-bands (2.2 µm; red); the image quality is excellent (0.6 arcsec). The field measures about 5 x 5 arcmin^2; the original pixel size is 0.29 arcsec. North is up and East is left.
ESO PR Photo 22c/99 [Preview - JPEG: 400 x 235 pix - 81k] [Normal - JPEG: 800 x 469 pix - 244k] [High-Res - JPEG: 2732 x 1603 pix - 2.6M] ESO PR Photo 22d/99 [Preview - JPEG: 400 x 441 pix - 154k] [Normal - JPEG: 800 x 887 pix - 561k] [High-Res - JPEG: 2300 x 2537 pix - 2.3M] Caption to PR Photo 22c/99: To the left is a reproduction of a short (30 sec) centering exposure in the V-band (green-yellow light), obtained with VLT ANTU and the multi-mode FORS1 instrument on May 11, 1999, at 03:48 UT under mediocre observing conditions (image quality 1.0 arcsec). The optical image of the afterglow of GRB 990510 is easily seen in the box, by comparison with an exposure of the same sky field before the explosion, made with the ESO Schmidt Telescope in 1986 (right). The exposure time was 120 min on IIIa-F emulsion behind a R(ed) filter. The field shown measures about 6.2 x 6.2 arcmin^2. North is up and East is left. Caption to PR Photo 22d/99: Enlargement from the 30 sec V-exposure by the VLT, shown in Photo 22c/99. The field is about 1.9 x 1.9 arcmin^2. North is up and East is left. The data from Chile were sent to Europe where, by quick comparison of images from the Wide-Field Imager (WFI) at the MPG/ESO 2.2-m telescope at La Silla with those from SAAO, the Dutch and Italian astronomers found that the brightness of the suspected optical counterpart was fading rapidly; this was a clear sign that the identification was correct (GCN Circular 313). With the precise sky position of GRB 990510 now available, the ESO observers at the VLT were informed and, setting other programmes aside under the Target of Opportunity scheme, were then able to obtain polarimetric data as well as a very detailed spectrum of the optical counterpart.
Comprehensive early observations of this object were also made at La Silla with the ESO 3.6-m telescope (CCD images in the UBVRI-bands from the ultraviolet to the near-infrared part of the spectrum) and the ESO 3.58-m New Technology Telescope (with the SOFI multimode instrument in the infrared JHK-bands). A series of optical images in the BVRI-bands was secured with the Danish 1.5-m telescope, documenting the rapid fading of the object. Observations at longer wavelengths were made with the 15-m Swedish-ESO Submillimetre Telescope (SEST). All of the involved astronomers concur that a fantastic amount of observations has been obtained. They are still busy analyzing the data, and are confident that much will be learned from this particular burst. The VLT scores a first: Measurement of GRB polarization ESO PR Photo 22e/99 [Preview - JPEG: 400 x 434 pix - 92k] [Normal - JPEG: 800 x 867 pix - 228k] Caption to PR Photo 22e/99: Preliminary polarization measurement of the optical image of the afterglow of GRB 990510, as observed with the VLT 8.2-m ANTU telescope and the multi-mode FORS1 instrument. The abscissa represents the measurement angle; the ordinate the corresponding intensity. The sinusoidal curve shows the best fit to the data points (with error bars); the resulting degree of polarization is 1.7 ± 0.2 percent. A group of Italian astronomers led by Stefano Covino of the Observatory of Brera in Milan has observed for the first time polarization (some degree of alignment of the electric fields of emitted photons) from the optical afterglow of a gamma-ray burst; see their dedicated webpage at http://www.merate.mi.astro.it/~lazzati/GRB990510/. This yielded a polarization at a level of 1.7 ± 0.2 percent for the optical afterglow of GRB 990510, some 18 hours after the gamma-ray burst event; the magnitude was R = 19.1 at the time of this VLT observation.
Independently, the Dutch astronomers Vreeswijk, Galama and Rol measured polarization of the order of 2 percent with another data set from the VLT ANTU and FORS1 obtained during the same night. This important result was made possible by the very large light-gathering power of the 8.2-m VLT-ANTU mirror and the FORS1 imaging polarimeter. Albeit small, the detected degree of polarization is highly significant; it is also one of the most precise measurements of polarization ever made in an object as faint as this one. Most importantly, it provides the strongest evidence to date that the afterglow radiation of gamma-ray bursts is, at least in part, produced by the synchrotron process, i.e. by relativistic electrons spiralling in a magnetized region. This type of process is able to imprint some linear polarization on the produced radiation, if the magnetic field is not completely chaotic. The spectrum ESO PR Photo 22f/99 [Preview - JPEG: 400 x 485 pix - 112k] [Normal - JPEG: 800 x 969 pix - 288k] Caption to PR Photo 22f/99: A spectrum of the afterglow of GRB 990510, obtained with VLT ANTU and the multi-mode FORS1 instrument during the night of May 10-11, 1999. Some of the redshifted absorption lines are identified and the stronger bands from the terrestrial atmosphere are also indicated. A VLT spectrum with the multi-mode FORS1 instrument was obtained a little later and showed a number of absorption lines, e.g. from ionized aluminium and chromium and neutral magnesium. They do not arise in the optical counterpart itself - the gas there is so hot and turbulent that any spectral lines will be extremely broad and hence extremely difficult to identify - but from interstellar gas in a galaxy 'hosting' the GRB source, or from intergalactic clouds along the line of sight. It is possible to measure the distance to this intervening material from the redshift of the lines; astronomers Vreeswijk, Galama and Rol found z = 1.619 ± 0.002 [4].
This makes it possible to establish a lower limit for the distance of the explosion and also its total power. The numbers turn out to be truly enormous. The burst occurred at an epoch corresponding to about one half of the present age of the Universe (at a distance of about 7,000 million light-years [5]), and the total energy of the explosion in gamma-rays must be higher than 1.4 × 10^53 erg, assuming spherical emission. This energy corresponds to the entire optical energy emitted by the Milky Way in more than 30 years; yet the gamma-ray burst took less than 100 seconds. Since the optical afterglows of gamma-ray bursts are faint, and their flux decays quite rapidly in time, the combination of large telescopes and fast response through suitable observing programs is crucial and, as demonstrated here, ESO's VLT is ideally suited to this goal! The lightcurve Combining results from a multitude of telescopes has provided most useful information. Interestingly, a "break" was observed in the light curve (the way the light of the optical counterpart fades) of the afterglow. Some 1.5 - 2 days after the explosion, the brightness began to decrease more rapidly; this is well documented with the CCD images from the Danish 1.5-m telescope at La Silla and the corresponding diagrams are available on a dedicated webpage at http://www.astro.ku.dk/~jens/grb990510/ at the Copenhagen University Observatory. Complete, regularly updated lightcurves with all published measurements, also from other observatories, may be found at another webpage in Milan at http://www.merate.mi.astro.it/~gabriele/990510/. Such a break may occur if the explosion emits radiation in a beam which is pointed towards the Earth. Such beams are predicted by some models for the production of gamma-ray bursts. They are also favoured by many astronomers, because they can overcome the fundamental problem that gamma-ray bursts simply produce too much energy.
If the energy is not emitted equally in all directions ("isotropically"), but rather in a preferred one along a beam, less energy is needed to produce the observed phenomenon. Such a break has been observed before, but this time it occurred at a very favourable moment, when the source was still relatively bright, so that high-quality spectroscopic and multi-colour information could be obtained with the ESO telescopes. Together, these observations may provide an answer to the question of whether beams exist in gamma-ray bursts and thus further help us to understand the as yet unknown cause of these mysterious explosions. Latest News ESO PR Photo 22g/99 [Normal - JPEG: 453 x 585 pix - 304k] Caption to PR Photo 22g/99: V(isual) image of the sky field around GRB 990510 (here denoted "OT"), as obtained with the VLT ANTU telescope and FORS1 on May 18 UT during a 20 min exposure in 0.9 arcsec seeing conditions. The reproduction is in false colours to better show differences in intensity. North is up and east is left. Further photometric and spectroscopic observations with the ESO VLT, performed by Klaus Beuermann, Frederic Hessman and Klaus Reinsch of the Göttingen group of the FORS instrument team (Germany), have revealed the character of some of the objects that are seen close to the image of the afterglow of GRB 990510 (also referred to as the "Optical Transient" - OT). Two objects to the North are cool foreground stars of spectral types dM0 and about dM3, respectively; they are located in our Milky Way Galaxy. The object just to the South of the OT is probably also a star. A V(isual)-band image (PR Photo 22g/99) taken during the night between May 17 and 18 with the VLT/ANTU telescope and FORS1 now shows the OT at magnitude V = 24.5, with still no evidence for the host galaxy that is expected to appear when the afterglow has faded sufficiently.
Outlook The great distances (high redshifts) of Gamma-Ray Bursts, plus the fact that a 9th magnitude optical flash was seen when another GRB exploded on January 23 this year, have attracted the attention of astronomers outside the GRB field. In fact, GRBs may soon become a very powerful tool to probe the early universe by guiding us to regions of very early star formation and the (proto-)galaxies and (proto-)clusters of which they are part. They will also allow the study of the chemical composition of absorbing clouds at very large distances. At the end of this year, the NASA satellite HETE-II will be launched, which is expected to provide about 50 GRB alerts per year and, most importantly, accurate localisations in the sky that will allow very fast follow-up observations, while the optical counterparts are still quite bright. It will then be possible to obtain more spectra, also of extremely distant bursts, and many new distance determinations can be made, revealing the distribution of intrinsic brightness of GRBs (the "luminosity function"). Other types of observations (e.g. polarimetry, as above) will also profit, leading to a progressive refinement of the available data. Thus there is good hope that astronomers will soon come closer to identifying the progenitors of these enormous explosions and to understanding what is really going on. In this process, the huge light-collecting power of the VLT and the many other facilities at the ESO observatories will undoubtedly play an important role. Notes [1] Gamma-Ray Bursts are brief flashes of high-energy radiation. Satellites in orbit around the Earth and spacecraft in interplanetary orbits have detected several thousand such events since they were first discovered in the late 1960s. Earlier investigations established that they were so evenly distributed in the sky that they must be very distant (and hence very powerful) outbursts of some kind.
Only in 1997 did it become possible to observe the fading "afterglow" of one of these explosions in visible light, thanks to accurate positions available from the BeppoSAX satellite. Soon thereafter, another optical afterglow was detected; it was located in a faint galaxy whose distance could be measured. In 1998, a gamma-ray burst was detected in a galaxy over 8,300 million light-years away. Even the most exotic ideas proposed for these explosions, e.g. supergiant stars collapsing to black holes, black holes merging with neutron stars or other black holes, and other weird and wonderful notions, have trouble accounting for explosions with the power of 10,000 million million suns. [2] The various reports issued by astronomers working on this and other gamma-ray burst events are available as GCN Circulars on the GRB Coordinates Network web-noteboard. [3] See also the Press Release issued by SAAO on this occasion. [4] In astronomy, the redshift (z) denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy or intergalactic cloud gives a direct estimate of the universal expansion (i.e. the "recession velocity"). The detailed relation between redshift and distance depends on such quantities as the Hubble Constant, the average density of the universe, and the cosmological constant. For a standard cosmological model, redshift z = 1.6 corresponds to a distance of about 7,000 million light-years. [5] Assuming a Hubble Constant H0 = 70 km/s/Mpc, mean density Omega0 = 0.3 and a cosmological constant Lambda = 0. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
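The release's comparison of the burst energy with the Milky Way's optical output can be checked with a one-line estimate; the Galactic luminosity used here is an assumed round value (roughly 3.5 × 10^10 solar luminosities), not a figure from the text:

```python
SEC_PER_YEAR = 3.156e7      # seconds in a year
E_BURST = 1.4e53            # erg: quoted lower limit on the gamma-ray energy
L_MW_OPTICAL = 1.4e44       # erg/s: assumed Milky Way optical luminosity
years = E_BURST / (L_MW_OPTICAL * SEC_PER_YEAR)
# roughly 30 years of the Galaxy's optical output, released in under 100 s
```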
Enabling Near Real-Time Remote Search for Fast Transient Events with Lossy Data Compression
NASA Astrophysics Data System (ADS)
Vohl, Dany; Pritchard, Tyler; Andreoni, Igor; Cooke, Jeffrey; Meade, Bernard
2017-09-01
We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport data format to enable rapid remote searches for fast transient events as part of the Deeper Wider Faster programme. The Deeper Wider Faster programme uses 20 telescopes from radio to gamma rays to perform simultaneous and rapid-response follow-up searches for fast transient events on millisecond-to-hours timescales. The programme's search demands come with a set of constraints that is becoming common amongst large collaborations. Here, we focus on the rapid optical data component of the Deeper Wider Faster programme, led by the Dark Energy Camera at Cerro Tololo Inter-American Observatory. Each Dark Energy Camera image comprises 70 charge-coupled devices (CCDs) in total and is saved as a 1.2-gigabyte FITS file. Near real-time data processing and fast transient candidate identification (within minutes, to allow rapid follow-up triggers on other telescopes) require computational power exceeding what is currently available on-site at Cerro Tololo Inter-American Observatory. In this context, data files need to be transmitted rapidly to a remote location for supercomputing post-processing, source finding, visualisation and analysis. This step in the search process poses a major bottleneck, and reducing the data size helps accommodate faster data transmission. To maximise our gain in transfer time and still achieve our science goals, we opt for lossy data compression, keeping in mind that the raw data are archived and can be evaluated at a later time. We evaluate how lossy JPEG2000 compression affects the process of finding transients, and find only a negligible effect for compression ratios up to 25:1. We also find a linear relation between compression ratio and the mean estimated data transmission speed-up factor.
Adding highly customised compression and decompression steps to the science pipeline considerably reduces the transmission time, validating its introduction to the Deeper Wider Faster programme science pipeline and enabling science that was otherwise too difficult with current technology.
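The linear relation between compression ratio and transmission speed-up follows directly from a simple transfer-time model whenever fixed per-file overheads are small. A minimal sketch; the link rate is an assumed placeholder, not a figure from the paper:

```python
def transfer_time(size_gb, link_mbps, overhead_s=0.0):
    """Seconds to transmit `size_gb` gigabytes over a `link_mbps` link."""
    return size_gb * 8000.0 / link_mbps + overhead_s

RAW_GB = 1.2          # one Dark Energy Camera exposure, per the abstract
LINK_MBPS = 100.0     # assumed long-haul link rate (illustrative)

speedups = []
for ratio in (5, 10, 25):    # 25:1 left transient finding essentially unaffected
    t_raw = transfer_time(RAW_GB, LINK_MBPS)
    t_cmp = transfer_time(RAW_GB / ratio, LINK_MBPS)
    speedups.append(t_raw / t_cmp)
```

With zero overhead the speed-up equals the compression ratio exactly; non-zero per-file overheads (handshakes, pipeline latency) flatten the line, which is why the paper reports a mean estimated speed-up factor.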
NASA Astrophysics Data System (ADS)
2000-01-01
VLT MELIPAL Achieves Successful "First Light" in Record Time This was a night to remember at the ESO Paranal Observatory! For the first time, three 8.2-m VLT telescopes were observing in parallel, with a combined mirror surface of nearly 160 m^2. In the evening of January 26, the third 8.2-m Unit Telescope, MELIPAL ("The Southern Cross" in the Mapuche language), was pointed to the sky for the first time and successfully achieved "First Light". During this night, a number of astronomical exposures were made that served to evaluate provisionally the performance of the new telescope. The ESO staff expressed great satisfaction with MELIPAL and there were broad smiles all over the mountain. The first images ESO PR Photo 04a/00 [Preview - JPEG: 400 x 352 pix - 95k] [Normal - JPEG: 800 x 688 pix - 110k] Caption: ESO PR Photo 04a/00 shows the "very first light" image for MELIPAL. It is that of a relatively bright star, as recorded by the Guide Probe at about 21:50 hrs local time on January 26, 2000. It is a 0.1 sec exposure, obtained after preliminary adjustment of the optics during a few iterations with the computer-controlled "active optics" system. The image quality is measured as 0.46 arcsec FWHM (Full-Width at Half Maximum). ESO PR Photo 04b/00 [Preview - JPEG: 400 x 429 pix - 39k] [Normal - JPEG: 885 x 949 pix - 766k] Caption: ESO PR Photo 04b/00 shows the central region of the Crab Nebula, the famous supernova remnant in the constellation Taurus (The Bull). It was obtained early in the night of "First Light" with the third 8.2-m VLT Unit Telescope, MELIPAL. It is a composite of several 30-sec exposures with the VLT Test Camera in three broad-band filters, B (here rendered as blue; most synchrotron emission), V (green) and R (red; mostly emission from hydrogen atoms). The Crab Pulsar is visible to the left; it is the lower of the two brightest stars near each other.
The image quality is about 0.9 arcsec, and is completely determined by the external seeing caused by the atmospheric turbulence above the telescope at the time of the observation. The coloured, vertical lines to the left are artifacts of a "bad column" of the CCD. The field measures about 1.3 x 1.3 arcmin^2. This image may be compared with that of the same area that was recently obtained with the FORS2 instrument at KUEYEN (PR Photo 40g/99). Following two days of preliminary adjustments after the installation of the secondary mirror, cf. ESO PR Photos 03a-n/00, MELIPAL was pointed to the sky above Paranal for the first time, soon after sunset in the evening of January 26. The light of a bright star was directed towards the Guide Probe camera, and the VLT Commissioning Team, headed by Dr. Jason Spyromilio, initiated the active optics procedure. This adjusts the 150 computer-controlled supports under the main 8.2-m Zerodur mirror as well as the position of the secondary 1.1-m Beryllium mirror. After just a few iterations, the optical quality of the recorded stellar image was measured as 0.46 arcsec (PR Photo 04a/00), a truly excellent value, especially at this stage! Immediately thereafter, at 22:16 hrs local time (i.e., at 01:16 hrs UT on January 27), the shutter of the VLT Test Camera at the Cassegrain focus was opened. A 1-min exposure was made through a R(ed) optical filter of a distant star cluster in the constellation Eridanus (The River). The light from its faint stars was recorded by the CCD at the focal plane and the resulting frame was read into the computer. Despite the comparatively short exposure time, myriads of stars were seen when this "first frame" was displayed on the computer screen. Moreover, the sizes of these images were found to be virtually identical to the 0.6 arcsec seeing measured simultaneously with a monitor telescope, outside the telescope enclosure. This confirmed that MELIPAL was in very good shape.
Nevertheless, these very first images were still slightly elongated and further optical adjustments and tests were therefore made to eliminate this unwanted effect. It is a tribute to the extensive experience and fine skills of the ESO staff that within only 1 hour, a 30 sec exposure of the central region of the Crab Nebula in Taurus with round images was obtained, cf. PR Photo 04b/00. The ESO Director General, Dr. Catherine Cesarsky, who assumed her function in September 1999, was present in the Control Room during these operations. She expressed great satisfaction with the excellent result and warmly congratulated the ESO staff on this achievement. She was particularly impressed with the apparent ease with which a completely new telescope of this size could be adjusted in such a short time. A part of her statement on this occasion was recorded on ESO PR Video Clip 02/00 that accompanies this Press Release. Three telescopes now in operation at Paranal At 02:30 UT on January 27, 2000, three VLT Unit Telescopes were observing in parallel, with measured seeing values of 0.6 arcsec (ANTU - "The Sun"), 0.7 arcsec (KUEYEN - "The Moon") and 0.7 arcsec (MELIPAL). MELIPAL has now joined ANTU and KUEYEN, which had "First Light" in May 1998 and March 1999, respectively. The fourth VLT Unit Telescope, YEPUN ("Sirius"), will become operational later this year. While normal scientific observations continue with ANTU, the UVES and FORS2 astronomical instruments are now being commissioned at KUEYEN, before this telescope will be handed over to the astronomers on April 1, 2000. The telescope commissioning period will now start for MELIPAL, after which its first instrument, VIMOS, will be installed later this year.
Impressions from the MELIPAL "First Light" event First Light for MELIPAL ESO PR Video Clip 02/00 "First Light for MELIPAL" (3350 frames/2:14 min) [MPEG Video+Audio; 160x120 pix; 3.1Mb] [MPEG Video+Audio; 320x240 pix; 9.4 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV-camera on January 27 at 03:00 UT, soon after the moment of "First Light" with the third 8.2-m VLT Unit Telescope ( MELIPAL ). The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the Clip. It begins with a statement by the Manager of the VLT Project, Dr. Massimo Tarenghi , as exposures of the Crab Nebula are obtained with the telescope and the raw frames are successively displayed on the monitor screen. In a following sequence, ESO's Director General, Dr. Catherine Cesarsky , briefly relates the moment of "First Light" for MELIPAL , as she experienced it at the telescope controls. ESO Press Photo 04c/00 ESO Press Photo 04c/00 [Preview; JPEG: 400 x 300; 44k] [Full size; JPEG: 1600 x 1200; 241k] The computer screen with the image of a bright star, as recorded by the Guide Probe in the early evening of January 26; see also PR Photo 04a/00. This image was used for the initial adjustments by means of the active optics system. (Digital Photo). ESO Press Photo 04d/00 ESO Press Photo 04d/00 [Preview; JPEG: 400 x 314; 49k] [Full size; JPEG: 1528 x 1200; 189k] ESO staff at the moment of "First Light" for MELIPAL in the evening of January 26. The photo was made in the wooden hut on the telescope observing floor from where the telescope was controlled during the first hours. (Digital Photo). ESO PR Photos may be reproduced, if credit is given to the European Southern Observatory. 
The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 01/00 with aerial sequences from Paranal (12 January 2000). Information is also available on the web about other ESO videos.
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, number-theoretic coding for iconic systems.
Morgan, Karen L.M.; Westphal, Karen A.
2014-01-01
The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On July 13, 2013, the USGS conducted an oblique aerial photographic survey from Breton Island, Louisiana, to the Alabama-Florida border, aboard a Cessna 172 flying at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The images provided here are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, name, date, and time each of the 1242 photographs was taken, along with links to each photograph. The photography is organized into segments, also referred to as contact sheets, each representing approximately 5 minutes of flight time. (Also see the Photos and Maps page). 
In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.
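The KML linkage described above can be sketched compactly. The snippet below is a minimal, hypothetical illustration of emitting one KML Placemark per geotagged photograph; the function name, tag layout, and URLs are assumptions for illustration, not the structure of the USGS files themselves. Note that KML orders coordinates as longitude,latitude:

```python
from xml.sax.saxutils import escape

def photo_placemark(name, lat, lon, thumb_url):
    # One KML Placemark per photo: thumbnail link in the description
    # balloon, aircraft GPS position as the Point geometry.
    # (Tag layout and URL scheme are illustrative assumptions.)
    return (
        "<Placemark>"
        f"<name>{escape(name)}</name>"
        f"<description><![CDATA[<a href='{thumb_url}'>"
        f"<img src='{thumb_url}' width='200'/></a>]]></description>"
        # KML coordinate order is lon,lat[,altitude] -- not lat,lon
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
    )

kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + photo_placemark("photo_0001.jpg", 29.87, -88.97,
                         "http://example.gov/thumbs/photo_0001.jpg")
       + "</Document></kml>")
```

A viewer such as Google Earth renders each Placemark as a clickable marker whose balloon shows the thumbnail, which matches the browsing workflow described above.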
Morgan, Karen L.M.; Westphal, Karen A.
2014-01-01
The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On August 8, 2012, the USGS conducted an oblique aerial photographic survey from Dauphin Island, Alabama, to Breton Island, Louisiana, aboard a Cessna 172 at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The images provided here are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, name, date, and time each of the 1241 photographs was taken, along with links to each photograph. The photography is organized into segments, also referred to as contact sheets, each representing approximately 5 minutes of flight time. (Also see the Photos and Maps page). 
In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.
Building a Steganography Program Including How to Load, Process, and Save JPEG and PNG Files in Java
ERIC Educational Resources Information Center
Courtney, Mary F.; Stix, Allen
2006-01-01
Instructors teaching beginning programming classes are often interested in exercises that involve processing photographs (i.e., files stored as .jpeg). They may wish to offer activities such as color inversion, the color manipulation effects achieved with pixel thresholding, or steganography, all of which Stevenson et al. [4] assert are sought by…
Montironi, R; Thompson, D; Scarpelli, M; Bartels, H G; Hamilton, P W; Da Silva, V D; Sakr, W A; Weyn, B; Van Daele, A; Bartels, P H
2002-01-01
Objective: To describe practical experiences in the sharing of very large digital data bases of histopathological imagery via the Internet, by investigators working in Europe, North America, and South America. Materials: Experiences derived from medium power (sampling density 2.4 pixels/μm) and high power (6 pixels/μm) imagery of prostatic tissues, skin shave biopsies, breast lesions, endometrial sections, and colonic lesions. Most of the data included in this paper were from prostate. In particular, 1168 histological images of normal prostate, high grade prostatic intraepithelial neoplasia (PIN), and prostate cancer (PCa) were recorded, archived in an image format developed at the Optical Sciences Center (OSC), University of Arizona, and transmitted to Ancona, Italy, as JPEG (joint photographic experts group) files. Images were downloaded for review using the Internet application FTP (file transfer protocol). The images were then sent from Ancona to other laboratories for additional histopathological review and quantitative analyses. They were viewed using Adobe Photoshop, Paint Shop Pro, and Imaging for Windows. For karyometric analysis full resolution imagery was used, whereas histometric analyses were carried out on JPEG imagery also. Results: The three applications of the telecommunication system were remote histopathological assessment, remote data acquisition, and selection of material. Typical data volumes for each project ranged from 120 megabytes to one gigabyte, and transmission times were usually less than one hour. There were only negligible transmission errors, and no problem in efficient communication, although real time communication was an exception, because of the time zone differences. As far as the remote histopathological assessment of the prostate was concerned, agreement between the pathologist's electronic diagnosis and the diagnostic label applied to the images by the recording scientist was present in 96.6% of instances. 
When these images were forwarded to two pathologists, the level of concordance with the reviewing pathologist who originally downloaded the files from Tucson was as high as 97.2% and 98.0%. Initial results of studies made by researchers belonging to our group but located in other laboratories showed the feasibility of performing quantitative analyses on the same images. Conclusions: These experiences show that diagnostic teleconsultation and quantitative image analyses via the Internet are not only feasible, but practical, and allow close collaboration between researchers widely separated by geographical distance and analytical resources. PMID:12037030
VLT Images the Horsehead Nebula
NASA Astrophysics Data System (ADS)
2002-01-01
Summary A new, high-resolution colour image of one of the most photographed celestial objects, the famous "Horsehead Nebula" (IC 434) in Orion, has been produced from data stored in the VLT Science Archive. The original CCD frames were obtained in February 2000 with the FORS2 multi-mode instrument at the 8.2-m VLT KUEYEN telescope on Paranal (Chile). The comparatively large field-of-view of the FORS2 camera is optimally suited to show this extended object and its immediate surroundings in impressive detail. PR Photo 02a/02 : View of the full field around the Horsehead Nebula. PR Photo 02b/02 : Enlargement of a smaller area around the Horse's "mouth" A spectacular object ESO PR Photo 02a/02 ESO PR Photo 02a/02 [Preview - JPEG: 400 x 485 pix - 63k] [Normal - JPEG: 800 x 970 pix - 896k] [Full-Res - JPEG: 1951 x 2366 pix - 4.7M] ESO PR Photo 02b/02 ESO PR Photo 02b/02 [Preview - JPEG: 400 x 501 pix - 91k] [Normal - JPEG: 800 x 1002 pix - 888k] [Full-Res - JPEG: 1139 x 1427 pix - 1.9M] Caption : PR Photo 02a/02 is a reproduction of a composite colour image of the Horsehead Nebula and its immediate surroundings. It is based on three exposures in the visual part of the spectrum with the FORS2 multi-mode instrument at the 8.2-m KUEYEN telescope at Paranal. PR Photo 02b/02 is an enlargement of a smaller area. Technical information about these photos is available below. PR Photo 02a/02 shows the famous "Horsehead Nebula" , which is situated in the Orion molecular cloud complex. Its official name is Barnard 33 and it is a dust protrusion in the southern region of the dense dust cloud Lynds 1630 , on the edge of the HII region IC 434 . The distance to the region is about 1400 light-years (430 pc). This beautiful colour image was produced from three images obtained with the multi-mode FORS2 instrument at the second VLT Unit Telescope ( KUEYEN ), some months after it had "First Light", cf. PR 17/99. 
The image files were extracted from the VLT Science Archive Facility and the photo constitutes a fine example of the subsequent use of such valuable data. Details about how the photo was made and some weblinks to other pictures are available below. The comparatively large field-of-view of the FORS2 camera (nearly 7 x 7 arcmin²) and the detector resolution (0.2 arcsec/pixel) make this instrument optimally suited for imaging of this extended object and its immediate surroundings. There is obviously a wealth of detail, and scientific information can be derived from the colours shown in this photo. Three predominant colours are seen in the image: red from the hydrogen (H-alpha) emission from the HII region; brown for the foreground obscuring dust; and blue-green for scattered starlight. The blue-green regions of the Horsehead Nebula correspond to regions not shadowed from the light from the stars in the H II region to the top of the picture and scatter stellar radiation towards the observer; these are thus "mountains" of dust. The Horse's "mane" is an area in which there is less dust along the line-of-sight and the background (H-alpha) emission from ionized hydrogen atoms can be seen through the foreground dust. A chaotic area At the high resolution of this image the Horsehead appears very chaotic with many wisps and filaments and diffuse dust. At the top of the figure there is a bright rim separating the dust from the HII region. This is an "ionization front" where the ionizing photons from the HII region are moving into the cloud, destroying the dust and the molecules and heating and ionizing the gas. Dust and molecules can exist in cold regions of interstellar space which are shielded from starlight by very large layers of gas and dust. Astronomers refer to elongated structures, such as the Horsehead, as "elephant trunks" (never mind the zoological confusion!) which are common on the boundaries of HII regions. 
They can also be seen elsewhere in Orion - another well-known example is the pillars of M16 (the "Eagle Nebula") made famous by the fine HST image - a new infrared view by VLT and ISAAC of this area was published last month, cf. PR 25/01. Such structures are only temporary as they are being constantly eroded by the expanding region of ionized gas and are destroyed on timescales of typically a few thousand years. The Horsehead as we see it today will therefore not last forever and minute changes will become observable as time passes. The surroundings To the east of the Horsehead (at the bottom of this image) there is ample evidence for star formation in the Lynds 1630 dark cloud . Here, the reflection nebula NGC 2023 surrounds the hot B-type star HD 37903 and some Herbig Haro objects are found which represent high-speed gas outflows from very young stars with masses of around a solar mass. The HII region to the west (top of picture) is ionized by the strong radiation from the bright star Sigma Orionis , located just below the southernmost star in Orion's Belt. The chain of dust and molecular clouds is part of the Orion A and B regions (also known as Orion's "sword"). Other images of the Horsehead Nebula The Horsehead Nebula is a favourite object for amateur astrophotographers and large numbers of images are available on the WWW. Due to its significant extension and the limited field-of-view of some professional telescopes, fewer photographs are available from today's front-line facilities, except from specialized wide-field instruments like Schmidt telescopes, etc. The links below point to a number of prominent photos obtained elsewhere and some contain further useful links to other sites with more information about this splendid sky area. 
"Astronomy Picture of the Day" : http://antwrp.gsfc.nasa.gov/apod/ap971025.html Hubble Heritage image : http://hubble.stsci.edu/news_.and._views/pr.cgi?2001%2B12 INT Wide-Field image : http://www.ing.iac.es/PR/science/horsehead.htm NOT image : http://www.not.iac.es/new/general/photos/astronomical/ NOAO Wide-Field image : http://www.noao.edu/outreach/press/pr01/ir0101.html Bill Arnett's site : http://www.seds.org/billa/twn/b33x.html Technical information about the photos PR Photo 02a/02 was produced from three images, obtained on February 1, 2000, with the FORS2 multi-mode instrument at the 8.2-m KUEYEN Unit Telescope and extracted from the VLT Science Archive Facility. The frames were obtained in the B-band (600 sec exposure; wavelength 429 nm; FWHM 88 nm; here rendered as blue), V-band (300 sec; 554 nm; 112 nm; green) and R-band (120 sec; 655 nm; 165 nm; red). The original pixel size is 0.2 arcsec. The photo shows the full field recorded in all three colours, approximately 6.5 x 6.7 arcmin². The seeing was about 0.75 arcsec. PR Photo 02b/02 is an enlargement of a smaller area, measuring 3.8 x 4.1 arcmin². North is to the left and east is down (the usual orientation for showing this object). The frames were recorded with a TK2048 SITe CCD and the ESO-FIERA Controller, built by the Optical Detector Team (ODT). The images were prepared by Cyril Cavadore (ESO-ODT) , by means of Prism software. ESO PR Photos 02a-b/02 may be reproduced, if credit is given to the European Southern Observatory (ESO).
NASA/IPAC Infrared Archive's General Image Cutouts Service
NASA Astrophysics Data System (ADS)
Alexov, A.; Good, J. C.
2006-07-01
The NASA/IPAC Infrared Archive (IRSA) "Cutouts" Service (http://irsa.ipac.caltech.edu/applications/Cutouts) is a general tool for creating small "cutout" FITS images and JPEGs from collections of data archived at IRSA. This service is a companion to IRSA's Atlas tool (http://irsa.ipac.caltech.edu/applications/Atlas/), which currently serves over 25 different data collections of various sizes and complexity and returns entire images for a user-defined region of the sky. The Cutouts Service sits on top of Atlas and extends the Atlas functionality by generating subimages at locations and sizes requested by the user from images already identified by Atlas. These results can be downloaded individually, in batch mode (using the program wget), or as a tar file. Cutouts re-uses IRSA's software architecture along with the publicly available Montage mosaicking tools. The advantages and disadvantages of this approach to generic cutout serving will be discussed.
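At its core, the subimage generation that Cutouts performs on top of Atlas amounts to slicing a pixel window out of a larger array. A minimal sketch in pure Python; the clamping policy at image edges is an assumption (a real service works on FITS data with WCS sky coordinates, not plain lists, and might pad or reject out-of-bounds requests instead):

```python
def cutout(image, row, col, height, width):
    """Extract a height x width sub-image centred near (row, col).

    `image` is a list of rows. The window is clamped so it stays
    inside the image bounds (an illustrative policy choice).
    """
    r0 = max(0, min(row - height // 2, len(image) - height))
    c0 = max(0, min(col - width // 2, len(image[0]) - width))
    return [r[c0:c0 + width] for r in image[r0:r0 + height]]

# 6x6 test image whose pixel value encodes its (row, col) position
img = [[10 * r + c for c in range(6)] for r in range(6)]
sub = cutout(img, 3, 3, 2, 2)
```

Serving many small windows this way is far cheaper in bandwidth than returning the entire Atlas image for every request, which is the point of the service.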
Integrated test system of infrared and laser data based on USB 3.0
NASA Astrophysics Data System (ADS)
Fu, Hui Quan; Tang, Lin Bo; Zhang, Chao; Zhao, Bao Jun; Li, Mao Wen
2017-07-01
Based on USB 3.0, this paper presents the design of an integrated test system for both an infrared image data and a laser signal data processing module. The core of the design is FPGA logic control; the design uses dual-chip DDR3 SDRAM to achieve high-speed caching of laser data, receives parallel LVDS image data through a serial-to-parallel conversion chip, and achieves high-speed data communication between the system and the host computer through the USB 3.0 bus. The experimental results show that the developed PC software realizes the real-time display of the 14-bit LVDS original image after 14-to-8 bit conversion and of the JPEG2000 compressed image after decompression in software, and can realize the real-time display of the acquired laser signal data. The correctness of the test system design is verified, indicating that the interface link is normal.
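The 14-to-8 bit conversion mentioned in the results can be illustrated with a small sketch. The simplest mapping is a plain right-shift by 6 bits; the windowed linear form below (with `lo`/`hi` taken from the scene histogram) is an assumption about how such a display conversion might be done, not the paper's actual method:

```python
def to8bit(sample14, lo=0, hi=(1 << 14) - 1):
    """Map a 14-bit sample into the 0..255 display range by linear
    window scaling. With the default window this is equivalent to
    scaling the full 14-bit range down to 8 bits."""
    sample14 = max(lo, min(sample14, hi))   # clip into the window
    return (sample14 - lo) * 255 // (hi - lo)
```

Narrowing the `[lo, hi]` window stretches the contrast of the displayed sub-range, which is why window-based mappings are common for high-bit-depth sensor data.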
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
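The prediction step for the major-interference region can be sketched as follows: a typical curve predicts another interferential curve through a fitted scale factor, and only the prediction residuals would then be entropy coded. The single-scalar least-squares model below is a deliberate simplification of the paper's curve-fitting, used only to illustrate the predict-then-code idea:

```python
def residuals(curve, typical):
    """Predict `curve` as a * `typical` (least-squares scale factor)
    and return (a, prediction errors). In a real coder the errors,
    being small for well-correlated curves, would be entropy coded."""
    num = sum(c * t for c, t in zip(curve, typical))
    den = sum(t * t for t in typical) or 1   # guard against all-zero
    a = num / den
    return a, [c - a * t for c, t in zip(curve, typical)]
```

The better the typical curve matches the shape of the predicted curve, the closer the residuals are to zero and the fewer bits they cost.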
OXYGEN-RICH SUPERNOVA REMNANT IN THE LARGE MAGELLANIC CLOUD
NASA Technical Reports Server (NTRS)
2002-01-01
This is a NASA Hubble Space Telescope image of the tattered debris of a star that exploded 3,000 years ago as a supernova. This supernova remnant, called N132D, lies 169,000 light-years away in the satellite galaxy, the Large Magellanic Cloud. A Hubble Wide Field Planetary Camera 2 image of the inner regions of the supernova remnant shows the complex collisions that take place as fast moving ejecta slam into cool, dense interstellar clouds. This level of detail in the expanding filaments could only be seen previously in much closer supernova remnants. Now, Hubble's capabilities extend the detailed study of supernovae out to the distance of a neighboring galaxy. Material thrown out from the interior of the exploded star at velocities of more than four million miles per hour (2,000 kilometers per second) plows into neighboring clouds to create luminescent shock fronts. The blue-green filaments in the image correspond to oxygen-rich gas ejected from the core of the star. The oxygen-rich filaments glow as they pass through a network of shock fronts reflected off dense interstellar clouds that surrounded the exploded star. These dense clouds, which appear as reddish filaments, also glow as the shock wave from the supernova crushes and heats the clouds. Supernova remnants provide a rare opportunity to observe directly the interiors of stars far more massive than our Sun. The precursor star to this remnant, which was located slightly below and left of center in the image, is estimated to have been 25 times the mass of our Sun. These stars 'cook' heavier elements through nuclear fusion, including oxygen, nitrogen, carbon, iron etc., and the titanic supernova explosions scatter this material back into space where it is used to create new generations of stars. This is the mechanism by which the gas and dust that formed our solar system became enriched with the elements that sustain life on this planet. 
Hubble spectroscopic observations will be used to determine the exact chemical composition of this nuclear-processed material, and thereby test theories of stellar evolution. The image shows a region of the remnant 50 light-years across. The supernova explosion should have been visible from Earth's southern hemisphere around 1,000 B.C., but there are no known historical records that chronicle what would have appeared as a 'new star' in the heavens. This 'true color' picture was made by superposing images taken on 9-10 August 1994 in three of the strongest optical emission lines: singly ionized sulfur (red), doubly ionized oxygen (green), and singly ionized oxygen (blue). Photo credit: Jon A. Morse (STScI) and NASA. Investigating team: William P. Blair (PI; JHU), Michael A. Dopita (MSSSO), Robert P. Kirshner (Harvard), Knox S. Long (STScI), Jon A. Morse (STScI), John C. Raymond (SAO), Ralph S. Sutherland (UC-Boulder), and P. Frank Winkler (Middlebury). Image files in GIF and JPEG format may be accessed via anonymous ftp from oposite.stsci.edu in /pubinfo: GIF: /pubinfo/GIF/N132D.GIF JPEG: /pubinfo/JPEG/N132D.jpg The same images are available via World Wide Web from links in URL http://www.stsci.edu/public.html.
2015-12-24
Ripple-Carry RCA Ripple-Carry Adder RF Radio Frequency RMS Root-Mean-Square SEU Single Event Upset SIPI Signal and Image Processing Institute SNR...correctness, where 0.5 < p < 1, and a probability (1−p) of error. Errors could be caused by noise, radio frequency (RF) interference, crosstalk...utilized in the Apollo Guidance Computer is the three input NOR Gate. . . At the time that the decision was made to use in- 11 tegrated circuits, the
Design and evaluation of web-based image transmission and display with different protocols
NASA Astrophysics Data System (ADS)
Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo
2011-03-01
There are many Web-based image accessing technologies used in the medical imaging area, such as component-based (ActiveX Control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Application (RIA), or HTML5-based Web display. Different Web display methods have different performance in different network environments. In this presentation, we give an evaluation of two developed Web-based image display systems. The first one is used for thin-client Web display. It works between a PACS Web server with a WADO interface and a thin client. The PACS Web server provides JPEG format images to HTML pages. The second one is for thick-client Web display. It works between a PACS Web server with a WADO interface and a thick client running in browsers containing an ActiveX control, a Flash RIA program, or HTML5 scripts. The PACS Web server provides native DICOM format images or a JPIP stream for these clients.
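A thin client of the kind described retrieves JPEG renderings by building WADO-URI requests against the PACS Web server. A sketch of constructing such a request URL; the base address and UIDs are placeholders, while the parameter names follow the WADO-URI convention:

```python
from urllib.parse import urlencode

def wado_jpeg_url(base, study_uid, series_uid, object_uid):
    """Build a WADO-URI request asking the server for a JPEG
    rendering of one DICOM object (the thin-client case above).
    The base URL and the UID values are illustrative only."""
    query = urlencode({
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": "image/jpeg",  # rendered JPEG, not raw DICOM
    })
    return f"{base}?{query}"

url = wado_jpeg_url("http://pacs.example.org/wado",
                    "1.2.840.1", "1.2.840.1.2", "1.2.840.1.2.3")
```

A thick client would instead request `application/dicom` (or a JPIP stream) and do the rendering itself, which is exactly the division of labour evaluated in the abstract.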
The Hazards Data Distribution System update
Jones, Brenda K.; Lamb, Rynn M.
2010-01-01
After a major disaster, a satellite image or a collection of aerial photographs of the event is frequently the fastest, most effective way to determine its scope and severity. The U.S. Geological Survey (USGS) Emergency Operations Portal provides emergency first responders and support personnel with easy access to imagery and geospatial data, geospatial Web services, and a digital library focused on emergency operations. Imagery and geospatial data are accessed through the Hazards Data Distribution System (HDDS). HDDS historically provided data access and delivery services through nongraphical interfaces that allow emergency response personnel to select and obtain pre-event baseline data and (or) event/disaster response data. First responders are able to access full-resolution GeoTIFF images or JPEG images at medium- and low-quality compressions through ftp downloads. USGS HDDS home page: http://hdds.usgs.gov/hdds2/
The Helioviewer Project: Solar Data Visualization and Exploration
NASA Astrophysics Data System (ADS)
Hughitt, V. Keith; Ireland, J.; Müller, D.; García Ortiz, J.; Dimitoglou, G.; Fleck, B.
2011-05-01
SDO has only been operating a little over a year, but in that short time it has already transmitted hundreds of terabytes of data, making it impossible for data providers to maintain a complete archive of data online. By storing an extremely efficiently compressed subset of the data, however, the Helioviewer project has been able to maintain a continuous record of high-quality SDO images starting from soon after the commissioning phase. The Helioviewer project was not designed to deal with SDO alone, however, and continues to add support for new types of data, the most recent of which are STEREO EUVI and COR1/COR2 images. In addition to adding support for new types of data, improvements have been made to both the server-side and client-side products that are part of the project. A new open-source JPEG2000 (JPIP) streaming server has been developed offering a vastly more flexible and reliable backend for the Java/OpenGL application JHelioviewer. Meanwhile the web front-end, Helioviewer.org, has also made great strides both in improving reliability, and also in adding new features such as the ability to create and share movies on YouTube. Helioviewer users are creating nearly two thousand movies a day from the over six million images that are available to them, and that number continues to grow each day. We provide an overview of recent progress with the various Helioviewer Project components and discuss plans for future development.
Joint reconstruction of multiview compressed images.
Thirumalai, Vijayaraghavan; Frossard, Pascal
2013-05-01
Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem where the distributively compressed images are decoded together in order to benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images, which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.
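The total-variation term at the heart of the convex reconstruction problem is simply the sum of absolute differences between neighbouring pixels. A sketch of the anisotropic form in pure Python (the actual constrained optimization solver is well beyond this illustration):

```python
def total_variation(img):
    """Anisotropic total variation of a 2-D image (list of rows):
    sum of |horizontal| + |vertical| pixel differences. The joint
    decoder described above seeks images that keep this value small
    while staying close to their compressed versions."""
    tv = 0
    for r in range(len(img)):
        for c in range(len(img[0])):
            if c + 1 < len(img[0]):
                tv += abs(img[r][c + 1] - img[r][c])
            if r + 1 < len(img):
                tv += abs(img[r + 1][c] - img[r][c])
    return tv
```

Minimizing TV favours piecewise-smooth images with sharp edges, which is why it suppresses blocky JPEG artifacts without over-blurring object boundaries.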
Detection of Copy-Rotate-Move Forgery Using Zernike Moments
NASA Astrophysics Data System (ADS)
Ryu, Seung-Jin; Lee, Min-Jeong; Lee, Heung-Kyu
As forgeries have become more common, the importance of forgery detection has greatly increased. Copy-move forgery, one of the most commonly used methods, copies a part of the image and pastes it into another part of the same image. In this paper, we propose a detection method for copy-move forgery that localizes duplicated regions using Zernike moments. Since the magnitude of Zernike moments is algebraically invariant against rotation, the proposed method can detect a forged region even though it is rotated. Our scheme is also resilient to intentional distortions such as additive white Gaussian noise, JPEG compression, and blurring. Experimental results demonstrate that the proposed scheme is appropriate for identifying regions forged by copy-rotate-move forgery.
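The rotation invariance that motivates matching on Zernike moment magnitudes can be demonstrated with the closely related complex moments, which share the key property: rotating the image only multiplies the moment by a phase factor, leaving its magnitude unchanged. This is a simplified stand-in for illustration, not the paper's Zernike computation:

```python
def complex_moment(img, p, q):
    """Complex moment c_pq = sum f(x,y) * z^p * conj(z)^q, with z
    measured from the image centre. A rotation by angle t maps
    c_pq -> exp(i*(p-q)*t) * c_pq, so |c_pq| is rotation-invariant,
    just like the Zernike moment magnitudes used in the paper."""
    n = len(img)
    c = 0j
    for y in range(n):
        for x in range(n):
            z = complex(x - (n - 1) / 2, y - (n - 1) / 2)
            c += img[y][x] * z**p * z.conjugate()**q
    return c

def rot90(img):
    """Rotate a square image by 90 degrees (exact on a pixel grid)."""
    return [list(row) for row in zip(*img[::-1])]
```

Because a 90-degree rotation permutes grid pixels exactly, the magnitudes agree to floating-point precision, whereas arbitrary angles would add only small interpolation error.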
A Portrait of One Hundred Thousand and One Galaxies
NASA Astrophysics Data System (ADS)
2002-08-01
Rich and Inspiring Experience with NGC 300 Images from the ESO Science Data Archive Summary A series of wide-field images centred on the nearby spiral galaxy NGC 300, obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory, have been combined into a magnificent colour photo. These images have been used by different groups of astronomers for various kinds of scientific investigations, ranging from individual stars and nebulae in NGC 300 to distant galaxies and other objects in the background. This material provides an interesting demonstration of the multiple use of astronomical data, now facilitated by the establishment of extensively documented data archives, like the ESO Science Data Archive, which is growing rapidly and already contains over 15 Terabytes. Based on the concept of Astronomical Virtual Observatories (AVOs), the use of archival data sets is on the rise and provides a large number of scientists with excellent opportunities for front-line investigations without having to wait for precious observing time. In addition to presenting a magnificent astronomical photo, the present account also illustrates this important new tool of the modern science of astronomy and astrophysics. PR Photo 18a/02: WFI colour image of spiral galaxy NGC 300 (full field).
PR Photo 18b/02: Cepheid stars in NGC 300 PR Photo 18c/02: H-alpha image of NGC 300 PR Photo 18d/02: Distant cluster of galaxies CL0053-37 in the NGC 300 field PR Photo 18e/02: Dark matter distribution in CL0053-37 PR Photo 18f/02: Distant, reddened cluster of galaxies in the NGC 300 field PR Photo 18g/02: Distant galaxies, seen through the outskirts of NGC 300 PR Photo 18h/02: "The View Beyond" ESO PR Photo 18a/02 [Preview - JPEG: 400 x 412 pix - 112k] [Normal - JPEG: 1200 x 1237 pix - 1.7M] [Hi-Res - JPEG: 4000 x 4123 pix - 20.3M] Caption: PR Photo 18a/02 is a reproduction of a colour-composite image of the nearby spiral galaxy NGC 300 and the surrounding sky field, obtained in 1999 and 2000 with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. See the text for details about the many different uses of this photo. Smaller areas in this large field are shown in Photos 18b-h/02, cf. below. The Hi-Res version of this image has been compressed by a factor of 4 (2 x 2 pixel rebinning) to reduce it to a reasonably transportable size. Technical information about this and the other photos is available at the end of this communication. Located some 7 million light-years away, the spiral galaxy NGC 300 [1] is a beautiful representative of its class, a Milky-Way-like member of the prominent Sculptor group of galaxies in the southern constellation of that name. NGC 300 is a big object in the sky - being so close, it extends over an angle of almost 25 arcmin, only slightly less than the size of the full moon. It is also relatively bright; even a small pair of binoculars will unveil this magnificent spiral galaxy as a hazy glowing patch on a dark sky background. The comparatively small distance of NGC 300 and its face-on orientation provide astronomers with a wonderful opportunity to study in great detail its structure as well as its various stellar populations and interstellar medium.
It was exactly for this purpose that some images of NGC 300 were obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. This advanced 67-million pixel digital camera has already produced many impressive pictures, some of which are displayed in the WFI Photo Gallery [2]. With its large field of view, 34 x 34 arcmin^2, the WFI is optimally suited to show the full extent of the spiral galaxy NGC 300 and its immediate surroundings in the sky, cf. PR Photo 18a/02. NGC 300 and "Virtual Astronomy" In addition to being a beautiful sight in its own right, the present WFI image of NGC 300 is also a most instructive showcase of how astronomers with very different research projects can nowadays make effective use of the same observations for their programmes. The idea of exploiting one and the same data set is not new, but thanks to rapid technological developments it has recently developed into a very powerful tool for astronomers in their continued quest to understand the Universe. This kind of work has now become very efficient with the advent of a fully searchable data archive from which observational data can then - after the expiry of a nominal one-year proprietary period for the observers - be made available to other astronomers. The ESO Science Data Archive was established some years ago and now encompasses more than 15 Terabytes [3]. Normally, the identification of specific data sets in such a large archive would be a very difficult and time-consuming task. However, effective projects and software "tools" like ASTROVIRTEL and Querator now allow users to quickly "filter" large amounts of data and extract those of their specific interest. Indeed, "Archival Astronomy" has already led to many important discoveries, cf. the ASTROVIRTEL list of publications. There is no doubt that "Virtual Astronomical Observatories" will play an increasingly important role in the future, cf. ESO PR 26/01.
The present wide-field images of NGC 300 provide an impressive demonstration of the enormous potential of this innovative approach. Some of the ways they were used are explained below. Cepheids in NGC 300 and the cosmic distance scale ESO PR Photo 18b/02 [Preview - JPEG: 468 x 400 pix - 112k] [Full-Res - JPEG: 1258 x 1083 pix - 1.6M] Caption: PR Photo 18b/02 shows some of the Cepheid-type stars in the spiral galaxy NGC 300 (at the centre of the markers), as they were identified by Wolfgang Gieren and collaborators during the research programme for which the WFI images of NGC 300 were first obtained. In this area of NGC 300, there is also a huge cloud of ionized hydrogen (an "HII shell"). It measures about 2000 light-years in diameter, thus dwarfing even the enormous Tarantula Nebula in the LMC, also photographed with the WFI (cf. ESO PR Photos 14a-g/02). The largest versions ("normal" or "full-res") of this and the following photos are shown with their original pixel size, demonstrating the incredible amount of detail visible on one WFI image. Technical information about this photo is available below. In 1999, Wolfgang Gieren (Universidad de Concepcion, Chile) and his colleagues started a search for Cepheid-type variable stars in NGC 300. These stars constitute a key element in the measurement of distances in the Universe. It has been known for many years that the pulsation period of a Cepheid-type star depends on its intrinsic brightness (its "luminosity"). Thus, once its period has been measured, astronomers can calculate its luminosity. By comparing this to the star's apparent brightness in the sky, and applying the well-known inverse-square law for the diminution of light with distance, they can obtain the distance to the star. This fundamental method has allowed some of the most reliable measurements of distances in the Universe and has been essential for all kinds of astrophysics, from the closest stars to the remotest galaxies.
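The period-luminosity reasoning described above amounts to a two-line calculation. The sketch below is only illustrative: the P-L coefficients `a` and `b` are rough placeholder values (real calibrations differ by band and require extinction corrections), and the inputs are invented.

```python
import math

def cepheid_distance_ly(period_days, apparent_mag, a=-2.81, b=-1.43):
    """Distance to a Cepheid from its period and mean apparent magnitude.
    a, b are illustrative period-luminosity coefficients, not a real
    calibration; no extinction correction is applied."""
    abs_mag = a * math.log10(period_days) + b            # P-L relation
    d_parsec = 10 ** ((apparent_mag - abs_mag + 5) / 5)  # distance modulus
    return d_parsec * 3.2616                             # parsecs -> light-years
```

With these placeholder coefficients, a 30-day Cepheid of mean apparent magnitude ~22 comes out at a distance of order 10 million light-years, i.e. the right ballpark for a Sculptor-group galaxy.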
Prior to Gieren's new project, only about a dozen Cepheids were known in NGC 300. However, by regularly obtaining wide-field WFI exposures of NGC 300 from July 1999 through January 2000 and carefully monitoring the apparent brightness of its brighter stars during that period, the astronomers detected more than 100 additional Cepheids. The brightness variations (in astronomical terminology: "light curves") could be determined with excellent precision from the WFI data. They showed that the pulsation periods of these Cepheids range from about 5 to 115 days. Some of these Cepheids are identified on PR Photo 18b/02, in the middle of a very crowded field in NGC 300. When fully studied, these unique observational data will yield a new and very accurate distance to NGC 300, making this galaxy a future cornerstone in the calibration of the cosmic distance scale. Moreover, they will also make it possible to understand in more detail how the brightness of a Cepheid-type star depends on its chemical composition, currently a major uncertainty in the application of the Cepheid method to the calibration of the extragalactic distance scale. Indeed, the effect of the abundance of different elements on the luminosity of a Cepheid can be especially well measured in NGC 300, owing to the large variations of these abundances among the stars located in the disk of this galaxy. Gieren and his group, in collaboration with astronomers Fabio Bresolin and Rolf Kudritzki (Institute of Astronomy, Hawaii, USA), are currently measuring the variations of these chemical abundances in stars in the disk of NGC 300, by means of spectra of about 60 blue supergiant stars, obtained with the FORS multi-mode instruments at the ESO Very Large Telescope (VLT) on Paranal. These stars, which are among the optically brightest in NGC 300, were first identified in the WFI images of this galaxy obtained in different colours - the same that were used to produce PR Photo 18a/02.
The nature of those stars was later spectroscopically confirmed at the VLT. As an important byproduct of these measurements, the luminosities of the blue supergiant stars in NGC 300 will themselves be calibrated (as a new cosmic "standard candle"), taking advantage of their stellar wind properties that can be measured from the VLT spectra. The WFI Cepheid observations in NGC 300, as well as the VLT blue supergiant star observations, form part of a large research project recently initiated by Gieren and his group that is concerned with the improvement of various stellar distance indicators in nearby galaxies (the "ARAUCARIA" project). Clues on star formation history in NGC 300 ESO PR Photo 18c/02 [Preview - JPEG: 440 x 400 pix - 63k] [Normal - JPEG: 1200 x 1091 pix - 664k] [Full-Res - JPEG: 5515 x 5014 pix - 14.3M] Caption: PR Photo 18c/02 displays NGC 300, as seen through a narrow optical filter (H-alpha) in the red light of hydrogen atoms. A population of intrinsically bright and young stars turned "on" just a few million years ago. Their radiation and strong stellar winds have shaped many of the clouds of ionized hydrogen gas ("HII shells") seen in this photo. The "rings" near some of the bright stars are caused by internal reflections in the telescope. Technical information about this photo is available below. But there is much more to discover on these WFI images of NGC 300! The WFI images obtained in several broad and narrow band filters from the ultraviolet to the near-infrared spectral region (U, B, V, R, I and H-alpha) allow a detailed study of groups of massive, hot stars (known as "OB associations") and a large number of huge clouds of ionized hydrogen ("HII shells") in this galaxy. Corresponding studies have been carried out by Gieren's group, resulting in the discovery of an amazing number of OB associations, including a number of giant associations.
These investigations, taken together with the observed distribution of the pulsation periods of the Cepheids, make it possible to better understand the history of star formation in NGC 300. For example, three distinct peaks in the number distribution of the pulsation periods of the Cepheids seem to indicate that there have been at least three different bursts of star formation within the past 100 million years. The large number of OB associations and HII shells (PR Photo 18c/02) furthermore indicates the presence of a numerous, very young stellar population in NGC 300, aged only a few million years. Dark matter and the observed shapes of distant galaxies In early 2002, Thomas Erben and Mischa Schirmer from the "Institut für Astrophysik und extraterrestrische Forschung" (IAEF, Universität Bonn, Germany), in the course of their ASTROVIRTEL programme, identified and retrieved all available broad-band and H-alpha images of NGC 300 available in the ESO Science Data Archive. Most of these had been observed for the project by Gieren and his colleagues, described above. However, the scientific interest of the German astronomers was very different from that of their colleagues and they were not at all concerned about the main object in the field, NGC 300. In a very different approach, they instead wanted to study those images to measure the amount of dark matter in the Universe, by means of the weak gravitational lensing effect produced by distant galaxy clusters. Various observations, ranging from the measurement of internal motions ("rotation curves") in spiral galaxies to the presence of hot X-ray gas in clusters of galaxies and the motion of galaxies in those clusters, indicate that there is about ten times more matter in the Universe than what is observed in the form of stars, gas and galaxies ("luminous matter"). As this additional matter does not emit light at any wavelength, it is commonly referred to as "dark" matter - its true nature is as yet entirely unclear.
Insight into the distribution of dark matter in the Universe can be gained by looking at the shapes of images of very remote galaxies, billions of light-years away, cf. ESO PR 24/00. Light from such distant objects travels vast distances through space before arriving here on Earth, and whenever it passes heavy clusters of galaxies, it is bent a little due to the associated gravitational field. Thus, in long-exposure, high-quality images, this "weak lensing" effect can be perceived as a coherent pattern of distortion of the images of background galaxies. Gravitational lensing in the NGC 300 field ESO PR Photo 18d/02 ESO PR Photo 18d/02 [Preview - JPEG: 400 x 495 pix - 82k] [Full-Res - JPEG: 1304 x 1615 pix - 3.2M] Caption : PR Photo 18d/02 shows the distant cluster of galaxies CL0053-37 , as imaged on the WFI photo of the NGC 300 sky field. The elongated distribution of the cluster galaxies, as well as the presence of two large, early-type elliptical galaxies indicate that this cluster is still in the process of formation. Some of the galaxies appear to be merging. From the measured redshift ( z = 0.1625), a distance of about 2.1 billion light-years is deduced. Technical information about this photo is available below. ESO PR Photo 18e/02 ESO PR Photo 18e/02 [Preview - JPEG: 400 x 567 pix - 89k] [Normal - JPEG: 723 x 1024 pix - 424k] Caption : PR Photo 18e/02 is a "map" of the dark matter distribution (black contours) in the cluster of galaxies CL0053-37 (shown in PR Photo 18d/02 ), as obtained from the weak lensing effects detected in the WFI images, and the X-ray flux (green contours) taken from the All-Sky Survey carried out by the ROSAT satellite observatory. The distribution of galaxies resembles the elongated, dark-matter profile. 
Because of ROSAT's limited image sharpness (low "angular resolution"), it cannot be entirely ruled out that the observed X-ray emission is due to an active nucleus of a galaxy in CL0053-37, or even a foreground stellar binary system in NGC 300. The WFI NGC 300 images appeared promising for gravitational lensing research because of the exceptionally long total exposure time. Although the large foreground galaxy NGC 300 would block the light of tens of thousands of galaxies in the background, a huge number of others would still be visible in the outskirts of this sky field, making a search for clusters of galaxies and associated lensing effects quite feasible. To ensure the best possible image sharpness in the combined image, and thus to obtain the most reliable measurements of the shapes of the background objects, only red (R-band) images obtained under the best seeing conditions were combined. In order to provide additional information about the colours of these faint objects, a similar approach was adopted for images in the other bands as well. The German astronomers indeed measured a significant lensing effect for one of the galaxy clusters in the field (CL0053-37, see PR Photo 18d/02); the images of background galaxies around this cluster were noticeably distorted in the direction tangential to the cluster center. Based on the measured degree of distortion, a map of the distribution of (dark) matter in this direction was constructed (PR Photo 18e/02). The separation of unlensed foreground (bluer) and lensed background galaxies (redder) greatly profited from the photometric measurements done by Gieren's group in the course of their work on the Cepheids in NGC 300. Assuming that the lensed background galaxies lie at a mean redshift of 1.0, i.e. a distance of 8 billion light-years, a mass of about 2 x 10^14 solar masses was obtained for the CL0053-37 cluster.
This lensing analysis in the NGC 300 field is part of the Garching-Bonn Deep Survey (GaBoDS) , a weak gravitational lensing survey led by Peter Schneider (IAEF). GaBoDS is based on exposures made with the WFI and until now a sky area of more than 12 square degrees has been imaged during very good seeing conditions. Once complete, this investigation will allow more insight into the distribution and cosmological evolution of galaxy cluster masses, which in turn provide very useful information about the structure and history of the Universe. One hundred thousand galaxies ESO PR Photo 18f/02 ESO PR Photo 18f/02 [Preview - JPEG: 400 x 526 pix - 93k] [Full-Res - JPEG: 756 x 994 pix - 1.0M] Caption : PR Photo 18f/02 shows a group of galaxies , seen on the NGC 300 images. They are all quite red and their similar colours indicate that they must be about equally distant. They probably constitute a distant cluster, now in the stage of formation. Technical information about this photo is available below. ESO PR Photo 18g/02 ESO PR Photo 18g/02 [Preview - JPEG: 469 x 400 pix - xxk] [Full-Res - JPEG: 1055 x 899 pix - 968k] Caption : PR Photo 18g/02 shows an area in the outer regions of NGC 300. Disks of spiral galaxies are usually quite "thin" (some hundred light-years), as compared to their radial extent (tens of thousands of light-years across). In areas where only small amounts of dust are present, it is possible to see much more distant galaxies right through the disk of NGC 300 , as demonstrated by this image. Technical information about this photo is available below. ESO PR Photo 18h/02 ESO PR Photo 18h/02 [Preview - JPEG: 451 x 400 pix - 89k] [Normal - JPEG: 902 x 800 pix - 856k] [Full-Res - JPEG: 2439 x 2163 pix - 6.0M] Caption : PR Photo 18h/02 is an astronomers' joy ride to infinity. Such a rarely seen view of our universe imparts a feeling of the vast distances in space. 
In the upper half of the image, the outer region of NGC 300 is resolved into innumerable stars, while in the lower half, myriads of galaxies - a thousand times more distant - catch the eye. In reality, many of them are very similar to NGC 300; they are just much more remote. In addition to allowing a detailed investigation of dark matter and lensing effects in this field, the present, very "deep" colour image of NGC 300 invites a closer inspection of the background galaxy population itself. No less than about 100,000 galaxies of all types are visible in this amazing image. Three known quasars ([ICS96] 005342.1-375947, [ICS96] 005236.1-374352, [ICS96] 005336.9-380354) with redshifts 2.25, 2.35 and 2.75, respectively, happen to lie inside this sky field, together with many interacting galaxies, some of which feature tidal tails. There are also several groups of highly reddened galaxies - probably distant clusters in formation, cf. PR Photo 18f/02. Others are seen right through the outer regions of NGC 300, cf. PR Photo 18g/02. More detailed investigations of the numerous galaxies in this field are now underway. From the nearby spiral galaxy NGC 300 to objects in the young Universe, it is all there, truly an astronomical treasure trove, cf. PR Photo 18h/02! Notes [1]: "NGC" means "New General Catalogue" (of nebulae and clusters) that was published in 1888 by J.L.E. Dreyer in the "Memoirs of the Royal Astronomical Society". [2]: Other colour composite images from the Wide-Field Imager at the MPG/ESO 2.2-m telescope at the La Silla Observatory are available at the ESO Outreach website at http://www.eso.org/esopia; see also the Tarantula Nebula in the LMC, cf. ESO PR Photos 14a-g/02. [3]: 1 Terabyte = 10^12 bytes = 1000 Gigabytes = 1 million million bytes.
Technical information about the photos PR Photo 18a/02 and all cutouts were made from 110 WFI images obtained in the B-band (total exposure time 11.0 hours, rendered as blue), 105 images in the V-band (10.4 hours, green), 42 images in the R-band (4.2 hours, red) and 21 images through a H-alpha filter (5.1 hours, red). In total, 278 images of NGC 300 have been assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). 150 GB of hard disk space were needed to store all uncompressed raw data, and about 1 TB of temporary files was produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation. The final colour image was assembled in Adobe Photoshop. To better show all details, the overall brightness of NGC 300 was reduced as compared to the outskirts of the field. The (red) "rings" near some of the bright stars originate from the H-alpha frames - they are caused by internal reflections in the telescope. The images were prepared by Mischa Schirmer at the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) by means of a software pipeline specialised for reduction of multiple CCD wide-field imaging camera data. The raw data were extracted from the public sector of the ESO Science Data Archive. The extensive observations were performed at the ESO La Silla Observatory by Wolfgang Gieren, Pascal Fouque, Frederic Pont, Hermann Boehnhardt and La Silla staff, during 34 nights between July 1999 and January 2000. Some additional observations taken during the second half of 2000 were retrieved by Mischa Schirmer and Thomas Erben from the ESO archive. CD-ROM with full-scale NGC 300 image soon available PR Photo 18a/02 has been compressed by a factor 4 (2 x 2 rebinning). For PR Photos 18b-h/02 , the largest-size versions of the images are shown at the original scale (1 pixel = 0.238 arcsec). 
A full-resolution TIFF-version (approx. 8000 x 8000 pix; 200 Mb) of PR Photo 18a/02 will shortly be made available by ESO on a special CD-ROM, together with some other WFI images of the same size. An announcement will follow in due time.
Adaptive intercolor error prediction coder for lossless color (RGB) picture compression
NASA Astrophysics Data System (ADS)
Mann, Y.; Peretz, Y.; Mitchell, Harvey B.
2001-09-01
Most of the current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes in an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in an 8% reduction in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
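The idea behind intercolor error prediction can be shown on synthetic data. The sketch below is not the paper's coder: it uses a crude left-neighbour predictor as a stand-in for JPEG-LS and invented correlated R/G planes, and simply compares the mean magnitude of the green plane's prediction errors coded directly versus after subtracting the red plane's errors.

```python
import numpy as np

def residuals(channel):
    """Left-neighbour prediction errors (a crude stand-in for the
    JPEG-LS context predictor)."""
    c = channel.astype(int)
    return c[:, 1:] - c[:, :-1]

rng = np.random.default_rng(1)
base = rng.integers(0, 200, (32, 32))        # shared scene structure
r = base + rng.integers(0, 8, base.shape)    # red plane
g = base + rng.integers(0, 8, base.shape)    # green plane, correlated with red

e_r, e_g = residuals(r), residuals(g)
direct = np.abs(e_g).mean()        # cost proxy: code G's errors directly
inter = np.abs(e_g - e_r).mean()   # IEP-style: predict G's errors from R's
```

Because the scene structure is common to both planes, the cross-channel error difference is far smaller than the raw prediction error, which is why interspectral prediction lowers the bit rate.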
Lossless data embedding for all image formats
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2002-04-01
Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This can be a very important property whenever serious concerns over image quality and artifact visibility arise, such as for medical images (due to legal reasons), military images, or images used as evidence in court that may be viewed after enhancement and zooming. We formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy. We use the general principles as guidelines for designing efficient, simple, and high-capacity lossless embedding methods for the three most common image format paradigms - raw, uncompressed formats (BMP), lossy or transform formats (JPEG), and palette formats (GIF, PNG). We close the paper with examples of how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of non-trivial tasks, including elegant lossless authentication using fragile watermarks. Note on terminology: some authors coined the terms erasable, removable, reversible, invertible, and distortion-free for the same concept.
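One of the general principles behind reversible embedding - losslessly compressing part of the image to make room for the payload, so the original can be restored bit-for-bit - can be sketched as follows. This is a simplified illustration, not the authors' method: it compresses the LSB plane with zlib and only works when that plane is compressible enough to free up capacity.

```python
import zlib
import numpy as np

def embed(img, payload):
    """Reversible embedding sketch: compress the LSB plane, then write
    (compressed plane + payload) back into the LSBs."""
    flat = img.flatten().astype(np.uint8)
    blob = zlib.compress(np.packbits(flat & 1).tobytes())
    data = (len(blob).to_bytes(4, "big") + blob +
            len(payload).to_bytes(4, "big") + payload)
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("LSB plane not compressible enough for this payload")
    out = flat.copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(img.shape)

def extract(stego):
    """Recover the payload and restore the original image exactly."""
    data = np.packbits(stego.flatten() & 1).tobytes()
    blob_len = int.from_bytes(data[:4], "big")
    blob = data[4:4 + blob_len]
    p_off = 4 + blob_len
    p_len = int.from_bytes(data[p_off:p_off + 4], "big")
    payload = data[p_off + 4:p_off + 4 + p_len]
    orig_lsb = np.unpackbits(
        np.frombuffer(zlib.decompress(blob), dtype=np.uint8))
    restored = (stego.flatten() & 0xFE) | orig_lsb[:stego.size]
    return restored.reshape(stego.shape), payload
```

After extraction the restored image is identical to the original, which is exactly the "distortion can be completely removed" property the abstract describes.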
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
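The decoder side of such a scheme - a linear reconstruction matrix learned by the MMSE criterion, so decoding is a single matrix multiply per block - can be sketched on synthetic data. Everything below is invented for illustration (correlated Gaussian "blocks" standing in for image blocks, a random measurement matrix); it is not the paper's gradient-field measurement allocation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 16, 8, 2000   # block dimension, measurements per block, training blocks

# Synthetic correlated "image blocks" (AR(1)-style covariance => compressible)
C = 0.95 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Lc = np.linalg.cholesky(C)
X = Lc @ rng.standard_normal((n, k))             # training blocks (columns)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # CS measurement matrix
Y = Phi @ X                                      # block measurements

# Linear decoder learned by the MMSE criterion: W = Cxy @ inv(Cyy)
W = (X @ Y.T) @ np.linalg.inv(Y @ Y.T)

# Decoding a fresh block costs one matrix-vector product
Xt = Lc @ rng.standard_normal((n, 200))          # fresh test blocks
Xhat = W @ (Phi @ Xt)
err = np.mean((Xhat - Xt) ** 2)
power = np.mean(Xt ** 2)
```

Because the decoder is purely linear, its energy cost is tiny compared with iterative CS solvers, which is the point the abstract makes about real-time, low-energy decoding.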
NASA Astrophysics Data System (ADS)
Kusyk, Janusz; Eskicioglu, Ahmet M.
2005-10-01
Digital watermarking is considered to be a major technology for the protection of multimedia data. Some of the important applications are broadcast monitoring, copyright protection, and access control. In this paper, we present a semi-blind watermarking scheme for embedding a logo in color images using the DFT domain. After computing the DFT of the luminance layer of the cover image, the magnitudes of DFT coefficients are compared and modified. A given watermark is embedded in three frequency bands: low, middle, and high. Our experiments show that the watermarks extracted from the lower frequencies have the best visual quality for low pass filtering, adding Gaussian noise, JPEG compression, resizing, rotation, and scaling, and the watermarks extracted from the higher frequencies have the best visual quality for cropping, intensity adjustment, histogram equalization, and gamma correction. Extractions from the fragmented and translated image are identical to extractions from the unattacked watermarked image. Collusion and rewatermarking attacks do not provide an attacker with useful tools.
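The compare-and-modify idea for DFT magnitudes can be sketched with a simple magnitude-ordering scheme. This is an illustrative toy, not the authors' algorithm: one bit is encoded in the magnitude ordering of a pair of mid-frequency coefficients, with both members of each conjugate-symmetric pair scaled by the same real factor so the inverse transform stays real.

```python
import numpy as np

def set_mag(F, u, v, target):
    """Scale F[u, v] (and its conjugate-symmetric partner) to a target
    magnitude; does nothing if the coefficient is exactly zero."""
    N, M = F.shape
    s = target / max(abs(F[u, v]), 1e-12)
    F[u, v] *= s
    if (u, v) != ((-u) % N, (-v) % M):
        F[(-u) % N, (-v) % M] *= s

def embed(img, bits, pairs, margin=50.0):
    """Encode bit b as |F[p1]| > |F[p2]| (b=1) or < (b=0), with a margin."""
    F = np.fft.fft2(img.astype(float))
    for b, ((u1, v1), (u2, v2)) in zip(bits, pairs):
        m1, m2 = abs(F[u1, v1]), abs(F[u2, v2])
        hi, lo = max(m1, m2) + margin, min(m1, m2)
        if b:
            set_mag(F, u1, v1, hi); set_mag(F, u2, v2, lo)
        else:
            set_mag(F, u1, v1, lo); set_mag(F, u2, v2, hi)
    return np.fft.ifft2(F).real

def extract(img, pairs):
    """Blind extraction: just compare the magnitudes again."""
    F = np.fft.fft2(img.astype(float))
    return [int(abs(F[u1, v1]) > abs(F[u2, v2]))
            for (u1, v1), (u2, v2) in pairs]
```

The margin is what buys robustness: small distortions perturb each magnitude a little, but the enforced ordering gap keeps the comparison, and hence the extracted bit, stable.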
SEMG signal compression based on two-dimensional techniques.
de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino
2016-04-18
Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques before the two-dimensional encoding procedure, in order to provide a suitable data organization whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, which is based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The mentioned encoder was modified in order to efficiently work with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records, acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference [Formula: see text] compression factor figures, for low and high compression factors, respectively.
Besides, regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors; the combination of SbS and HEVC proved to be competitive for high compression factors; and JPEG2000, combined with PDS, provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference [Formula: see text] compression factor. The proposed schemes are effective and, specifically, the modified MMP algorithm can be considered an interesting alternative for isometric signals, with regard to traditional SEMG encoders. Besides, the approach based on off-the-shelf image encoders has the potential of fast implementation and dissemination, given that many embedded systems may already have such encoders available in the underlying hardware/software architecture.
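The core preprocessing idea - reassembling a 1-D SEMG record as a 2-D matrix of segments and reordering segments by similarity so an image codec sees stronger inter-row correlation - can be sketched as below. This is a greedy, invented stand-in for the paper's SbS/PDS techniques, demonstrated on a synthetic quasi-periodic signal rather than real SEMG data.

```python
import numpy as np

def to_matrix(signal, width):
    """Reassemble a 1-D record as a matrix of equal-length segments (rows)."""
    n = (len(signal) // width) * width
    return signal[:n].reshape(-1, width)

def sort_by_similarity(mat):
    """Greedy ordering: repeatedly append the unused row closest (L1) to the
    last chosen row, raising inter-row correlation for a 2-D encoder."""
    rows = list(range(mat.shape[0]))
    order = [rows.pop(0)]
    while rows:
        last = mat[order[-1]]
        nxt = min(rows, key=lambda r: np.abs(mat[r] - last).sum())
        rows.remove(nxt)
        order.append(nxt)
    return mat[order]

def row_diff_energy(mat):
    """Total absolute difference between adjacent rows (smaller is easier
    for vertical intra prediction in an image/video codec)."""
    return np.abs(np.diff(mat.astype(float), axis=0)).sum()
```

A real scheme must also transmit the permutation so the decoder can restore the original segment order; that side information is omitted here.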
Morgan, Karen L. M.; Krohn, M. Dennis; Guy, Kristy K.
2016-04-28
The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 14-15, 2008, the USGS conducted an oblique aerial photographic survey along the Alabama, Mississippi, and Louisiana barrier islands and the north Texas coast, aboard a Beechcraft Super King Air 200 (aircraft) at an altitude of 500 feet (ft) and approximately 1,200 ft offshore. This mission was flown to collect post-Hurricane Ike data for assessing incremental changes in the beach and nearshore area since the last survey, flown on September 9-10, 2008, and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail.
The KML file was created using the photographic navigation files. The KML file can be found in the kml folder.
Morgan, Karen L. M.; Westphal, Karen A.
2016-04-21
The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 2-3, 2012, the USGS conducted an oblique aerial photographic survey along the Alabama, Mississippi, and Louisiana barrier islands aboard a Cessna 172 (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect post-Hurricane Isaac data for assessing incremental changes in the beach and nearshore area since the last survey, flown in September 2008 (central Louisiana barrier islands) and June 2011 (Dauphin Island, Alabama, to Breton Island, Louisiana), and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page.
Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files. These KML files can be found in the kml folder.
Face detection on distorted images using perceptual quality-aware features
NASA Astrophysics Data System (ADS)
Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.
2014-02-01
We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur, or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/˜suriya/DFD/.
An analysis of absorbing image on the Indonesian text by using color matching
NASA Astrophysics Data System (ADS)
Hutagalung, G. A.; Tulus; Iryanto; Lubis, Y. F. A.; Khairani, M.; Suriati
2018-03-01
Messages are inserted into an image character by character, with each character occupying a few pixels. One way to insert a message into an image is to add the ASCII decimal value of a character to the decimal value of a primary color of the image. Messages are composed of letters, numbers, and symbols, and the number and frequency of the letters used differ from word to word and from language to language. In Indonesian, the letter A is the most widely used, and the frequencies of the other letters strongly affect the clarity of a message or text presented in the language. This study aims to determine an image's capacity to absorb a message in Indonesian and the factors that cause this capacity to differ. The data used in this study consist of several images in JPG or JPEG format, obtained from image-drawing software or image-capture hardware at different image sizes. Tests were run on four samples of a color image of size 1200 × 1920.
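The per-character insertion described here can be sketched as a simple LSB scheme (a hypothetical illustration of the general idea, not the authors' exact method; the function names are invented): each character's eight ASCII bits are written into the least significant bits of consecutive color values.

```python
def embed_message(pixels, message):
    """Embed each character's 8 ASCII bits into the least significant
    bits of consecutive color values (one bit per value)."""
    pixels = list(pixels)
    bits = [(ord(ch) >> i) & 1 for ch in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    for i, bit in enumerate(bits):
        pixels[i] = (pixels[i] & ~1) | bit
    return pixels

def extract_message(pixels, length):
    """Recover `length` characters by reassembling the LSBs."""
    chars = []
    for c in range(length):
        value = 0
        for i in range(8):
            value = (value << 1) | (pixels[c * 8 + i] & 1)
        chars.append(chr(value))
    return "".join(chars)
```

Each embedded bit changes a color value by at most 1, so the visual impact is minimal; the capacity is one character per eight color values, which is why image size bounds how much text can be absorbed.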
A Forceful Demonstration by FORS
NASA Astrophysics Data System (ADS)
1998-09-01
New VLT Instrument Provides Impressive Images Following a tight schedule, the ESO Very Large Telescope (VLT) project forges ahead - full operative readiness of the first of the four 8.2-m Unit Telescopes will be reached early next year. On September 15, 1998, another crucial milestone was successfully passed on-time and within budget. Just a few days after having been mounted for the first time at the first 8.2-m VLT Unit Telescope (UT1), the first of a powerful complement of complex scientific instruments, FORS1 (FOcal Reducer and Spectrograph), saw First Light. Right from the beginning, it obtained some excellent astronomical images. This major event now opens a wealth of new opportunities for European Astronomy. FORS - a technological marvel FORS1, with its future twin (FORS2), is the product of one of the most thorough and advanced technological studies ever made of a ground-based astronomical instrument. This unique facility is now mounted at the Cassegrain focus of the VLT UT1. Despite its significant dimensions, 3 x 1.5 metres and 2.3 tonnes, it appears rather small below the giant 53 m² Zerodur main mirror. Profiting from the large mirror area and the excellent optical properties of the UT1, FORS has been specifically designed to investigate the faintest and most remote objects in the universe. This complex VLT instrument will soon allow European astronomers to look beyond current observational horizons. The FORS instruments are "multi-mode instruments" that may be used in several different observation modes. It is, e.g., possible to take images with two different image scales (magnifications) and spectra at different resolutions may be obtained of individual or multiple objects. Thus, FORS may first detect the images of distant galaxies and immediately thereafter obtain recordings of their spectra. This allows for instance the determination of their stellar content and distances.
As one of the most powerful astronomical instruments of its kind, FORS1 is a real workhorse for the study of the distant universe. How FORS was built The FORS project is being carried out under ESO contract by a consortium of three German astronomical institutes, namely the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. When this project is concluded, the participating institutes will have invested about 180 man-years of work. The Heidelberg State Observatory was responsible for directing the project, for designing the entire optical system, for developing the components of the imaging, spectroscopic, and polarimetric optics, and for producing the special computer software needed for handling and analysing the measurements obtained with FORS. Moreover, a telescope simulator was built in the shop of the Heidelberg observatory that made it possible to test all major functions of FORS in Europe, before the instrument was shipped to Paranal. The University Observatory of Göttingen performed the design, the construction and the installation of the entire mechanics of FORS. Most of the high-precision parts, in particular the multislit unit, were manufactured in the observatory's fine-mechanical workshops. The procurement of the huge instrument housings and flanges, the computer analysis for mechanical and thermal stability of the sensitive spectrograph and the construction of the handling, maintenance and aligning equipment as well as testing the numerous opto- and electro-mechanical functions were also under the responsibility of this Observatory. 
The University of Munich had the responsibility for the management of the project, the integration and test in the laboratory of the complete instrument, for design and installation of all electronics and electro-mechanics, and for developing and testing the comprehensive software to control FORS in all its parts completely by computers (filter and grism wheels, shutters, multi-object slit units, masks, all optical components, electro motors, encoders etc.). In addition, detailed computer software was provided to prepare the complex astronomical observations with FORS in advance and to monitor the instrument performance by quality checks of the scientific data accumulated. In return for building FORS for the community of European astrophysicists, the scientists in the three institutions of the FORS Consortium have received a certain amount of Guaranteed Observing Time at the VLT. This time will be used for various research projects concerned, among others, with minor bodies in the outer solar system, stars at late stages of their evolution and the clouds of gas they eject, as well as galaxies and quasars at very large distances, thereby permitting a look-back towards the early epoch of the universe. First tests of FORS1 at the VLT UT1: a great success After careful preparation, the FORS consortium has now started the so-called commissioning of the instrument. This comprises the thorough verification of the specified instrument properties at the telescope, checking the correct functioning under software control from the Paranal control room and, at the end of this process, a demonstration that the instrument fulfills its scientific purpose as planned. While performing these tests, the commissioning team at Paranal were able to obtain images of various astronomical objects, some of which are shown here. Two of these were obtained on the night of "FORS First Light". The photos demonstrate some of the impressive possibilities with this new instrument.
They are based on observations with the FORS standard resolution collimator (field size 6.8 x 6.8 arcmin = 2048 x 2048 pixels; 1 pixel = 0.20 arcsec). Spiral galaxy NGC 1288 ESO PR Photo 37a/98 [Preview - JPEG: 800 x 908 pix - 224k] [High-Res - JPEG: 3000 x 3406 pix - 1.5Mb] A colour image of spiral galaxy NGC 1288, obtained on the night of "FORS First Light". The first photo shows a reproduction of a colour composite image of the beautiful spiral galaxy NGC 1288 in the southern constellation Fornax. PR Photo 37a/98 covers the entire field that was imaged on the 2048 x 2048 pixel CCD camera. It is based on CCD frames in different colours that were taken under good seeing conditions during the night of First Light (15 September 1998). The distance to this galaxy is about 300 million light-years; it recedes with a velocity of 4500 km/sec. Its diameter is about 200,000 light-years. Technical information : Photo 37a/98 is based on a composite of three images taken behind three different filters: B (420 nm; 6 min), V (530 nm; 3 min) and I (800 nm; 3 min) during a period of 0.7 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin. North is left; East is down. Distant cluster of galaxies ESO PR Photo 37b/98 [Preview - JPEG: 657 x 800 pix - 248k] [High-Res - JPEG: 2465 x 3000 pix - 1.9Mb] A peculiar cluster of galaxies in a sky field near the quasar PB5763. ESO PR Photo 37c/98 [Preview - JPEG: 670 x 800 pix - 272k] [High-Res - JPEG: 2512 x 3000 pix - 1.9Mb] Enlargement from PR Photo 37b/98, showing the peculiar cluster of galaxies in more detail. The next photos are reproduced from a 5-min near-infrared exposure, also obtained during the night of First Light of the FORS1 instrument (September 15, 1998). PR Photo 37b/98 shows a sky field near the quasar PB5763 in which is also seen a peculiar, quite distant cluster of galaxies.
It consists of a large number of faint and distant galaxies that have not yet been thoroughly investigated. Many other fainter galaxies are seen in other areas, for instance in the right part of the field. This cluster is a good example of a type of object to which much observing time with FORS will be dedicated, once it enters into regular operation. An enlargement of the same field is reproduced in PR Photo 37c/98. It shows the individual members of this cluster of galaxies in more detail. Note in particular the interesting spindle-shaped galaxy that apparently possesses an equatorial ring. There is also a fine spiral galaxy and many fainter galaxies. They may be dwarf members of the cluster or be located in the background at even larger distances. Technical information : PR Photos 37b/98 (negative) and 37c/98 (positive) are based on a monochrome image taken in 0.8 arcsec seeing through a near-infrared (I; 800 nm) filter. The exposure time was 5 minutes and the image was flat-fielded. The fields shown measure 6.8 x 6.8 arcmin and 2.5 x 2.3 arcmin, respectively. North is to the upper left; East is to the lower left. Spiral galaxy NGC 1232 ESO PR Photo 37d/98 [Preview - JPEG: 800 x 912 pix - 760k] [High-Res - JPEG: 3000 x 3420 pix - 5.7Mb] A colour image of spiral galaxy NGC 1232, obtained on September 21, 1998. ESO PR Photo 37e/98 [Preview - JPEG: 800 x 961 pix - 480k] [High-Res - JPEG: 3000 x 3602 pix - 3.5Mb] Enlargement of central area of PR Photo 37d/98. This spectacular image (Photo 37d/98) of the large spiral galaxy NGC 1232 was obtained on September 21, 1998, during a period of good observing conditions. It is based on three exposures in ultra-violet, blue and red light, respectively. The colours of the different regions are well visible: the central areas (Photo 37e/98) contain older stars of reddish colour, while the spiral arms are populated by young, blue stars and many star-forming regions.
Note the distorted companion galaxy on the left side of Photo 37d/98, shaped like the Greek letter "theta". NGC 1232 is located 20° south of the celestial equator, in the constellation Eridanus (The River). The distance is about 100 million light-years, but the excellent optical quality of the VLT and FORS allows us to see an incredible wealth of details. At the indicated distance, the edge of the field shown in PR Photo 37d/98 corresponds to about 200,000 light-years, or about twice the size of the Milky Way galaxy. Technical information : PR Photos 37d/98 and 37e/98 are based on a composite of three images taken behind three different filters: U (360 nm; 10 min), B (420 nm; 6 min) and R (600 nm; 2 min 30 s) during a period of 0.7 arcsec seeing. The fields shown measure 6.8 x 6.8 arcmin and 1.6 x 1.8 arcmin, respectively. North is up; East is to the left. Note: [1] This Press Release is published jointly (in English and German) by the European Southern Observatory, the Heidelberg State Observatory and the University Observatories of Goettingen and Munich. A German version of this Press Release is also available. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
Watermarking scheme for authentication of compressed image
NASA Astrophysics Data System (ADS)
Hsieh, Tsung-Han; Li, Chang-Tsun; Wang, Shuo
2003-11-01
As images are commonly transmitted or stored in compressed form such as JPEG, to extend the applicability of our previous work, a new scheme for embedding watermark in compressed domain without resorting to cryptography is proposed. In this work, a target image is first DCT transformed and quantised. Then, all the coefficients are implicitly watermarked in order to minimize the risk of being attacked on the unwatermarked coefficients. The watermarking is done through registering/blending the zero-valued coefficients with a binary sequence to create the watermark and involving the unembedded coefficients during the process of embedding the selected coefficients. The second-order neighbors and the block itself are considered in the process of the watermark embedding in order to thwart different attacks such as cover-up, vector quantisation, and transplantation. The experiments demonstrate the capability of the proposed scheme in thwarting local tampering, geometric transformation such as cropping, and common signal operations such as lowpass filtering.
NASA Astrophysics Data System (ADS)
Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.
2015-03-01
Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain, and it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtained both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieved as low a bit rate as the original BTC algorithm.
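The mean-value embedding can be sketched roughly as follows (a simplified illustration, not the paper's scheme: it writes three bits straight into the low bits of each BTC quantization level and omits the dynamic-programming search for the optimal bijective LSB mapping; all names are invented):

```python
import numpy as np

def btc_block(block):
    """Basic BTC: threshold the block at its mean and keep two
    quantization levels (the means of the high and low pixel
    groups) plus the bitmap."""
    m = block.mean()
    bitmap = block >= m
    hi = int(round(block[bitmap].mean())) if bitmap.any() else int(round(m))
    lo = int(round(block[~bitmap].mean())) if (~bitmap).any() else int(round(m))
    return hi, lo, bitmap

def embed_in_level(level, bits):
    """Replace the 3 LSBs of a quantization level with 3 secret bits."""
    payload = (bits[0] << 2) | (bits[1] << 1) | bits[2]
    return (level & ~0b111) | payload

def extract_from_level(level):
    """Read the 3 secret bits back out of a quantization level."""
    v = level & 0b111
    return [(v >> 2) & 1, (v >> 1) & 1, v & 1]
```

Embedding three bits moves each quantization level by at most 7 gray values, which is why this style of scheme can hide a sizeable payload with little visible distortion and no change in the BTC bit rate.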
Forensic steganalysis: determining the stego key in spatial domain steganography
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Soukal, David; Holotyak, Taras
2005-03-01
This paper is an extension of our work on stego key search for JPEG images published at EI SPIE in 2004. We provide a more general theoretical description of the methodology, apply our approach to the spatial domain, and add a method that determines the stego key from multiple images. We show that in the spatial domain the stego key search can be made significantly more efficient by working with the noise component of the image obtained using a denoising filter. The technique is tested on the LSB embedding paradigm and on a special case of embedding by noise adding (the +/-1 embedding). The stego key search can be performed for a wide class of steganographic techniques, even for secret-message sizes well below those detectable using known methods. The proposed strategy may prove useful to forensic analysts and law enforcement.
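The noise-component idea can be illustrated with a minimal residual extraction (a toy sketch with invented names, not the paper's actual filter): subtracting a median-filtered copy from the original leaves a residual in which single-pixel changes such as LSB flips stand out.

```python
import numpy as np

def median3x3(img):
    """Plain 3x3 median filter; border pixels are left unchanged."""
    out = img.astype(float).copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = np.median(img[r - 1:r + 2, c - 1:c + 2])
    return out

def noise_residual(img):
    """Noise component: original minus denoised. LSB flips survive
    in the residual, so a key search can correlate candidate
    embedding paths against it directly instead of against the
    much stronger image content."""
    return img.astype(float) - median3x3(img)
```

Working on the residual strips away smooth image content, which is the intuition behind the efficiency gain reported for the spatial-domain key search.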
Mobile healthcare information management utilizing Cloud Computing and Android OS.
Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias
2010-01-01
Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Temple, Brian Allen; Armstrong, Jerawan Chudoung
This document is a mid-year report on a deliverable for the PYTHON Radiography Analysis Tool (PyRAT) for project LANL12-RS-107J in FY15. The deliverable is number 2 in the work package and is titled “Add the ability to read in more types of image file formats in PyRAT”. At present, PyRAT can read only uncompressed TIFF files. It is planned to expand the file formats that can be read by PyRAT, making it easier to use in more situations. The file formats added include JPEG (.jpeg/.jpg), PNG, and formatted ASCII files.
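Of the formats listed, the formatted-ASCII case is simple enough to sketch in pure Python (a hypothetical illustration; PyRAT's actual ASCII layout is not specified here, and JPEG/PNG reading would in practice be delegated to an imaging library):

```python
def read_ascii_image(text):
    """Parse a formatted ASCII image: one image row per line of
    whitespace-separated pixel values; '#' lines are comments.
    Returns a list of rows of floats."""
    rows = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        rows.append([float(v) for v in line.split()])
    # every row must have the same width to form a rectangular image
    if len({len(r) for r in rows}) > 1:
        raise ValueError("ragged rows in ASCII image")
    return rows
```

A loader like this slots in beside the existing TIFF path by dispatching on file extension, which is the usual way a tool grows support for additional formats.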
Grid-based implementation of XDS-I as part of image-enabled EHR for regional healthcare in Shanghai.
Zhang, Jianguo; Zhang, Kai; Yang, Yuanyuan; Sun, Jianyong; Ling, Tonghui; Wang, Guangrong; Ling, Yun; Peng, Derong
2011-03-01
Due to the rapid growth of Shanghai city to 20 million residents, the balance between healthcare supply and demand has become an important issue. The local government hopes to ameliorate this problem by developing an image-enabled electronic healthcare record (EHR) sharing mechanism between certain hospitals. This system is designed to enable healthcare collaboration and reduce healthcare costs by allowing review of prior examination data obtained at other hospitals. Here, we present a design method and implementation solution for image-enabled EHRs (i-EHRs) and describe the implementation of i-EHRs in four hospitals and one regional healthcare information center, as well as their preliminary operating results. We designed the i-EHRs with service-oriented architecture (SOA) and combined them with grid-based image management and distribution capability, compliant with the IHE XDS-I integration profile. Seven major components and common services are included in the i-EHRs. In order to achieve quick response times for image retrieval in low-bandwidth network environments, we use a JPEG2000 interactive protocol and progressive display technique to transmit images from a Grid Agent as Imaging Source Actor to the PACS workstation as Imaging Consumer Actor. The first phase of pilot testing of our image-enabled EHR was implemented in the Zhabei district of Shanghai for imaging document sharing and collaborative diagnostic purposes. The pilot testing began in October 2009; more than 50 examinations have been transferred daily between the City North Hospital and the three community hospitals for collaborative diagnosis. The feedback from users at all hospitals is very positive, with respondents finding the system easy to use and reporting no interference with their normal radiology diagnostic operation. The i-EHR system can provide event-driven automatic image delivery for collaborative imaging diagnosis across multiple hospitals based on work flow requirements.
This project demonstrated that the grid-based implementation of IHE XDS-I for an image-enabled EHR can scale effectively to serve a regional healthcare solution with collaborative imaging services. The feedback from users at both the community hospitals and the large hospital is very positive.
Iris Recognition: The Consequences of Image Compression
NASA Astrophysics Data System (ADS)
Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig
2010-12-01
Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply and inexpensively, and the data are easy for everyone to access. A problem arises, however, when someone else claims the creation as their property or modifies part of it. Copyright protection therefore becomes necessary; one example is watermarking of digital images. Applying a watermarking technique to digital data, especially images, allows the inserted mark to be totally invisible in the carrier image: the carrier image does not lose quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition (SVD) based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. A trade-off occurs between the invisibility and the robustness of image watermarking. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. Watermark quality at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in the low-frequency band is robust to Gaussian blur, rescaling, and JPEG compression attacks, while embedding in the high-frequency band is robust to Gaussian noise.
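A minimal sketch of SVD embedding in the DCT domain (illustrative only, with invented names; the paper's DWT variant and level-3 decomposition are not reproduced): the host image's DCT singular values are perturbed by alpha-scaled watermark singular values, with a small scaling factor alpha keeping the mark invisible.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def embed_svd_dct(host, watermark, alpha=0.05):
    """SVD-DCT embedding sketch for a square grayscale host:
    DCT the host, add alpha-scaled watermark singular values to
    the host's singular values, and invert the DCT."""
    n = host.shape[0]
    d = dct_matrix(n)
    coeffs = d @ host @ d.T                      # 2-D DCT
    u, s, vt = np.linalg.svd(coeffs)
    s_w = s + alpha * np.linalg.svd(watermark, compute_uv=False)
    return d.T @ (u @ np.diag(s_w) @ vt) @ d     # inverse 2-D DCT
```

Because the DCT is orthonormal, the pixel-domain distortion is bounded by alpha times the watermark's singular values, which is the reason scaling factors below 0.1 preserve visual quality; extraction would invert these steps given the original singular values.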
Rubble-Pile Minor Planet Sylvia and Her Twins
NASA Astrophysics Data System (ADS)
2005-08-01
VLT NACO Instrument Helps Discover First Triple Asteroid One of the thousands of minor planets orbiting the Sun has been found to have its own mini planetary system. Astronomer Franck Marchis (University of California, Berkeley, USA) and his colleagues at the Observatoire de Paris (France) [1] have discovered the first triple asteroid system - two small asteroids orbiting a larger one known since 1866 as 87 Sylvia [2]. "Since double asteroids seem to be common, people have been looking for multiple asteroid systems for a long time," said Marchis. "I couldn't believe we found one." The discovery was made with Yepun, one of ESO's 8.2-m telescopes of the Very Large Telescope Array at Cerro Paranal (Chile), using the outstanding image sharpness provided by the adaptive optics NACO instrument. Via the observatory's proven "Service Observing Mode", Marchis and his colleagues were able to obtain sky images of many asteroids over a six-month period without actually having to travel to Chile. ESO PR Photo 25a/05 Orbits of Twin Moonlets around 87 Sylvia [Preview - JPEG: 400 x 516 pix - 145k] [Normal - JPEG: 800 x 1032 pix - 350k] ESO PR Photo 25b/05 Artist's impression of the triple asteroid system [Preview - JPEG: 420 x 400 pix - 98k] [Normal - JPEG: 849 x 800 pix - 238k] [Full Res - JPEG: 4000 x 3407 pix - 3.7M] [Full Res - TIFF: 4000 x 3000 pix - 36.0M] Caption: ESO PR Photo 25a/05 is a composite image showing the positions of Remus and Romulus around 87 Sylvia on 9 different nights as seen on NACO images. It clearly reveals the orbits of the two moonlets. The inset shows the potato shape of 87 Sylvia. The field of view is 2 arcsec. North is up and East is left. ESO PR Photo 25b/05 is an artist rendering of the triple system: Romulus, Sylvia, and Remus.
ESO Video Clip 03/05 Asteroid Sylvia and Her Twins [Quicktime Movie - 50 sec - 384 x 288 pix - 12.6M] Caption: ESO PR Video Clip 03/05 is an artist rendering of the triple asteroid system showing the large asteroid 87 Sylvia spinning at a rapid rate and surrounded by two smaller asteroids (Remus and Romulus) in orbit around it. This computer animation is also available in broadcast quality to the media (please contact Herbert Zodet). One of these asteroids was 87 Sylvia, which was known to be double since 2001, from observations made by Mike Brown and Jean-Luc Margot with the Keck telescope. The astronomers used NACO to observe Sylvia on 27 occasions, over a two-month period. On each of the images, the known small companion was seen, allowing Marchis and his colleagues to precisely compute its orbit. But on 12 of the images, the astronomers also found a closer and smaller companion. 87 Sylvia is thus not double but triple! Because 87 Sylvia was named after Rhea Sylvia, the mythical mother of the founders of Rome [3], Marchis proposed naming the twin moons after those founders: Romulus and Remus. The International Astronomical Union approved the names. Sylvia's moons are considerably smaller, orbiting in nearly circular orbits and in the same plane and direction. The closest and newly discovered moonlet, orbiting about 710 km from Sylvia, is Remus, a body only 7 km across and circling Sylvia every 33 hours. The second, Romulus, orbits at about 1360 km in 87.6 hours and measures about 18 km across. The asteroid 87 Sylvia is one of the largest known from the asteroid main belt, and is located about 3.5 times further away from the Sun than the Earth, between the orbits of Mars and Jupiter. The wealth of details provided by the NACO images shows that 87 Sylvia is shaped like a lumpy potato, measuring 380 x 260 x 230 km (see ESO PR Photo 25a/05). It is spinning at a rapid rate, once every 5 hours and 11 minutes.
The observations of the moonlets' orbits allow the astronomers to precisely calculate the mass and density of Sylvia. With a density only 20% higher than the density of water, it is likely composed of water ice and rubble from a primordial asteroid. "It could be up to 60 percent empty space," said co-discoverer Daniel Hestroffer (Observatoire de Paris, France). "It is most probably a "rubble-pile" asteroid", Marchis added. These asteroids are loose aggregations of rock, presumably the result of a collision. Two asteroids smacked into each other and got disrupted. The new rubble-pile asteroid formed later by accumulation of large fragments while the moonlets are probably debris left over from the collision that were captured by the newly formed asteroid and eventually settled into orbits around it. "Because of the way they form, we expect to see more multiple asteroid systems like this." Marchis and his colleagues will report their discovery in the August 11 issue of the journal Nature, simultaneously with an announcement that day at the Asteroid Comet Meteor conference in Armação dos Búzios, Rio de Janeiro state, Brazil.
Chandra and the VLT Jointly Investigate the Cosmic X-Ray Background
NASA Astrophysics Data System (ADS)
2001-03-01
Summary Important scientific advances often happen when complementary investigational techniques are brought together. In the present case, X-ray and optical/infrared observations with some of the world's foremost telescopes have provided the crucial information needed to solve a 40-year-old cosmological riddle. Very detailed observations of a small field in the southern sky have recently been carried out, with the space-based NASA Chandra X-Ray Observatory as well as with several ground-based ESO telescopes, including the Very Large Telescope (VLT) at the Paranal Observatory (Chile). Together, they have provided the "deepest" combined view at X-ray and visual/infrared wavelengths ever obtained into the distant Universe. The concerted observational effort has already yielded significant scientific results. This is primarily due to the possibility of 'identifying' most of the X-ray emitting objects detected by the Chandra X-ray Observatory on ground-based optical/infrared images and then determining their nature and distance by means of detailed (spectral) observations with the VLT. In particular, there is now little doubt that the so-called 'X-ray background', a seemingly diffuse short-wave radiation first detected in 1962, in fact originates in a vast number of powerful black holes residing in active nuclei of distant galaxies. Moreover, the present investigation has permitted the identification and study, in some detail, of a prime example of a hitherto little-known type of object, a distant, so-called 'Type II Quasar', in which the central black hole is deeply embedded in surrounding gas and dust. These achievements are just the beginning of a most fruitful collaboration between "space" and "ground". It is yet another impressive demonstration of the rapid progress of modern astrophysics, due to the recent emergence of a new generation of extremely powerful instruments.
PR Photo 09a/01: Images of a small part of the Chandra Deep Field South, obtained with ESO telescopes in three different wavebands. PR Photo 09b/01: A VLT/FORS1 spectrum of a 'Type II Quasar' discovered during this programme. The 'Chandra Deep Field South' and the X-Ray Background ESO PR Photo 09a/01 Caption: PR Photo 09a/01 shows optical/infrared images in three wavebands ('Blue', 'Red', 'Infrared') from ESO telescopes of the Type II Quasar CXOCDFS J033229.9 -275106 (at the centre), one of the distant X-ray sources identified in the Chandra Deep Field South (CDFS) area during the present study. Technical information about these photos is available below. The 'Chandra Deep Field South' (CDFS) is a small sky area in the southern constellation Fornax (The Oven). It measures about 16 arcmin across, or roughly half the diameter of the full moon. There is unusually little gas and dust within the Milky Way in this direction, and observations towards the distant Universe within this field thus profit from a particularly clear view. That is exactly why this sky area was selected by an international team of astronomers [1] to carry out an ultra-deep survey of X-ray sources with the orbiting Chandra X-Ray Observatory. In order to detect the faintest possible sources, NASA's satellite telescope looked in this direction for an unprecedented total of almost 1 million seconds of exposure time (11.5 days). The main scientific goal of this survey is to understand the nature and evolution of the elusive sources that make up the 'X-ray background'. This diffuse glare in the X-ray sky was discovered by Riccardo Giacconi and his collaborators during a pioneering rocket experiment in 1962.
The excellent imaging quality of Chandra (the angular resolution is about 1 arcsec) makes it possible to do extremely deep exposures without encountering problems introduced by the "confusion effect". This refers to the overlapping of images of sources that are seen close to each other in the sky and thus are difficult to study individually. Previous X-ray satellites were not able to obtain sufficiently sharp X-ray images and the earlier deep X-ray surveys therefore suffered severely from this effect. Moreover, Chandra has much better sensitivity at shorter wavelengths (higher energies) which are less affected by obscuration effects. It can therefore better detect faint sources that emit very energetic ("hard") X-rays. X-ray and optical surveys in the Chandra Deep Field South The one-million second Chandra observations were completed in December 2000. In parallel, a group of astronomers based at institutes in Europe and the USA (the CDFS team [1]) has been collecting deep images and extensive spectroscopic data with the VLT during the past 2 years (cf. PR Photo 09a/01). Their aim was to 'identify' the Chandra X-ray sources, i.e., to unveil their nature and measure their distances. For the identification of these sources, the team has also made extensive use of the observations that were carried out as a part of the comprehensive ESO Imaging Survey Project (EIS). More than 300 X-ray sources were detected in the CDFS by Chandra. A significant fraction of these objects shine so faintly in the optical and near-infrared wavebands that only long-exposure observations with the VLT have been able to detect them. During five observing nights with the FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope in October and November 2000, the CDFS team was able to identify and obtain spectra of more than one hundred of the X-ray sources registered by Chandra.
Nature of the X-ray sources The first results from this study have now confirmed that the 'hard' X-ray background is mainly due to Active Galactic Nuclei (AGN). The observations also reveal that a large fraction of them are of comparatively low brightness (referred to as 'low-luminosity AGN'), heavily enshrouded by dust and located at distances of 8,000 - 9,000 million light-years (corresponding to a redshift of about 1 and a look-back time of 57% of the age of the Universe [2]). It is generally believed that all these sources are powered by massive black holes at their centres. Previous X-ray surveys missed most of these objects because they were too faint to be observed by the telescopes then available, in particular at short X-ray wavelengths ('hard X-ray photons') where more radiation from the highly active centres is able to pass through the surrounding, heavily absorbing gas and dust clouds. Other types of well-known X-ray sources, e.g., QSOs ('quasars' = high-luminosity AGN) as well as clusters or groups of galaxies, were also detected during these observations. Studies of all classes of objects in the CDFS are also being carried out by several other European groups. This sky field, already a standard reference in the southern hemisphere, will be the subject of several multi-wavelength investigations for many years to come. A prime example will be the Great Observatories Origins Deep Survey (GOODS) which will be carried out by the NASA SIRTF infrared satellite in 2003. Discovery of a distant Type II Quasar ESO PR Photo 09b/01 Caption: PR Photo 09b/01 displays the optical spectrum of the distant Type II Quasar CXOCDFS J033229.9 -275106 in the Chandra Deep Field South (CDFS), obtained with the FORS1 multi-mode instrument at VLT ANTU. Strong, redshifted emission lines of Hydrogen and ionised Helium, Oxygen, Nitrogen and Carbon are marked.
Technical information about this photo is available below. One particular X-ray source that was identified with the VLT during the present investigation has attracted much attention - it is the discovery of a dust-enshrouded quasar (QSO) at very high redshift (z = 3.7, corresponding to a distance of about 12,000 million light-years; [2]), cf. PR Photo 09a/01 and PR Photo 09b/01. It is the first very distant representative of this elusive class of objects (referred to as 'Type II Quasars') which are believed to account for approximately 90% of the black-hole-powered quasars in the distant Universe. The 'sum' of the identified Chandra X-ray sources in the CDFS was found to match both the intensity and the spectral properties of the observed X-ray background. This important result is a significant step towards the definitive resolution of this long-standing cosmological problem. Naturally, ESO astronomer Piero Rosati and his colleagues are thrilled: "It is clearly the combination of the new and detailed Chandra X-ray observations and the enormous light-gathering power of the VLT that has been instrumental to this success." However, he says, "the identification of the remaining Chandra X-ray sources will be the next challenge for the VLT since they are extremely faint. This is because they are either heavily obscured by dust or because they are extremely distant". More Information This Press Release is issued simultaneously with a NASA Press Release (see also the Harvard site). Some of the first results are described in a research paper ("First Results from the X-ray and Optical Survey of the Chandra Deep Field South"), available on the web at astro-ph/0007240. More information about science results from the Chandra X-Ray Observatory may be found at: http://asc.harvard.edu/. The optical survey of CDFS at ESO with the Wide-Field Imager is described in connection with PR Photos 46a-b/99 ('100,000 galaxies at a glance').
An image of the Chandra Deep Field South is available at the ESO website on the EIS Image Gallery webpage. Notes [1]: The Chandra Team is led by Riccardo Giacconi (Association of Universities Inc. [AUI], Washington, USA) and includes: Piero Rosati, Jacqueline Bergeron, Roberto Gilmozzi, Vincenzo Mainieri, Peter Shaver (European Southern Observatory [ESO]), Paolo Tozzi, Mario Nonino, Stefano Borgani (Osservatorio Astronomico, Trieste, Italy), Guenther Hasinger, Gyula Szokoly (Astrophysical Institute Potsdam [AIP], Germany), Colin Norman, Roberto Gilli, Lisa Kewley, Wei Zheng, Andrew Zirm, JungXian Wang (Johns Hopkins University [JHU], Baltimore, USA), Ken Kellerman (National Radio Astronomy Observatory [NRAO], Charlottesville, USA), Ethan Schreier, Anton Koekemoer and Norman Grogin (Space Telescope Science Institute [STScI], Baltimore, USA). [2] In astronomy, the redshift denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy or quasar gives a direct estimate of the apparent recession velocity as caused by the universal expansion. Since the expansion rate increases with the distance, the velocity is itself a function (the Hubble relation) of the distance to the object. Redshifts of 1 and 3.7 correspond to when the Universe was about 43% and 12% of its present age. The distances indicated in this Press Release depend on the cosmological model chosen and are based on an age of 19,000 million years. Technical information about the photos PR Photo 09a/01 shows B-, R- and I-band images of a 20 x 20 arcsec² area within the CDFS, centred on the Type II Quasar CXOCDFS J033229.9 -275106. They were obtained with the MPG/ESO 2.2-m telescope and the Wide-Field Imager (WFI) at La Silla (B-band; 8 hrs exposure time) and the 8.2-m VLT ANTU telescope with the FORS1 multi-mode instrument at Paranal (R- and I-bands; each 2 hrs exposure).
The measured magnitudes are R=23.5 and I=22.7. The overlaid contours show the associated Chandra X-ray source (smoothed with a sigma = 1 arcsec Gaussian profile). North is up and East is left. The spectrum shown in PR Photo 09b/01 was obtained on November 25, 2000, with VLT ANTU and FORS1 in the multislit mode (150-I grism, 1.2 arcsec slit). The exposure time was 3 hours.
Chassy, Philippe; Lindell, Trym A E; Jones, Jessica A; Paramei, Galina V
2015-01-01
Image aesthetic pleasure (AP) is conjectured to be related to image visual complexity (VC). The aim of the present study was to investigate whether (a) two image attributes, AP and VC, are reflected in eye-movement parameters; and (b) subjective measures of AP and VC are related. Participants (N=26) explored car front images (M=50) while their eye movements were recorded. Following image exposure (10 seconds), its VC and AP were rated. Fixation count was found to positively correlate with the subjective VC and its objective proxy, JPEG compression size, suggesting that this eye-movement parameter can be considered an objective behavioral measure of VC. AP, in comparison, positively correlated with average dwelling time. Subjective measures of AP and VC were related too, following an inverted U-shape function best-fit by a quadratic equation. In addition, AP was found to be modulated by car prestige. Our findings reveal a close relationship between subjective and objective measures of complexity and aesthetic appraisal, which is interpreted within a prototype-based theory framework. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Yu, Shanshan; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2006-09-01
The article proposes a multispectral image compression scheme using nonlinear spectral transform for better colorimetric and spectral reproducibility. In the method, we show the reduction of colorimetric error under a defined viewing illuminant and also that spectral accuracy can be improved simultaneously using a nonlinear spectral transform called Labplus, which takes into account the nonlinearity of human color vision. Moreover, we show that the addition of diagonal matrices to Labplus can further preserve the spectral accuracy and has a generalized effect of improving the colorimetric accuracy under other viewing illuminants than the defined one. Finally, we discuss the usage of the first-order Markov model to form the analysis vectors for the higher order channels in Labplus to reduce the computational complexity. We implement a multispectral image compression system that integrates Labplus with JPEG2000 for high colorimetric and spectral reproducibility. Experimental results for a 16-band multispectral image show the effectiveness of the proposed scheme.
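The paper's Labplus transform is not reproduced here; as a rough, hypothetical illustration of the general idea the abstract describes, the sketch below applies a CIELAB-style cube-root nonlinearity (modelling the nonlinearity of human color vision) to the spectral channels and then decorrelates the nonlinear channels with a linear transform (PCA standing in for the paper's analysis vectors). All names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
spectra = rng.random((1000, 16))  # 16-band reflectances in [0, 1]

def cielab_nonlinearity(t):
    """CIELAB-style f(t): cube root above a small threshold, linear below."""
    delta = 6 / 29
    return np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4 / 29)

nl = cielab_nonlinearity(spectra)

# Linear decorrelation of the nonlinear channels (PCA via SVD); the
# resulting channels would be handed to the entropy coder (e.g. JPEG2000).
nl_c = nl - nl.mean(axis=0)
_, _, Vt = np.linalg.svd(nl_c, full_matrices=False)
coeffs = nl_c @ Vt.T  # decorrelated channels
```

Because the nonlinearity is applied before decorrelation, quantization error in the coded channels maps more evenly onto perceptual color error, which is the motivation the abstract gives for Labplus.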
Casella, Ivan Benaduce; Fukushima, Rodrigo Bono; Marques, Anita Battistini de Azevedo; Cury, Marcus Vinícius Martins; Presti, Calógero
2015-03-01
To compare a new dedicated software program (IMTPC) and Adobe Photoshop for gray-scale median (GSM) analysis of B-mode images of carotid plaques. A series of 42 carotid plaques generating ≥50% diameter stenosis was evaluated by a single observer. The best segment for visualization of internal carotid artery plaque was identified on a single longitudinal view and images were recorded in JPEG format. Plaque analysis was performed with both programs. After normalization of image intensity (blood = 0, adventitial layer = 190), histograms were obtained after manual delineation of the plaque. Results were compared with the nonparametric Wilcoxon signed-rank test and Kendall tau-b correlation analysis. GSM ranged from 0 to 100 with Adobe Photoshop and from 0 to 96 with IMTPC, with a high grade of similarity between image pairs and a highly significant correlation (R = 0.94, p < .0001). The IMTPC software appears suitable for GSM analysis of carotid plaques. © 2014 Wiley Periodicals, Inc.
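The normalization described above (blood = 0, adventitia = 190) amounts to a linear rescaling of grey levels before taking the median of the plaque pixels. A minimal sketch of that computation, with synthetic values standing in for the measured regions of interest (the function names and example numbers are illustrative, not from either program):

```python
import numpy as np

def normalize_grayscale(img, blood_level, adventitia_level):
    """Linearly map grey levels so blood -> 0 and adventitia -> 190."""
    scale = 190.0 / (adventitia_level - blood_level)
    return np.clip((img - blood_level) * scale, 0, 255)

def gray_scale_median(img, plaque_mask):
    """GSM = median grey level of the pixels inside the plaque outline."""
    return float(np.median(img[plaque_mask]))

# Synthetic example: an image where blood measures 20 and adventitia 120.
img = np.array([[20, 40, 60], [80, 100, 120]], dtype=float)
norm = normalize_grayscale(img, blood_level=20, adventitia_level=120)
mask = np.array([[False, True, True], [True, True, False]])  # plaque outline
gsm = gray_scale_median(norm, mask)
```

The normalization step is what makes GSM values comparable across images acquired with different gain settings.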
2-Step scalar deadzone quantization for bitplane image coding.
Auli-Llinas, Francesc
2013-12-01
Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high-dynamic-range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
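USDQ, the quantizer the paper builds on, maps each coefficient to a signed index with a central deadzone twice the step size, and reconstructs non-zero indexes at the midpoint of their interval. A minimal sketch of that baseline (the 2SDQ two-step variant itself is not reproduced here):

```python
import numpy as np

def usdq_quantize(c, step):
    """Uniform scalar deadzone quantization: sign(c) * floor(|c| / step)."""
    return np.sign(c) * np.floor(np.abs(c) / step)

def usdq_dequantize(q, step, delta=0.5):
    """Midpoint reconstruction; index 0 (the deadzone) maps back to 0."""
    return np.sign(q) * (np.abs(q) + delta) * step * (q != 0)

coeffs = np.array([-3.7, -0.4, 0.0, 0.9, 2.6])
step = 1.0
q = usdq_quantize(coeffs, step)
rec = usdq_dequantize(q, step)   # [-3.5, 0.0, 0.0, 0.0, 2.5]
```

In a bitplane coder the indexes `q` are not emitted at once but refined one binary plane at a time, which is what gives the codestream its quality progressivity.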
Practical steganalysis of digital images: state of the art
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2002-04-01
Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.
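One of the first-order-statistics attacks surveyed above exploits the "pairs of values" that LSB replacement tends to equalize: full-capacity embedding drives the histogram bins 2i and 2i+1 towards equal counts. The sketch below is a chi-square-style illustration on synthetic data, not any specific detector from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pov_chi2(pixels):
    """Chi-square statistic over histogram pairs (2i, 2i+1).

    LSB replacement at full capacity equalizes each pair, so stego
    images yield a much smaller statistic than covers."""
    h = np.bincount(pixels, minlength=256).astype(float)
    even, odd = h[0::2], h[1::2]
    expected = (even + odd) / 2.0
    m = expected > 0
    return float(np.sum((even[m] - expected[m]) ** 2 / expected[m]))

# Cover: a skewed (non-flat) histogram, loosely mimicking natural images.
cover = rng.geometric(0.2, size=200_000).clip(0, 255).astype(np.int64)
# Stego: replace every least significant bit with a random message bit.
stego = (cover & ~1) | rng.integers(0, 2, size=cover.size)

chi2_cover = pov_chi2(cover)
chi2_stego = pov_chi2(stego)   # far smaller: the pairs have been equalized
```

The same statistic computed over increasing prefixes of the image also yields a rough estimate of the embedded message length, which is the basic mechanism behind several of the histogram-based methods reviewed.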
A Robust Image Watermarking in the Joint Time-Frequency Domain
NASA Astrophysics Data System (ADS)
Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın
2010-12-01
With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques are proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method that is based on spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET) calculated by the Gabor expansion to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial and spectral domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is presented. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on some commonly used test images under different signal processing attacks like additive noise, Wiener and Median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of the attacks.
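The authors' DET-domain embedder and detector are not reproduced here; the sketch below shows the generic additive spread-spectrum pattern that correlation-based detection relies on (a key-seeded pseudo-random sequence added to selected transform coefficients, detected by normalized correlation). The coefficient vector, key values, and strength `alpha` are all illustrative.

```python
import numpy as np

N, alpha = 4096, 0.5             # coefficients used, embedding strength

def watermark(key):
    """Key-seeded pseudo-random watermark sequence (hypothetical key)."""
    return np.random.default_rng(key).standard_normal(N)

host = np.random.default_rng(0).standard_normal(N)   # host coefficients
marked = host + alpha * watermark(key=42)            # additive embedding

def detect(c, key):
    """Normalized correlation between coefficients and a candidate mark."""
    w = watermark(key)
    return float(c @ w / (np.linalg.norm(c) * np.linalg.norm(w)))

rho_true = detect(marked, key=42)   # well above the detection threshold
rho_false = detect(marked, key=7)   # near zero: wrong key
```

A detection threshold between the two correlation regimes trades off false positives against missed detections; robustness to the attacks listed above comes from spreading the mark over many coefficients.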
'Lyell' Panorama inside Victoria Crater (False Color)
NASA Technical Reports Server (NTRS)
2008-01-01
During four months prior to the fourth anniversary of its landing on Mars, NASA's Mars Exploration Rover Opportunity examined rocks inside an alcove called 'Duck Bay' in the western portion of Victoria Crater. The main body of the crater appears in the upper right of this stereo panorama, with the far side of the crater lying about 800 meters (half a mile) away. Bracketing that part of the view are two promontories on the crater's rim at either side of Duck Bay. They are 'Cape Verde,' about 6 meters (20 feet) tall, on the left, and 'Cabo Frio,' about 15 meters (50 feet) tall, on the right. The rest of the image, other than sky and portions of the rover, is ground within Duck Bay. Opportunity's targets of study during the last quarter of 2007 were rock layers within a band exposed around the interior of the crater, about 6 meters (20 feet) from the rim. Bright rocks within the band are visible in the foreground of the panorama. The rover science team assigned informal names to three subdivisions of the band: 'Steno,' 'Smith,' and 'Lyell.' This view combines many images taken by Opportunity's panoramic camera (Pancam) from the 1,332nd through 1,379th Martian days, or sols, of the mission (Oct. 23 to Dec. 11, 2007). Images taken through Pancam filters centered on wavelengths of 753 nanometers, 535 nanometers and 432 nanometers were mixed to produce this view, which is presented in a false-color stretch to bring out subtle color differences in the scene. Some visible patterns in dark and light tones are the result of combining frames that were affected by dust on the front sapphire window of the rover's camera. Opportunity landed on Jan. 25, 2004, Universal Time, (Jan. 24, Pacific Time) inside a much smaller crater about 6 kilometers (4 miles) north of Victoria Crater, to begin a surface mission designed to last 3 months and drive about 600 meters (0.4 mile).
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
TreeRipper web application: towards a fully automated optical tree recognition software.
Hughes, Joseph
2011-05-20
Relationships between species, genes and genomes have been printed as trees for over a century. Whilst this may have been the best format for exchanging and sharing phylogenetic hypotheses during the 20th century, the worldwide web now provides faster and automated ways of transferring and sharing phylogenetic knowledge. However, novel software is needed to defrost these published phylogenies for the 21st century. TreeRipper is a simple website for the fully automated recognition of multifurcating phylogenetic trees (http://linnaeus.zoology.gla.ac.uk/~jhughes/treeripper/). The program accepts a range of input image formats (PNG, JPG/JPEG or GIF). The underlying command-line C++ program follows a number of cleaning steps to detect lines, remove node labels, patch up broken lines and corners, and detect line edges. The edge contour is then determined to detect the branch lengths, tip label positions and the topology of the tree. Optical Character Recognition (OCR) is used to convert the tip labels into text with the freely available tesseract-ocr software. 32% of images meeting the prerequisites for TreeRipper were successfully recognised; the largest tree had 115 leaves. Despite the diversity of ways phylogenies have been illustrated making the design of fully automated tree recognition software difficult, TreeRipper is a step towards automating the digitization of past phylogenies. We also provide a dataset of 100 tree images and associated tree files for training and/or benchmarking future software. TreeRipper is an open source project licensed under the GNU General Public Licence v3.
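The line-detection step in a pipeline like this can be illustrated on a toy example: in a rectangular cladogram, horizontal branches show up as rows with long runs of ink. The sketch below is a deliberately crude stand-in for TreeRipper's actual cleaning and edge-contour stages, using only a synthetic binary image:

```python
import numpy as np

# Toy page: white background with two horizontal branches joined by a
# vertical connector, mimicking a rectangular cladogram.
page = np.ones((40, 60))
page[10, 5:50] = 0          # upper branch
page[30, 5:50] = 0          # lower branch
page[10:31, 5] = 0          # vertical connector

ink = page < 0.5            # binarize: True where the tree is drawn

# Crude horizontal-line detection: rows with a long run of ink are
# candidate branches (the real pipeline also removes labels, patches
# broken lines and corners, and traces edge contours).
row_ink = ink.sum(axis=1)
branch_rows = np.where(row_ink > 20)[0]   # rows 10 and 30
```

Real scans need the additional cleaning steps the abstract lists, since anti-aliasing, node labels and broken lines all defeat this naive row-count test.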
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.
Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei
2017-07-01
The discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.
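The residual-learning formulation can be stated compactly: the network R is trained to predict the noise v from the noisy input y = x + v, and the clean estimate is y - R(y). As a toy stand-in for the deep CNN (this is not DnCNN, just the same objective with a one-parameter linear "residual predictor" fitted by least squares):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 10_000, 0.5
x = rng.standard_normal(n)           # latent clean signal
v = sigma * rng.standard_normal(n)   # additive white Gaussian noise
y = x + v                            # noisy observation

# Residual learning: fit R(y) = a*y to predict the NOISE, not the image.
a = float(np.linalg.lstsq(y[:, None], v, rcond=None)[0][0])
x_hat = y - a * y                    # clean estimate = input minus residual

mse_noisy = float(np.mean((y - x) ** 2))
mse_denoised = float(np.mean((x_hat - x) ** 2))   # strictly smaller
```

For this linear toy the fitted `a` approaches the Wiener factor sigma²/(1+sigma²); DnCNN replaces the single parameter with a deep CNN trained on (noisy, noise) pairs, which is what allows a single model to cover unknown noise levels.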
Providing Internet Access to High-Resolution Lunar Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
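A WMS client retrieves imagery from a server like this through a GetMap request with standard OGC parameters. The sketch below assembles one such request; the host name, layer name, and coordinate reference system are hypothetical (a real OnMoon request would use its Moon-specific coordinate systems), while the parameter set itself follows the WMS convention:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and layer; the parameters are the standard
# OGC WMS GetMap set.
params = {
    "SERVICE": "WMS",
    "REQUEST": "GetMap",
    "VERSION": "1.1.1",
    "LAYERS": "lunar_mosaic",        # hypothetical layer name
    "SRS": "EPSG:4326",              # a Moon-specific CRS would go here
    "BBOX": "-180,-90,180,90",       # minx, miny, maxx, maxy
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/jpeg",          # or image/png, application/vnd.google-earth.kml+xml
}
url = "http://onmoon.example/wms?" + urlencode(params)
```

The tiled-WMS and super-overlay KML modes mentioned above serve the same layers through fixed tile grids, which lets clients cache aggressively.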
Providing Internet Access to High-Resolution Mars Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
A complete passive blind image copy-move forensics scheme based on compound statistics features.
Peng, Fei; Nie, Yun-ying; Long, Min
2011-10-10
Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines the application circumstances. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. Firstly, a color image is transformed into a grayscale one, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are then chosen as features, and non-overlapping sliding-window operations divide the images into sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
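The four features can be sketched per block; the snippet below uses simplified stand-ins (a 3x3 mean filter instead of the paper's wavelet de-noiser, and plain definitions of entropy and gradient energy), so it illustrates the feature set rather than reproducing the authors' exact formulas:

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean filter (a crude stand-in for the wavelet de-noiser)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def block_features(gray):
    """Features in the spirit of the paper's compound statistics."""
    den = mean_filter3(gray)
    noise = gray - den                       # pattern-noise stand-in
    var_noise = float(noise.var())
    snr = float(den.var() / max(noise.var(), 1e-12))
    hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    nz = hist[hist > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    gy, gx = np.gradient(gray.astype(float))
    avg_energy_gradient = float(np.mean(gx**2 + gy**2))
    return var_noise, snr, entropy, avg_energy_gradient

rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(64, 64)).astype(float)
feats = block_features(gray)
```

Copy-moved regions inherit the features of their source block rather than their location, so correlating each sub-block's feature vector against the whole-image statistics exposes the duplication.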
Shadow of a Large Disc Casts New Light on the Formation of High Mass Stars
NASA Astrophysics Data System (ADS)
2004-05-01
Massive Star Observed that Forms through a Rotating Accretion Disc Summary Based on a large observational effort with different telescopes and instruments, mostly from the European Southern Observatory (ESO), a team of European astronomers [1] has shown that in the M 17 nebula a high mass star [2] forms via accretion through a circumstellar disc, i.e. through the same channel as low-mass stars. To reach this conclusion, the astronomers used very sensitive infrared instruments to penetrate the south-western molecular cloud of M 17 so that faint emission from gas heated up by a cluster of massive stars, partly located behind the molecular cloud, could be detected through the dust. Against the background of this hot region a large opaque silhouette, which resembles a flared disc seen nearly edge-on, is found to be associated with an hour-glass shaped reflection nebula. This system is fully consistent with a newly forming high-mass star surrounded by a huge accretion disc and accompanied by an energetic bipolar mass outflow. The new observations corroborate recent theoretical calculations which claim that stars up to 40 times more massive than the Sun can be formed by the same processes that are active during the formation of stars of smaller masses. PR Photo 15a/04: Stellar cluster and star-forming region M 17 (also available without text inside photo) PR Photo 15b/04: Silhouette disc seen in M 17 PR Photo 15c/04: Rotation of the disc in M 17. PR Photo 15d/04: Bipolar reflection nebula and silhouette disc of a young, massive star in M 17 PR Photo 15e/04: Optical spectrum of the bipolar nebula. PR Video 03/04: Zooming in onto the disc.
The M 17 region ESO PR Photo 15a/04 Caption: PR Photo 15a/04 is a reproduction of a three-colour composite of the sky region of M 17, an H II region excited by a cluster of young, hot stars. A large silhouette disc has been found to the south-west of the cluster centre. The area within the indicated square is shown in more detail in PR Photo 15b/04. The present image was obtained with the ISAAC near-infrared instrument at the 8.2-m VLT ANTU telescope at Paranal. In the left photo, the orientation and the scale at the distance of M 17 (7,000 light-years) are indicated, and the main regions are identified. To the right, this beautiful photo is available without text and in full resolution for reproduction purposes. While many details related to the formation and early evolution of low-mass stars like the Sun are now well understood, the basic scenario that leads to the formation of high-mass stars [2] still remains a mystery. Two possible scenarios for the formation of massive stars are currently being studied. In the first, such stars form by accretion of large amounts of circumstellar material; the infall onto the nascent star varies with time. Another possibility is formation by collision (coalescence) of protostars of intermediate masses, increasing the stellar mass in "jumps". In their continuing quest to add more pieces to the puzzle and help provide an answer to this fundamental question, a team of European astronomers [1] used a battery of telescopes, mostly at two of the European Southern Observatory's Chilean sites of La Silla and Paranal, to study the Omega nebula in unsurpassed detail.
The Omega nebula, also known as the 17th object in the list of famous French astronomer Charles Messier, i.e. Messier 17 or M 17, is one of the most prominent star forming regions in our Galaxy. It is located at a distance of 7,000 light-years. M 17 is extremely young - in astronomical terms - as witnessed by the presence of a cluster of high-mass stars that ionise the surrounding hydrogen gas and create a so-called H II region. The total luminosity of these stars exceeds that of our Sun by almost a factor of ten million. Adjacent to the south-western edge of the H II region, there is a huge cloud of molecular gas which is believed to be a site of ongoing star formation. In order to search for newly forming high-mass stars, Rolf Chini of the Ruhr-Universität Bochum (Germany) and his collaborators have recently investigated the interface between the H II region and the molecular cloud by means of very deep optical and infrared imaging between 0.4 and 2.2 µm. This was done with ISAAC (at 1.25, 1.65 and 2.2 µm) at the ESO Very Large Telescope (VLT) on Cerro Paranal in September 2002 and with EMMI (at 0.45, 0.55, 0.8 µm) at the ESO New Technology Telescope (NTT), La Silla, in July 2003. The image quality was limited by atmospheric turbulence and varied between 0.4 and 0.8 arcsec. The result of these efforts is shown in PR Photo 15a/04. Rolf Chini is pleased: "Our measurements are so sensitive that the south-western molecular cloud of M 17 is penetrated and the faint nebular emission of the H II region, which is partly located behind the molecular cloud, could be detected through the dust." Against the nebular background of the H II region a large opaque silhouette is seen associated with an hourglass shaped reflection nebula. 
The silhouette disc ESO PR Photo 15b/04 ESO PR Photo 15b/04 [Preview - JPEG: 400 x 475 pix - 348k] [Normal - JPEG: 800 x 950 pix - 907k] Caption: PR Photo 15b/04 shows a Ks-band image of the silhouette disc obtained with the NACO Adaptive Optics camera at the 8.2-m VLT YEPUN telescope at Paranal. The displayed field-of-view is outlined in PR Photo 15a/04. White contours delineate the densest part of the disc (inner torus). The visible stars (slightly elongated due to the adaptive optics technique) are embedded within the molecular cloud but are probably unrelated to the disc. The insert shows a deconvolved zoomed version of the central object of about 450 x 240 AU; its major axis is tilted by about 15 degrees against the direction perpendicular to the disc. ESO PR Video Clip 03/04 ESO PR Video Clip 03/04 [QuickTime Video+Audio; 160x120 pix; 18Mb] Caption: PR Video Clip 03/04 zooms in towards the disc, starting from the ISAAC image of the full nebula to the NACO image of the silhouette disc. This shows the remarkable power of the set of instruments on the Very Large Telescope. ESO PR Photo 15c/04 ESO PR Photo 15c/04 [Preview - JPEG: 533 x 400 pix - 80k] [Normal - JPEG: 1067 x 800 pix - 185k] Caption: PR Photo 15c/04 Position-velocity diagram revealing the rotation of the disc. It is derived from a cut along the major axis of the disc, using the IRAM Plateau de Bure interferometer. For comparison, the theoretically expected position-velocity curve for an edge-on disc around a star of 15 solar masses is shown, the outer part of which (radii larger than about 15,400 AU) is in Keplerian rotation while its inner part is modeled as a rigid rotator. To obtain a better view of the structure, the team of astronomers then turned to Adaptive Optics imaging using the NAOS-CONICA instrument on the VLT. 
Adaptive optics is a "wonder-weapon" in ground-based astronomy, allowing astronomers to "neutralize" the image-smearing turbulence of the terrestrial atmosphere (seen by the unaided eye as the twinkling of stars) so that much sharper images can be obtained. With NAOS-CONICA on the VLT, the astronomers were able to obtain images with a resolution close to one tenth of the "seeing", i.e. far sharper than what they could achieve with ISAAC. PR Photo 15b/04 shows the high-resolution near-infrared (2.2 µm) image they obtained. It clearly suggests that the morphology of the silhouette resembles a flared disc, seen nearly edge-on. The disc has a diameter of about 20,000 AU [3] - which is 500 times the distance of the farthest planet in our solar system - and is by far the largest circumstellar disc ever detected. To study the disc structure and properties, the astronomers then turned to radio astronomy and carried out molecular line spectroscopy at the IRAM Plateau de Bure interferometer near Grenoble (France) in April 2003. The astronomers have observed the region in the rotational transitions of the 12CO, 13CO and C18O molecules, and in the adjacent continuum at 3 mm. Velocity resolutions of 0.1 and 0.2 km/s, respectively, were achieved. Dieter Nürnberger, member of the team, sees this as a confirmation: "Our 13CO data obtained with IRAM indicate that the disc/envelope system slowly rotates with its north-western part approaching the observer." Over an extent of 30,800 AU a velocity shift of 1.7 km/s is indeed measured (PR Photo 15c/04). From these observations, adopting standard values for the abundance ratio between the different isotopic carbon monoxide molecules (12CO and 13CO) and for the conversion factor to derive molecular hydrogen densities from the measured CO intensities, the astronomers were also able to derive a conservative lower limit for the disc mass of 110 solar masses. 
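The Keplerian part of the quoted rotation curve invites a back-of-the-envelope consistency check. The sketch below is illustrative only: it takes the 15-solar-mass central star and the roughly 15,400 AU Keplerian radius from the text, plus standard physical constants, and compares the implied edge-on velocity shift with the measured 1.7 km/s.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def keplerian_velocity(mass_solar, radius_au):
    """Circular orbital speed v = sqrt(G M / r), returned in km/s."""
    return math.sqrt(G * mass_solar * M_SUN / (radius_au * AU)) / 1e3

# Values quoted in the text: 15 solar masses, Keplerian beyond ~15,400 AU.
v = keplerian_velocity(15, 15400)

# For a nearly edge-on disc, the line-of-sight shift between the two sides
# is roughly 2v, to be compared with the observed 1.7 km/s over 30,800 AU.
print(f"v = {v:.2f} km/s, full velocity shift ~ {2 * v:.1f} km/s")
```

The resulting shift of just under 2 km/s agrees with the measured 1.7 km/s to within the crudeness of this estimate.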
This is by far the most massive and largest accretion disc ever observed directly around a young massive star. The largest silhouette disc so far is known as 114-426 in Orion and has a diameter of about 1,000 AU; however, its central star is likely a low-mass object rather than a massive protostar. Although there are a small number of candidates for massive young stellar objects (YSOs) some of which are associated with outflows, the largest circumstellar disc hitherto detected around these objects has a diameter of only 130 AU. The bipolar nebula ESO PR Photo 15d/04 ESO PR Photo 15d/04 [Preview - JPEG: 450 x 400 pix - 119k] [Normal - JPEG: 913 x 800 pix - 272k] Caption: PR Photo 15d/04 displays a collection of images of the silhouette disc and, perpendicular to that, the bipolar reflection nebula. These images were obtained in different optical and near-infrared wavebands with different instruments: EMMI at the ESO New Technology Telescope on La Silla (top row; wavelengths 0.45 [B-band], 0.55 [V-band], 0.8 µm [I-band], respectively) and ISAAC at the ESO Very Large Telescope on Cerro Paranal (bottom row; 1.25 [J], 1.65 [H] and 2.2 µm [K]). All images are centred on the central massive protostar and cover an area of 30 x 30 arcsec^2, corresponding to 1.0 x 1.0 light-years^2 at the distance of M 17 (about 7,000 light-years). The obscuration diminishes with increasing wavelength and the background emission of the H II region becomes more and more evident (represented by entirely black colours at K). ESO PR Photo 15e/04 ESO PR Photo 15e/04 [Preview - JPEG: 757 x 400 pix - 136k] [Normal - JPEG: 1513 x 800 pix - 311k] Caption: PR Photo 15e/04 shows an optical spectrum of the bipolar nebula, obtained with EFOSC2 at the ESO 3.6 m telescope and with EMMI at the ESO 3.5 m NTT, both located on La Silla, Chile. A number of identified emission lines, like Hα and the Ca II triplet 849.8, 854.2 and 866.2 nm, are denoted. 
The second morphological structure that is visible on all images throughout the entire spectral range from visible to infrared (0.4 to 2.2 µm) is an hourglass-shaped nebula perpendicular to the plane of the disc (PR Photo 15d/04). This is believed to be an energetic outflow coming from the central massive object. To confirm this, the astronomers went back to ESO's telescopes to perform spectroscopic observations. The optical spectra of the bipolar outflow were measured in April/June 2003 with EFOSC2 at the ESO 3.6 m telescope and with EMMI at the ESO 3.5 m NTT, both located on La Silla, Chile. The observed spectrum (PR Photo 15e/04) is dominated by the emission lines of hydrogen (Hα), calcium (the Ca II triplet 849.8, 854.2 and 866.2 nm), and helium (He I 667.8 nm). In the case of low-mass stars, these lines provide indirect evidence for ongoing accretion from the inner disc onto the star. The Ca II triplet was also shown to be a product of disc accretion for a large sample of both low- and intermediate-mass protostars, known as T Tauri and Herbig Ae/Be stars, respectively. Moreover, the Hα line is extremely broad and shows a deep blue-shifted absorption typically associated with accretion disc-driven outflows. In the spectrum, numerous iron (Fe II) lines were also observed, which are velocity-shifted by ± 120 km/s. This is clear evidence for the existence of shocks with velocities of more than 50 km/s, hence another confirmation of the outflow hypothesis. The central protostar Due to heavy extinction, the nature of an accreting protostellar object, i.e. a star in the process of formation, is usually difficult to infer. Only those that are located in the neighbourhood of their elder brethren, e.g. next to a cluster of hot stars, are accessible (cf. ESO PR 15/03). 
Such already evolved massive stars are a rich source of energetic photons and produce powerful stellar winds of protons (like the "solar wind" but much stronger) which impact on the surrounding interstellar gas and dust clouds. This process may lead to partial evaporation and dispersion of those clouds, thereby "lifting the curtain" and allowing us to look directly at young stars in that region. However, for all high-mass protostellar candidates located away from such a hostile environment there is no direct evidence for a (proto-)stellar central object; likewise, the origin of the luminosity - typically about ten thousand solar luminosities - is unclear and may be due to multiple objects or even embedded clusters. The new disc in M 17 is the only system which exhibits a central object at the expected position of the forming star. The 2.2 µm emission is relatively compact (240 AU x 450 AU) - too small to host a cluster of stars. Assuming that the emission is due solely to the star, the astronomers derive an absolute infrared brightness of about K = -2.5 magnitudes which would correspond to a main sequence star of about 20 solar masses. Given the fact that the accretion process is still active, and that models predict that about 30-50% of the circumstellar material can be accumulated onto the central object, it is likely that in the present case a massive protostar is currently being born. Theoretical calculations show that an initial gas cloud of 60 to 120 solar masses may evolve into a star of approximately 30-40 solar masses while the remaining mass is ejected back into the interstellar medium. The present observations may be the first to show this happening.
First Results from the UT1 Science Verification Programme
NASA Astrophysics Data System (ADS)
1998-11-01
Performance verification is a step which has regularly been employed in space missions to assess and qualify the scientific capabilities of an instrument. Within this framework, it was the goal of the Science Verification programme to submit the VLT Unit Telescope No. 1 (UT1) to the scrutiny that can only be achieved in an actual attempt to produce scientifically valuable results. To this end, an attractive and diversified set of observations was planned in advance to be executed at the VLT. These Science Verification observations at VLT UT1 took place as planned in the period from August 17 to September 1, 1998, cf. the September issue of the ESO Messenger (No. 93, p. 1) and ESO PR 12/98 for all details. Although the meteorological conditions on Paranal were definitely below average, the telescope worked with spectacular efficiency and performance throughout the entire period, and very valuable data were gathered. After completion of all observations, the Science Verification Team started to prepare all of the datasets for the public release that took place on October 2, 1998. The data related to the Hubble Deep Field South (now extensively observed by the Hubble Space Telescope) were made public world-wide, while the release of other data was restricted to ESO member states. With this public release ESO intended to achieve two specific goals: offer to the scientific community an early opportunity to work on valuable VLT data, and in the meantime submit the VLT to the widest possible scrutiny. With the public release, many scientists started to analyse scientifically the VLT data, and the following few examples of research programmes are meant to give a sample of the work that has been carried out on the Science Verification data during the past two months. They represent typical investigations that will be carried out in the future with the VLT. 
Many of these will be directed towards the distant universe, in order to gather insight on the formation and evolution of galaxies, galaxy clusters, and large scale structure. Others will concentrate on more nearby objects, including stars and nebulae in the Milky Way galaxy, and some will attempt to study our own solar system. The following six research programmes were presented at the Press Conference that took place at the ESO Headquarters in Garching (Germany) today. Deep Galaxy Counts and Photometric Redshifts in the HDF-S NIC3 Field The goal of this programme was to verify the capability of the VLT by obtaining the deepest possible ground-based images and using multicolour information to derive the redshifts (and hence the distances) of the faintest galaxies. The space distribution, luminosity and colour of these extreme objects may provide crucial information on the initial phases of the evolution of the universe. The method is known as photometric redshift determination . The VLT Test Camera was used to collect CCD images for a total of 16.6 hours in five spectral filters (U, B, V, R and I) in the so-called HDF-S NIC3 field. This is a small area (about 1 arcmin square) of the southern sky where very deep observations in the infrared bands J, H and K (1.1, 1.6 and 2.2µm, respectively) have been obtained by the Hubble Space Telescope (HST). The observations were combined and analyzed by a team of astronomers at ESO and the Observatory of Rome (Italy). Galaxies were detected in the field down to magnitude ~ 27-28. In most colours, the planned limiting values of the fluxes were successfully reached. ESO PR Photo 48a/98 ESO PR Photo 48a/98 [Preview - JPEG: 800 x 856 pix - 144k] [High-Res - JPEG: 3000 x 3210 pix - 728k] PR Photo 48a/98 shows some examples of photometric redshift determination for faint galaxies in the HDF-S NIC3 field. 
The filled points are the fluxes measured in the five colors observed with the VLT Test Camera (U, B, V, R and I) and in the infrared H spectral band with the NICMOS instrument on the Hubble Space Telescope. The curves constitute the best fit to the points obtained from a library of more than 400,000 synthetic spectra of galaxies at various redshifts (Fontana et al., in preparation). For most of these very faint sources, it is not possible to collect enough photons to measure the recession velocity (the redshift) by spectroscopy, even with an 8-m telescope. The redshifts and the main galaxy properties are then determined by comparing the colour observations with synthetic spectra (see PR Photo 48a/98). This has been done for more than one hundred galaxies in the field brighter than magnitude 26.5. Around 20 are found to be at redshifts larger than 2. The brighter ones are excellent candidates for future detailed studies with the UT1 instruments FORS1 and ISAAC. The scientists involved in this study are: Sandro D'Odorico, Richard Hook, Alvio Renzini, Piero Rosati, Rodolfo Viezzer (ESO) and Adriano Fontana, Emanuele Giallongo, Francesco Poli (Rome Observatory, Italy). A Gravitational Einstein Ring Because the gravitational pull of matter bends the path of light rays, astronomical objects - stars, galaxies and galaxy clusters - can act like lenses, which magnify and severely distort the images of galaxies behind them, producing weird pictures as in a hall of mirrors. In the most extreme case, where the foreground lensing galaxy and the background galaxy are perfectly lined up, the image of the background galaxy is stretched into a ring. Such an image is known as an Einstein ring, because the correct formula for the bending of light was first described by the famous physicist Albert Einstein. 
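The template-fitting behind the photometric redshifts discussed above can be sketched in a few lines. The toy "template" below is invented purely for illustration (the real analysis used a library of more than 400,000 synthetic galaxy spectra); the method shown, a chi-square grid search with a free flux normalisation, is the standard approach.

```python
import numpy as np

redshift_grid = np.linspace(0.0, 4.0, 81)

def template_fluxes(z, bands=5):
    """Hypothetical template: a spectral break marching through the bands with z."""
    x = np.arange(bands)
    pivot = z * bands / 4.0
    return 0.1 + 0.9 / (1.0 + np.exp(-(x - pivot) * 4.0))

def photo_z(observed, errors):
    """Grid search over redshift, fitting a free amplitude before chi-square."""
    chi2 = []
    for z in redshift_grid:
        model = template_fluxes(z)
        # Best-fit linear amplitude for this template.
        a = np.sum(model * observed / errors**2) / np.sum(model**2 / errors**2)
        chi2.append(np.sum(((observed - a * model) / errors) ** 2))
    return redshift_grid[int(np.argmin(chi2))]

obs = template_fluxes(2.2)                      # a noiseless "galaxy" at z = 2.2
z_best = photo_z(obs, errors=np.full(5, 0.05))
print(f"best-fit photometric redshift: {z_best:.2f}")
```

With real, noisy fluxes the chi-square surface also yields a redshift uncertainty, which is why photometric redshifts are quoted as estimates rather than measurements.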
ESO PR Photo 48b/98 ESO PR Photo 48b/98 [Preview - JPEG: 800 x 1106 pix - 952k] [High-Res - JPEG: 3000 x 4148 pix - 5.4Mb] ESO PR Photo 48c/98 ESO PR Photo 48c/98 [Preview - JPEG: 800 x 977 pix - 272k] [High-Res - JPEG: 3000 x 3664 pix - 1.4Mb] PR Photo 48b/98 (left) shows a new, true colour image of an Einstein ring (upper centre of photo), first discovered at ESO in 1995. The ring, which is the stretched image of a galaxy far out in the Universe, stands out clearly in green, and the red galaxy inside the ring is the lens. The discovery image was very faint, but this new picture, taken with the VLT during the Science Verification Programme allows a much clearer view of the ring because of the great light-gathering capacity of the telescope and, not least, because of the superb image quality. In Photo 48c/98 (right), four images illustrate the deduced model of the lensing effect. In the upper left, the observed ring has been enlarged and the image of the lensing galaxy removed by image processing. Below it is a model of the gravitational field (potential) around this galaxy along with the "true" image of the background galaxy shown. At the lower right is the resulting gravitationally magnified and distorted image of the background galaxy, which to the upper right has been de-sharpened to the same image quality as the observed image. The similarity between the two is most convincing. 
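The ring geometry itself is what makes a lens usable as a scale: for a point-like lens the angular radius of the ring fixes the mass enclosed within it. This is the standard textbook Einstein-radius relation, given here for orientation only; the actual analysis shown in PR Photo 48c/98 uses a full model of the galaxy's gravitational potential.

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{ls}}{D_l\,D_s}}
\qquad\Longrightarrow\qquad
M = \frac{c^2}{4G}\,\theta_E^2\,\frac{D_l\,D_s}{D_{ls}},
```

where \(D_l\), \(D_s\) and \(D_{ls}\) are the angular-diameter distances to the lens, to the source, and between lens and source. A larger measured ring therefore directly implies a larger enclosed mass, which is how the "unseen" matter in the lensing galaxy is weighed.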
Gravitational lensing provides a very useful tool with which to study the Universe. As "weighing scales", it provides a measure of the mass within the lensing body, and as a "magnifying glass", it allows us to see details in objects which would otherwise be beyond the reach of current telescopes. This new detailed picture has allowed a much more accurate measurement of the mass of the lensing galaxy, revealing the presence of vast quantities of "unseen" matter, five times more than if just the light from the galaxy is taken into account. This additional material represents some of the Universe's dark matter. The gravitational lens action is also magnifying the background object by a factor of ten, providing an unparalleled view of this very distant galaxy which is in a stage of active star-formation. The scientists involved in this study are: Palle Møller (ESO), Stephen J. Warren (Blackett Laboratory, Imperial College, UK), Paul C. Hewett (Institute of Astronomy, Cambridge, UK) and Geraint F. Lewis (Dept. of Physics and Astronomy, University of Victoria, Canada). An Extremely Red Galaxy One of the main goals of modern cosmology is to understand when and how the galaxies formed. In recent years, many high-redshift (i.e. very distant) galaxies have been found, suggesting that some galaxies were already assembled when the Universe was much younger than now. None of these high-redshift galaxies has ever been found to be a bona-fide red elliptical galaxy. The VLT, however, with its very good capabilities for infrared observations, is an ideal instrument to investigate when and how the red elliptical galaxies formed. The VLT Science Verification images have provided unique multicolour information about an extremely red galaxy that was originally (Treu et al., 1998, A&A Letters, Vol. 340, p. 10) identified on the Hubble Deep Field South (HDF-S) Test Image. This galaxy is shown in PR Photo 48d/98 that is an enlargement from ESO PR Photo 35b/98. 
It was detected on Near-IR images and also on images obtained in the optical part of the spectrum, at the very faint limit of magnitude B ~ 29 in the blue. However, this galaxy has not been detected in the near-ultraviolet band. ESO PR Photo 48d/98 ESO PR Photo 48d/98 [Preview - JPEG: 800 x 594 pix - 264k] [High-Res - JPEG: 3000 x 2229 pix - 1.8Mb] ESO PR Photo 48e/98 ESO PR Photo 48e/98 [Preview - JPEG: 800 x 942 pix - 96k] [High-Res - JPEG: 3000 x 3533 pix - 576k] PR Photo 48d/98 (left) shows the very red galaxy (at the arrow) in the Hubble Deep Field South, discussed here. Photo 48e/98 (right) is the spectrum of a typical elliptical galaxy, redshifted to z = 1.8 and compared with the brightness of the galaxy in different wavebands (crosses), as measured during the VLT SV programme and the Hubble Deep Field South Test Program (the cross to the right). The arrow indicates the upper limit by the VLT SV in the ultraviolet band. It can be seen that these observations are fully consistent with the object being an old, elliptical galaxy at the high redshift of z = 1.8, i.e. at an epoch when the Universe was much younger than now. The new ISAAC instrument at VLT UT1 will be able to obtain an infrared spectrum of this galaxy and thus to affirm or refute this provisional conclusion. The colours measured at the VLT and on the HST Test Image are very well matched by those of an old elliptical galaxy at redshift z ~ 1.8; see Photo 48e/98. All the available evidence is thus consistent with this object being an elliptical galaxy with the highest-known redshift for this galaxy type. A preliminary analysis of Hubble Deep Field South data, just released, seems to support this hypothesis. If these conclusions are confirmed by direct measurement of its spectrum, this galaxy must already have been "old" (i.e. significantly evolved) when the Universe had an age of only about one fifth of its present value. 
A spectroscopic confirmation is still outstanding, but is now possible with the ISAAC instrument at VLT UT1. A positive result would demonstrate that elliptical galaxies can form very early in the history of the Universe. The scientists involved in this study are: Massimo Stiavelli, Tommaso Treu (also Scuola Normale Superiore, Italy), Stefano Casertano, Mark Dickinson, Henry Ferguson, Andrew Fruchter, Crystal Martin (STScI, Baltimore, USA), Piero Rosati and Rodolfo Viezzer (ESO), Marcella Carollo (Johns Hopkins University, Baltimore, USA) and Henry Tieplitz (NASA, Goddard Space Flight Center, Greenbelt, USA). Lyman-alpha Companions and Extended Nebulosity around a Quasar at Redshift z=2.2 In current theories of galaxy formation, luminous galaxies we see today were built up through repeated merging of smaller protogalactic clumps. Quasars, prodigious sources pouring out 100 to 1000 times as much light as an entire galaxy, have been used as markers of galaxy formation activity and have guided astronomers in their hunt for primeval galaxies and large-scale structures at high redshift. A supermassive black hole, swallowing stars, gas and dust, is thought to be the engine powering a quasar and the interaction of the galaxy hosting the black hole with neighboring galaxies is expected to play a key role in "feeding the monster". At intermediate redshift, a large fraction of radio-loud quasars and radio galaxies inhabit rich clusters of galaxies, whereas radio-quiet quasars are rarely found in very rich environments. Furthermore, tidal interaction between quasars and their nearby companions is also the favoured explanation for the presence of large gaseous nebulosities associated with radio-loud quasars and radio galaxies. At high redshift, searches for Lyman-alpha quasar companions and emission-line nebulosities show strong similarities with those seen at lower redshift, although the detection rate is lower. 
ESO PR Photo 48f/98 ESO PR Photo 48f/98 [Preview - JPEG: 800 x 977 pix - 184k] [High-Res - JPEG: 3000 x 3662 pix - 1.1Mb] ESO PR Photo 48g/98 ESO PR Photo 48g/98 [Preview - JPEG: 800 x 966 pix - 328k] [High-Res - JPEG: 3000 x 3621 pix - 1.8Mb] PR Photo 48f/98 (left) is a false-colour reproduction of a B-band image of the field around the radio-weak quasar J2233-606 in the Hubble Deep Field South (HDF-S). Photo 48g/98 (right) represents emission from the same direction at a wavelength that corresponds to Lyman-alpha emission at the redshift (z = 2.2) of the quasar. Three Lyman-alpha candidate companions are indicated with arrows. Note also the extended nebulosity around the quasar. A search for Lyman-alpha companions to the radio-weak quasar J2233-606 in the Hubble Deep Field South (HDF-S) was conducted during the VLT UT1 SV programme in a small field of 1.2 x 1.3 arcmin^2, centered on the quasar. Candidate Lyman-alpha companions were identified by subtracting a broad-band B (blue) image, that traces the galaxy stellar populations, from a narrow-band image, spectrally centered on the redshifted, narrow Lyman-alpha emission line of the quasar (z = 2.2). Three Lyman-alpha candidate companions were discovered at angular distances of 15 to 23 arcsec, or 200 to 300 kpc (650,000 to 1,000,000 light-years) at the distance corresponding to the quasar redshift. The emission lines are very strong, relative to the continuum emission of the galaxies - this could be a consequence of the strong ionizing radiation field of the quasar. These companions to the quasar may trace a large-scale structure which would extend over larger distances beyond the observed, small field. Even more striking is the presence of a very extended nebulosity whose size (120 kpc x 160 kpc) and Lyman-alpha luminosity (3 x 10^44 erg/s) are among the largest observed around radio galaxies and radio-loud quasars, but rarely seen around a radio-weak quasar. 
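The broad-band-minus-narrow-band technique described above can be sketched with synthetic arrays. Everything below is illustrative: the images, the filter scale factor and the "companion" are made up, but the procedure, scaling the continuum image to the narrow-band frame and subtracting so that only line emission survives, is the one the search used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two frames.
continuum = rng.uniform(0.9, 1.1, size=(64, 64))            # broad-band (stellar) light
narrow = 0.25 * continuum + rng.normal(0.0, 0.01, (64, 64)) # continuum through NB filter
narrow[30:34, 30:34] += 5.0                                 # an emission-line "companion"

# Relative filter throughput, estimated here from a source-free corner of sky.
scale = np.median(narrow[:10, :10] / continuum[:10, :10])

# Subtract the scaled continuum: stars cancel, line emitters remain.
line_only = narrow - scale * continuum

# Flag pixels well above the residual noise.
noise = line_only[:10, :10].std()
detection = line_only > 5 * noise
print("candidate line-emitting pixels:", int(detection.sum()))
```

In practice the scale factor comes from the known filter curves or from field stars, and detected blobs are then vetted individually, since foreground emission-line objects can mimic Lyman-alpha companions.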
Tidal interaction between the northern, very nearby companion and the quasar is clearly present: the companion is embedded in the quasar nebulosity, most of its gas has been stripped and lies in a tail westwards of the galaxy. The scientists involved in this study are: Jacqueline Bergeron (ESO), Stefano Cristiani, Stephane Arnouts, Gianni Fasano (Padova, Italy) and Patrick Petitjean (Institut d'Astrophysique, Paris, France). Very Distant Galaxy Clusters During the past years, it has become possible to detect and subsequently study progressively more distant clusters of galaxies. For this research programme, UT1 Science Verification data were used, in combination with data obtained with the SOFI instrument at the ESO New Technology Telescope (NTT) at La Silla, to confirm the existence of two very distant galaxy clusters at redshift z ~ 1, that had originally been detected in the ESO Imaging Survey. This redshift corresponds to an epoch when the age of the Universe was only two-thirds of the present. ESO PR Photo 48h/98 ESO PR Photo 48h/98 [Preview - JPEG: 800 x 917 pix - 896k] [High-Res - JPEG: 3000 x 3438 pix - 6.0Mb] PR Photo 48h/98 is a colour composite that shows the now confirmed cluster EIS0046-2930. The image has been produced by combining the V (green-yellow), R (red) and I (Near-IR) exposures with the Test Camera obtained during the VLT-UT1 Science Verification. The yellow-orange galaxies are the cluster members and the bluer objects are galaxies belonging to the general field population. The cluster center is at the location of the largest (yellow-orange) cluster galaxy to the left of the center of the image. The field measures 90 x 90 arcsec. This was achieved by the detection of a spatial excess density of galaxies, with measured colour equal to that of elliptical galaxies at this redshift, as established by counts in the respective sky areas. The field of one of these clusters is shown in PR Photo 48h/98. 
These new data show that the VLT will most certainly play a major role in the studies of the cluster galaxy population in such distant systems. This will help to shed important new light on the evolution of galaxies. Furthermore, the VLT clearly has the potential to identify and confirm the reality of many more such clusters and thereby to increase considerably the number of known objects. This will be important in order to determine more accurate values of the basic cosmological constants, and thus for our understanding of the evolution of the Universe as a whole. The presentation was made by Lisbeth Fogh Olsen (Copenhagen Observatory, Denmark, and ESO) on behalf of the scientists involved in this study. Icy Planets in the Outer Solar System Observations with large optical telescopes during the past years have begun to cast more light on the still very little known, distant icy planets in the outer solar system. Until November 1998, about 70 of these had been discovered outside the orbit of Neptune (between 30 and 50 AU, or 4,500 to 7,500 million km, from the Sun). They are accordingly referred to as Trans-Neptunian Objects (TNOs). Those found so far are believed to represent the "tip of the iceberg" of a large population of such objects belonging to the so-called Kuiper Belt. This is a roughly disk-shaped region between about 50 and 120 AU (about 7,500 to 18,000 million km) from the Sun, in which remnant bodies from the formation of the solar system are thought to be present. From their measured brightness and distance, it is found that most known TNOs have diameters of the order of a few hundred kilometres. About half of those known move in elongated Pluto-like orbits, the others move somewhat further out in stable, circular orbits. During the two-week Science Verification programme, approximately 200 minutes were spent on a small observing programme aimed at obtaining images of some TNOs in different wavebands (B, V, R and I). 
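The step from measured brightness to an approximate size rests on a standard photometric relation: once the distance is known, the apparent magnitude gives an absolute magnitude H, and for an assumed surface reflectivity (albedo) the diameter follows. The H value and albedo below are illustrative assumptions, not measurements from this programme.

```python
import math

def diameter_km(abs_mag_h, albedo):
    """Standard asteroid photometric relation: D = 1329 km / sqrt(p) * 10^(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_mag_h / 5.0)

# Illustrative only: a TNO with absolute magnitude H ~ 7.5 and a dark,
# comet-like albedo of 4 percent.
d = diameter_km(7.5, 0.04)
print(f"implied diameter ~ {d:.0f} km")
```

The result lands in the "few hundred kilometres" range quoted in the text; note the strong albedo dependence, which is why TNO sizes from optical photometry alone carry factor-of-two uncertainties.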
Since this programme was primarily designed as a back-up to be executed during less favourable atmospheric conditions, some of the observations could not be used. However, images of three faint TNOs were recorded during an excellent series of 1-10 min exposures. From these data, it was possible to measure quite accurate magnitudes (and thus approximate sizes) and to determine their colours. One of them, 1996 TL66, was among the bluest TNOs ever observed. It is believed that this is because its surface has undergone recent transformation, possibly due to collisions with other objects or the breaking-off of small pieces from the surface, in both cases revealing "fresh" layers below. The combination of all available exposures made it possible to look for faint and tenuous atmospheres around these TNOs, but none were found. These results show that it is possible, with little effort and even under quite unfavourable observing conditions, to obtain valuable information with the VLT about icy objects in the outer solar system. Of even greater interest will be future spectroscopic observations with FORS and ISAAC that will make it possible to study the surface composition in some detail, with the potential of providing direct information about (nearly?) pristine material from the early phases of the solar system. The scientists involved in this study are: Olivier Hainaut, Hermann Boehnhardt, Catherine Delahodde and Richard West (ESO) and Karen Meech (Institute of Astronomy, Hawaii, USA). How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
Leveraging Metadata to Create Interactive Images... Today!
NASA Astrophysics Data System (ADS)
Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.
2011-01-01
The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard, creating a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates, and not only make it accessible in a user-friendly form on the website but also embed the same metadata within the image files themselves. Thus, images downloaded from the site carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope allows a tagged image, once downloaded, to be opened and displayed in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications, and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM standard offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
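AVM tags travel inside a standard XMP packet embedded in the image file itself. As a purely illustrative sketch (not the gallery's actual tooling), the following Python locates the raw XMP packet in a JPEG's bytes by scanning for the `x:xmpmeta` element; the sample bytes and the `avm:Spatial.Equinox` field shown are synthetic stand-ins for a real tagged file:

```python
def extract_xmp(jpeg_bytes):
    """Return the XMP packet embedded in a JPEG file's bytes, or None.

    AVM metadata is carried inside a standard XMP packet (stored in an
    APP1 segment); scanning for the x:xmpmeta element is a simple,
    format-tolerant way to locate it.
    """
    start = jpeg_bytes.find(b"<x:xmpmeta")
    if start == -1:
        return None
    end = jpeg_bytes.find(b"</x:xmpmeta>", start)
    if end == -1:
        return None
    return jpeg_bytes[start:end + len(b"</x:xmpmeta>")].decode("utf-8")

# Synthetic fragment of a tagged file carrying one (hypothetical) AVM field.
fake = (b"\xff\xd8\xff\xe1\x00\x40http://ns.adobe.com/xap/1.0/\x00"
        b'<x:xmpmeta xmlns:x="adobe:ns:meta/">'
        b"<avm:Spatial.Equinox>J2000</avm:Spatial.Equinox>"
        b"</x:xmpmeta>\xff\xd9")
packet = extract_xmp(fake)
```

Because the metadata rides inside the file, any downstream tool that understands XMP can recover it without consulting the originating website.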
Alaskan Auroral All-Sky Images on the World Wide Web
NASA Technical Reports Server (NTRS)
Stenbaek-Nielsen, H. C.
1997-01-01
In response to a 1995 NASA SPDS announcement of support for preservation and distribution of important data sets online, the Geophysical Institute, University of Alaska Fairbanks, Alaska, proposed to provide World Wide Web access to the Poker Flat Auroral All-sky Camera images in real time. The Poker auroral all-sky camera is located in the Davis Science Operation Center at Poker Flat Rocket Range about 30 miles north-east of Fairbanks, Alaska, and is connected, through a microwave link, with the Geophysical Institute, where we maintain the database linked to the Web. To protect the low light-level all-sky TV camera from damage due to excessive light, we operate only during the winter season when the moon is down. The camera and data acquisition are now fully computer controlled. Digital images are transmitted each minute to the Web-linked database, where the data are available in a number of different presentations: (1) individual JPEG-compressed images (1-minute resolution); (2) a time-lapse MPEG movie of the stored images; and (3) a meridional plot of the entire night's activity.
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
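The parallel architecture is the paper's contribution, but the underlying full-search block matching it accelerates can be sketched serially in a few lines. In this illustrative Python version (block size, search radius, and the SAD cost are conventional choices, not the paper's parameters), each block of the current frame is matched against every candidate displacement in the reference frame:

```python
import numpy as np

def full_search(ref, cur, block=8, radius=4):
    """Exhaustive (full-search) block-matching motion estimation.

    For every block of the current frame, try all displacements within
    +/- radius pixels in the reference frame and keep the one with the
    smallest sum of absolute differences (SAD).
    """
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int64)
            best, best_sad = (0, 0), None
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block would leave the frame
                    cand = ref[y:y + block, x:x + block].astype(np.int64)
                    sad = int(np.abs(target - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best, best_sad = (dy, dx), sad
            vectors[(by, bx)] = best
    return vectors

# A bright square shifted right by two pixels between frames: the block
# containing it should get motion vector (0, -2) back to the reference.
ref = np.zeros((16, 16), dtype=np.uint8)
ref[9:13, 7:11] = 255
cur = np.zeros((16, 16), dtype=np.uint8)
cur[9:13, 9:13] = 255
mv = full_search(ref, cur)
```

Each block's search is independent of every other block's, which is precisely what makes the problem amenable to the simple parallel architecture the second paper targets.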
Development and evaluation of vision rehabilitation devices.
Luo, Gang; Peli, Eli
2011-01-01
We have developed a range of vision rehabilitation devices and techniques for people with impaired vision due to either central vision loss or severely restricted peripheral visual field. We have conducted evaluation studies with patients to test the utilities of these techniques in an effort to document their advantages as well as their limitations. Here we describe our work on a visual field expander based on a head mounted display (HMD) for tunnel vision, a vision enhancement device for central vision loss, and a frequency domain JPEG/MPEG based image enhancement technique. All the evaluation studies included visual search paradigms that are suitable for conducting indoor controllable experiments.
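The JPEG/MPEG-based enhancement mentioned above operates on the DCT coefficients those codecs already compute. A minimal single-block sketch of the general idea (illustrative only, not the authors' exact filter): amplify the AC coefficients of an 8x8 block while preserving the DC term, so local detail contrast rises without shifting mean luminance:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used by JPEG/MPEG codecs."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    d = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    d[0, :] = np.sqrt(1.0 / n)
    return d

def enhance_block(block, gain=1.5):
    """Amplify the AC (detail) coefficients of one 8x8 block.

    The DC coefficient, which carries the block's mean luminance, is
    left untouched, so overall brightness is preserved.
    """
    d = dct_matrix(8)
    coeff = d @ block @ d.T           # forward 2D DCT
    boosted = coeff * gain
    boosted[0, 0] = coeff[0, 0]       # keep mean luminance
    return d.T @ boosted @ d          # inverse 2D DCT
```

With gain = 1.0 the round trip is the identity; larger gains boost detail for low-vision viewers while the block mean stays fixed.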
NASA Astrophysics Data System (ADS)
Sokolov, R. I.; Abdullin, R. R.
2017-11-01
The use of nonlinear Markov process filtering makes it possible to restore both video stream frames and static photos at the preprocessing stage. This paper presents the results of a comparative study of filtering quality for these two types of images, using a special algorithm, under both Gaussian and non-Gaussian noise. Examples of filter operation at different values of signal-to-noise ratio are presented. A comparative analysis has been performed, and the type of noise that is filtered best has been identified. It is shown that, given the same a priori information about the signal, the quality of the developed algorithm is much better than that of an adaptive filter for the RGB signal. The algorithm also outperforms the median filter when filtering both fluctuation and pulse noise.
Wavelet-based compression of pathological images for telemedicine applications
NASA Astrophysics Data System (ADS)
Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun
2000-05-01
In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who conducted the evaluation of the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed, in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and thereby speed up communications between the remote terminal and the central server of the telemedicine system.
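For readers unfamiliar with the wavelet side of the comparison, a one-level 2D Haar transform (the simplest wavelet) and a PSNR helper can be sketched as follows; the system evaluated in the paper would use longer filters and several decomposition levels, so this is purely illustrative:

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform (input sides must be even).

    Returns the approximation band (LL) plus horizontal, vertical and
    diagonal detail bands; coarsely quantizing or discarding the detail
    bands is the essence of wavelet compression.
    """
    a = np.asarray(img, dtype=float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0    # column-pair averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0    # column-pair differences
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h2, w2 = ll.shape
    lo = np.empty((h2 * 2, w2))
    hi = np.empty((h2 * 2, w2))
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    out = np.empty((h2 * 2, w2 * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def psnr(orig, recon):
    """Peak signal-to-noise ratio in dB for 8-bit imagery."""
    mse = np.mean((np.asarray(orig, float) - np.asarray(recon, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```

Zeroing the detail bands before inverting gives a crude compressed reconstruction, and `psnr` is one of the objective measures of the kind used in such evaluations.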
New Fast Lane towards Discoveries of Clusters of Galaxies Inaugurated
NASA Astrophysics Data System (ADS)
2003-07-01
Space and Ground-Based Telescopes Cooperate to Gain Deep Cosmological Insights Summary Using the ESA XMM-Newton satellite, a team of European and Chilean astronomers [2] has obtained the world's deepest "wide-field" X-ray image of the cosmos to date. This penetrating view, when complemented with observations by some of the largest and most efficient ground-based optical telescopes, including the ESO Very Large Telescope (VLT), has resulted in the discovery of several large clusters of galaxies. These early results from an ambitious research programme are extremely promising and pave the way for a very comprehensive and thorough census of clusters of galaxies at various epochs. Relying on the foremost astronomical technology and with an unequalled observational efficiency, this project is set to provide new insights into the structure and evolution of the distant Universe. PR Photo 19a/03: First image from the XMM-LSS survey. PR Photo 19b/03: Zoom-in on PR Photo 19a/03. PR Photo 19c/03: XMM-Newton contour map of the probable extent of a cluster of galaxies, superimposed upon a CFHT I-band image. PR Photo 19d/03: Velocity distribution in the cluster field shown in PR Photo 19c/03. The universal web Unlike grains of sand on a beach, matter is not uniformly spread throughout the Universe. Instead, it is concentrated into galaxies which themselves congregate into clusters (and even clusters of clusters). These clusters are "strung" throughout the Universe in a web-like structure, cf. ESO PR 11/01. Our Galaxy, the Milky Way, for example, belongs to the so-called Local Group which also comprises "Messier 31", the Andromeda Galaxy. The Local Group contains about 30 galaxies and measures a few million light-years across. Other clusters are much larger. The Coma cluster contains thousands of galaxies and measures more than 20 million light-years. Another well known example is the Virgo cluster, covering no less than 10 degrees on the sky!
Clusters of galaxies are the most massive bound structures in the Universe. They have masses of the order of one thousand million million times the mass of our Sun. Their three-dimensional space distribution and number density change with cosmic time and provide information about the main cosmological parameters in a unique way. About one fifth of the optically invisible mass of a cluster is in the form of a diffuse hot gas in between the galaxies. This gas has a temperature of the order of several tens of million degrees and a density of the order of one atom per liter. At such high temperatures, it produces powerful X-ray emission. Observing this intergalactic gas and not just the individual galaxies is like seeing the buildings of a city in daytime, not just the lighted windows at night. This is why clusters of galaxies are best discovered using X-ray satellites. Using previous X-ray satellites, astronomers have performed limited studies of the large-scale structure of the nearby Universe. However, they so far lacked the instruments to extend the search to large volumes of the distant Universe. The XMM-Newton wide-field observations ESO PR Photo 19a/03 ESO PR Photo 19a/03 [Preview - JPEG: 575 x 400 pix - 52k [Normal - JPEG: 1130 x 800 pix - 420k] ESO PR Photo 19b/03 ESO PR Photo 19b/03 [Preview - JPEG: 400 x 489 pix - 52k [Normal - JPEG: 800 x 978 pix - 464k] Captions: PR Photo 19a/03 is the first image from the XMM-LSS X-Ray survey. It is actually a combination of fourteen separate "pointings" of this space observatory. It represents a region of the sky eight times larger than the full Moon and contains around 25 clusters. The circles represent the X-Ray sources previously known from the 1991 ROSAT All-Sky Survey. PR Photo 19b/03 zooms in on a particularly interesting region of the image shown in ESO PR Photo 19a/03 with a possible cluster identified (in box). Each point on this graph represents a single X-ray photon detected by XMM-Newton. 
Marguerite Pierre (CEA Saclay, France), with a European/Chilean team of astronomers known as the XMM-LSS consortium [2], used the large field-of-view and the high sensitivity of ESA's X-ray observatory XMM-Newton to search for remote clusters of galaxies and map out their distribution in space. They could see back about 7,000 million years to a cosmological era when the Universe was about half its present size and age, when clusters of galaxies were more tightly packed. Tracking down the clusters is a painstaking, multi-step process, requiring both space and ground-based telescopes. Indeed, from X-ray images with XMM, it was possible to select several tens of cluster candidate objects, identified as areas of enhanced X-radiation (cf PR Photo 19b/03). But having candidates is not enough ! They must be confirmed and further studied with ground-based telescopes. In tandem with XMM-Newton, Pierre uses the very-wide-field imager attached to the 4-m Canada-France-Hawaii Telescope, on Mauna Kea, Hawaii, to take an optical snapshot of the same region of space. A tailor-made computer programme then combs the XMM-Newton data looking for concentrations of X-rays that suggest large, extended structures. These are the clusters and represent only about 10% of the detected X-ray sources. The others are mostly distant active galaxies. Back to the Ground ESO PR Photo 19c/03 ESO PR Photo 19c/03 [Preview - JPEG: 400 x 481 pix - 84k [Normal - JPEG: 800 x 961 pix - 1M] ESO PR Photo 19d/03 ESO PR Photo 19d/03 [Preview - JPEG: 400 x 488 pix - 44k [Normal - JPEG: 800 x 976 pix - 520k] Captions: PR Photo 19c/03 represents the XMM-Newton X-ray contour map of the cluster's probable extent superimposed upon the CFHT I-band image. A concentration of distant galaxies is conspicuous, thus confirming the X-ray detection. The symbols indicate the galaxies which have been subject to a subsequent spectroscopic measurement and found to be cluster members (triangles flag emission line galaxies). 
The individual galaxies in the cluster can then be targeted for further observations with ESO's VLT, in order to measure the cluster's distance and locate it in the Universe. Following the X-ray discovery and the optical cluster identification, galaxies in the cluster field shown in ESO PR Photo 19c/03 have been spectroscopically observed at the ESO VLT using the FORS2 instrument in order to determine the cluster redshift [3]. Using two masks, each observed for one hour and each allowing the spectra of 16 emission-line galaxies to be taken at a time, the cluster was found to have a redshift of 0.84, corresponding to a distance of 8,000 million light-years, and a velocity dispersion of 750 km/s. PR Photo 19d/03 shows the measured velocity distribution. This is one of the most distant known clusters of galaxies for which a velocity dispersion has been measured. When the programme finds a cluster, it zooms in on that region and converts the XMM-Newton data into a contour map of X-ray intensity, which is then superimposed upon the CFHT optical image (PR Photo 19c/03). The astronomers use this to check if anything is visible within the area of extended X-ray emission. If something is seen, the work then shifts to one of the world's prime optical/infrared telescopes, the European Southern Observatory's Very Large Telescope (VLT) at Paranal (Chile). By means of the FORS multi-mode instruments, the astronomers zoom in on the individual galaxies in the field, taking spectral measurements that reveal their overall characteristics, in particular their redshift and hence, distance. Cluster galaxies have similar distances, and these measurements ultimately provide, by averaging, the cluster's distance as well as the velocity dispersion in the cluster. The FORS instruments are among the most efficient and versatile for this type of work, taking spectra of, on average, 30 galaxies at a time.
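A velocity dispersion of this kind is derived from the spread of member-galaxy redshifts about the cluster mean. A hedged sketch of the standard calculation follows (the redshift values are illustrative, not the published data):

```python
import numpy as np

C_KM_S = 299_792.458  # speed of light in km/s

def velocity_dispersion(redshifts):
    """Line-of-sight velocity dispersion of cluster member galaxies.

    Peculiar velocities are measured relative to the mean cluster
    redshift, with the usual (1 + z) correction to the cluster rest
    frame; the sample standard deviation is the dispersion.
    """
    z = np.asarray(redshifts, dtype=float)
    z_cl = z.mean()
    v = C_KM_S * (z - z_cl) / (1.0 + z_cl)
    return float(v.std(ddof=1))

# Illustrative member redshifts scattered around z = 0.84.
sigma = velocity_dispersion([0.836, 0.838, 0.840, 0.841, 0.843, 0.845])
```

With a few tens of measured members per cluster, this single number already constrains the cluster's mass.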
The first spectroscopic observations dedicated to the identification and redshift measurement of the XMM-LSS galaxy clusters took place during three nights in the fall of 2002. As of March 2003, there were only 5 known clusters in the literature at such a large redshift with enough spectroscopically measured redshifts to allow an estimate of the velocity dispersion. But the VLT allowed obtaining the dispersion in a distant cluster in only 2 hours, raising great expectations for future work. 700 spectra... Marguerite Pierre is extremely content: "Weather and working conditions at the VLT were optimal. In three nights only, 12 cluster fields were observed, yielding no less than 700 spectra of galaxies. The overall strategy proved very successful. The high observing efficiency of the VLT and FORS supports our plan to perform follow-up studies of large numbers of distant clusters with relatively little observing time. This represents a most substantial increase in efficiency compared to former searches." The present research programme has begun well, clearly demonstrating the feasibility of this new multi-telescope approach and its very high efficiency. And Marguerite Pierre and her colleagues are already seeing the first tantalising results: these seem to confirm that the number of clusters 7,000 million years ago was little different from that of today. This particular behaviour is predicted by models of the Universe that expand forever, driving the galaxy clusters further and further apart. Equally important, this multi-wavelength, multi-telescope approach developed by the XMM-LSS consortium to locate clusters of galaxies also constitutes a decisive next step in the fertile synergy between space and ground-based observatories and is therefore a basic building block of the forthcoming Virtual Observatory. More information This work is based on two papers to be published in the professional astronomy journal, Astronomy and Astrophysics (The XMM-LSS survey : I.
Scientific motivations, design and first results by Marguerite Pierre et al., astro-ph/0305191 and The XMM-LSS survey : II. First high redshift galaxy clusters: relaxed and collapsing systems by Ivan Valtchanov et al., astro-ph/0305192). Dr. M. Pierre will give an invited talk on this subject at the IAU Symposium 216 - Maps of the Cosmos - this Thursday July 17, 2003 during the IAU General Assembly 2003 in Sydney, Australia.
Morgan, Karen L.M.
2016-06-27
The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On October 7–9, 2015, the USGS conducted an oblique aerial photographic survey of the coast from the South Carolina/North Carolina border to Montauk Point, New York (fig. 1), aboard a Cessna 182 (aircraft) at an altitude of 500 feet (ft) and approximately 1,200 ft offshore (fig. 2). This mission was conducted to collect post-Hurricane Joaquin data for assessing incremental changes in the beach and nearshore area since the last surveys, missions flown in September 2014 (Virginia to New York: Morgan, 2015), November 2012 (northern North Carolina: Morgan and others, 2014) and May 2008 (southern North Carolina: unpublished report), and the data can be used to assess future coastal change. The photographs in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page.
Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet.In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. This KML file can be found in the kml folder.
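A KML file of this kind is plain XML and is straightforward to generate from per-photo navigation records. A hypothetical Python sketch (the tuple layout and field names are illustrative, not the USGS schema):

```python
def photos_to_kml(photos):
    """Build a minimal KML document with one placemark per photograph.

    photos: iterable of (name, lon, lat, thumbnail_href) tuples derived
    from the navigation files (field layout is illustrative).
    """
    placemarks = []
    for name, lon, lat, href in photos:
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>{name}</name>\n"
            f'    <description><![CDATA[<img src="{href}" width="200">]]>'
            "</description>\n"
            f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
            "  </Placemark>")
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            "<Document>\n" + "\n".join(placemarks) + "\n</Document>\n</kml>\n")
```

Opening the resulting file in Google Earth places a clickable marker, with an embedded thumbnail, at each aircraft position.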
VLTI First Fringes with Two Auxiliary Telescopes at Paranal
NASA Astrophysics Data System (ADS)
2005-03-01
World's Largest Interferometer with Moving Optical Telescopes on Track Summary The Very Large Telescope Interferometer (VLTI) at Paranal Observatory has just seen another extension of its already impressive capabilities by combining interferometrically the light from two relocatable 1.8-m Auxiliary Telescopes. Following the installation of the first Auxiliary Telescope (AT) in January 2004 (see ESO PR 01/04), the second AT arrived at the VLT platform by the end of 2004. Shortly thereafter, during the night of February 2 to 3, 2005, the two high-tech telescopes teamed up and quickly succeeded in performing interferometric observations. This achievement heralds an era of new scientific discoveries. Both Auxiliary Telescopes will be offered from October 1, 2005 to the community of astronomers for routine observations, together with the MIDI instrument. By the end of 2006, Paranal will be home to four operational ATs that may be placed at 30 different positions and thus be combined in a very large number of ways ("baselines"). This will enable the VLTI to operate with enormous flexibility and, in particular, to obtain extremely detailed (sharp) images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. 
PR Photo 07a/05: Paranal Observing Platform with AT1 and AT2 PR Photo 07b/05: AT1 and AT2 with Open Domes PR Photo 07c/05: Evening at Paranal with AT1 and AT2 PR Photo 07d/05: AT1 and AT2 under the Southern Sky PR Photo 07e/05: First Fringes with AT1 and AT2 PR Video Clip 01/05: Two ATs at Paranal (Extract from ESO Newsreel 15) A Most Advanced Device ESO PR Video 01/05 ESO PR Video 01/05 Two Auxiliary Telescopes at Paranal [QuickTime: 160 x 120 pix - 37Mb - 4:30 min] [QuickTime: 320 x 240 pix - 64Mb - 4:30 min] ESO PR Photo 07a/05 ESO PR Photo 07a/05 [Preview - JPEG: 493 x 400 pix - 44k] [Normal - JPEG: 985 x 800 pix - 727k] [HiRes - JPEG: 5000 x 4060 pix - 13.8M] Captions: ESO PR Video Clip 01/05 is an extract from ESO Video Newsreel 15, released on March 14, 2005. It provides an introduction to the VLT Interferometer (VLTI) and the two Auxiliary Telescopes (ATs) now installed at Paranal. ESO PR Photo 07a/05 shows the impressive ensemble at the summit of Paranal. From left to right: the enclosures of VLT Antu, Kueyen and Melipal, AT1, the VLT Survey Telescope (VST) in the background, AT2 and VLT Yepun. Located at the summit of the 2,600-m high Cerro Paranal in the Atacama Desert (Chile), ESO's Very Large Telescope (VLT) is at the forefront of astronomical technology and is one of the premier facilities in the world for optical and near-infrared observations. The VLT is composed of four 8.2-m Unit Telescopes (Antu, Kueyen, Melipal and Yepun). They have been progressively put into service together with a vast suite of the most advanced astronomical instruments and are operated every night of the year. Contrary to other large astronomical telescopes, the VLT was designed from the beginning with the use of interferometry as a major goal. The VLT Interferometer (VLTI) combines starlight captured by two 8.2-m VLT Unit Telescopes, dramatically increasing the spatial resolution and showing fine details of a large variety of celestial objects.
The VLTI is arguably the world's most advanced optical device of this type. It has already demonstrated its powerful capabilities by addressing several key scientific issues, such as determining the size and the shape of a variety of stars (ESO PR 22/02, PR 14/03 and PR 31/03), measuring distances to stars (ESO PR 25/04), probing the innermost regions of the proto-planetary discs around young stars (ESO PR 27/04) or making the first detection by infrared interferometry of an extragalactic object (ESO PR 17/03). "Little Brothers" ESO PR Photo 07b/05 ESO PR Photo 07b/05 [Preview - JPEG: 597 x 400 pix - 47k] [Normal - JPEG: 1193 x 800 pix - 330k] [HiRes - JPEG: 5000 x 3354 pix - 10.0M] ESO PR Photo 07c/05 ESO PR Photo 07c/05 [Preview - JPEG: 537 x 400 pix - 31k] [Normal - JPEG: 1074 x 800 pix - 555k] [HiRes - JPEG: 3000 x 2235 pix - 6.0M] ESO PR Photo 07d/05 ESO PR Photo 07d/05 [Preview - JPEG: 400 x 550 pix - 60k] [Normal - JPEG: 800 x 1099 pix - 946k] [HiRes - JPEG: 2414 x 3316 pix - 11.0M] Captions: ESO PR Photo 07b/05 shows VLTI Auxiliary Telescopes 1 and 2 (AT1 and AT2) in the early evening light, with the spherical domes opened and ready for observations. In ESO PR Photo 07c/05, the same scene is repeated later in the evening, with three of the large telescope enclosures in the background. This photo and ESO PR Photo 07d/05, a time-exposure showing AT1 and AT2 under the beautiful night sky with the southern Milky Way band, were obtained by ESO staff member Frédéric Gomté. However, most of the time the large telescopes are used for other research purposes. They are therefore only available for interferometric observations during a limited number of nights every year. Thus, in order to exploit the VLTI each night and to achieve the full potential of this unique setup, some other (smaller), dedicated telescopes were included in the overall VLT concept.
These telescopes, known as the VLTI Auxiliary Telescopes (ATs), are mounted on tracks and can be placed at precisely defined "parking" observing positions on the observatory platform. From these positions, their light beams are fed into the same common focal point via a complex system of reflecting mirrors mounted in an underground system of tunnels. The Auxiliary Telescopes are real technological jewels. They are placed in ultra-compact enclosures, complete with all necessary electronics, an air conditioning system and cooling liquid for thermal control, compressed air for enclosure seals, a hydraulic plant for opening the dome shells, etc. Each AT is also fitted with a transporter that lifts the telescope and relocates it from one station to another. It moves around with its own housing on the top of Paranal, almost like a snail. Moreover, these moving ultra-high precision telescopes, each weighing 33 tonnes, fulfill very stringent mechanical stability requirements: "The telescopes are unique in the world", says Bertrand Koehler, the VLTI AT Project Manager. "After being relocated to a new position, the telescope is repositioned to a precision better than one tenth of a millimetre - that is, the size of a human hair! The image of the star is stabilized to better than thirty milli-arcsec - this is how we would see an object of the same size as one of the VLT enclosures on the Moon. Finally, the path followed by the light inside the telescope after bouncing on ten mirrors is stable to better than a few nanometres, which is the size of about one hundred atoms." A World Premiere ESO PR Photo 07e/05 ESO PR Photo 07e/05 "First Fringes" with two ATs [Preview - JPEG: 400 x 559 pix - 61k] [Normal - JPEG: 800 x 1134 pix - 357k] Caption: ESO PR Photo 07e/05 The "First Fringes" obtained with the first two VLTI Auxiliary Telescopes, as seen on the computer screen during the observation. 
The fringe pattern arises when the light beams from the two 1.8-m telescopes are brought together inside the VINCI instrument. The pattern itself contains information about the angular extension of the observed object, here the 6th-magnitude star HD62082. The fringes are acquired by moving a mirror back and forth around the position of equal path length for the two telescopes. One such scan can be seen in the third row window. This pattern results from the raw interferometric signals (the last two rows) after calibration and filtering using the photometric signals (the 4th and 5th rows). The first two rows show the spectrum of the fringe pattern signal. More details about the interpretation of this pattern are given in Appendix A of PR 06/01. The ability to move the ATs around and thus to perform observations with a large number of different telescope configurations ensures a great degree of flexibility, unique for an optical interferometric installation of this size and crucial for its exceptional performance. The ATs may be placed at 30 different positions and thus be combined in a very large number of ways. If the 8.2-m VLT Unit Telescopes are also taken into account, no less than 254 independent pairings of two telescopes ("baselines"), different in length and/or orientation, are available. Moreover, while the largest possible distance between two 8.2-m telescopes (ANTU and YEPUN) is about 130 metres, the maximal distance between two ATs may reach 200 metres. As the achievable image sharpness increases with telescope separation, interferometric observations with the ATs positioned at the extreme positions will therefore yield sharper images than is possible by combining light from the large telescopes alone. All of this will enable the VLTI to obtain exceedingly detailed (sharp) and very complete images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. Auxiliary Telescope no.
1 (AT1) was installed on the observatory's platform in January 2004. Now, one year later, the second of the four to be delivered, has been integrated into the VLTI. The installation period lasted two months and ended around midnight during the night of February 2-3, 2005. With extensive experience from the installation of AT1, the team of engineers and astronomers were able to combine the light from the two Auxiliary Telescopes in a very short time. In fact, following the necessary preparations, it took them only five minutes to adjust this extremely complex optical system and successfully capture the "First Fringes" with the VINCI test instrument! The star which was observed is named HD62082 and is just at the limit of what can be observed with the unaided eye (its visual magnitude is 6.2). The fringes were as clear as ever, and the VLTI control system kept them stable for more than one hour. Four nights later this exercise was repeated successfully with the mid-infrared science instrument MIDI. Fringes on the star Alphard (Alpha Hydrae) were acquired on February 7 at 4:05 local time. For Roberto Gilmozzi, Director of ESO's La Silla Paranal Observatory, "this is a very important new milestone. The introduction of the Auxiliary Telescopes in the development of the VLT Interferometer will bring interferometry out of the specialist experiment and into the domain of common user instrumentation for every astronomer in Europe. Without doubt, it will enormously increase the potentiality of the VLTI." With two more telescopes to be delivered within a year to the Paranal Observatory, ESO cements its position as world-leader in ground-based optical astronomy, providing Europe's scientists with the tools they need to stay at the forefront in this exciting science. The VLT Interferometer will, for example, allow astronomers to study details on the surface of stars or to probe proto-planetary discs and other objects for which ultra-high precision imaging is required. 
It is premature to speculate on what the Very Large Telescope Interferometer will soon discover, but it is easy to imagine that there may be quite some surprises in store for all of us.
NASA Astrophysics Data System (ADS)
Yao, Juncai; Liu, Guizhong
2017-03-01
In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by combining the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process reconstructs the decompressed color image. Simulations were carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal to noise ratio (PSNR) at a comparable compression ratio are increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while ensuring encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
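The CSF-derived quantization matrices themselves are not reproduced in the abstract; as a stand-in, the sketch below uses the standard JPEG luminance table (Annex K of the JPEG specification) and shows the quantize/dequantize step that any such matrix plugs into. The per-coefficient reconstruction error is bounded by half the corresponding matrix entry, which is why matrices weighted by visual sensitivity can hide larger errors where the eye is least sensitive:

```python
import numpy as np

# Standard JPEG luminance quantization table; an HVS/CSF-derived matrix
# like the paper's would simply replace this array.
Q_BASE = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

def quantize(coeff, q):
    """Quantize one 8x8 block of DCT coefficients to integer levels."""
    return np.round(coeff / q)

def dequantize(levels, q):
    """Map integer levels back to approximate coefficient values."""
    return levels * q
```

The quantized integer levels are what the Huffman stage then encodes; coarser entries in the matrix yield more zeros and hence a higher compression ratio.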
Interhospital network system using the worldwide web and the common gateway interface.
Oka, A; Harima, Y; Nakano, Y; Tanaka, Y; Watanabe, A; Kihara, H; Sawada, S
1999-05-01
We constructed an interhospital network system using the worldwide web (WWW) and the Common Gateway Interface (CGI). Original clinical images are digitized and stored as a database for educational and research purposes. Personal computers (PCs) are available for data treatment and browsing. Our system is simple, as digitized images are stored on a Unix server machine. Images of important and interesting clinical cases are selected and registered into the image database using CGI. The main image format is 8- or 12-bit Joint Photographic Experts Group (JPEG). Original clinical images are finally stored on CD-ROM using a CD recorder. The image viewer can browse all of the images for one case at once as thumbnail pictures; image quality can be selected depending on the user's purpose. Using the network system, clinical images of interesting cases can be rapidly transmitted to and discussed with other related hospitals. Data transmission from related hospitals takes 1 to 2 minutes per 500 Kbyte of data. More distant hospitals (e.g., Rakusai Hospital, Kyoto) take about 1 minute more. The mean number of accesses to our image database in a recent 3-month period was 470. There are about 200 cases in total in our image database, acquired over the past 2 years. Our system is useful for communication and image treatment between hospitals, and we describe the elements of our system and image database.
The Atacama Large Millimeter Array (ALMA)
NASA Astrophysics Data System (ADS)
1999-06-01
The Atacama Large Millimeter Array (ALMA) is the new name [2] for a giant millimeter-wavelength telescope project. As described in the accompanying joint press release by ESO and the U.S. National Science Foundation , the present design and development phase is now a Europe-U.S. collaboration, and may soon include Japan. ALMA may become the largest ground-based astronomy project of the next decade after VLT/VLTI, and one of the major new facilities for world astronomy. ALMA will make it possible to study the origins of galaxies, stars and planets. As presently envisaged, ALMA will be comprised of up to 64 12-meter diameter antennas distributed over an area 10 km across. ESO PR Photo 24a/99 shows an artist's concept of a portion of the array in a compact configuration. ESO PR Video Clip 03/99 illustrates how all the antennas will move in unison to point to a single astronomical object and follow it as it traverses the sky. In this way the combined telescope will produce astronomical images of great sharpness and sensitivity [3]. An exceptional site For such observations to be possible the atmosphere above the telescope must be transparent at millimeter and submillimeter wavelengths. This requires a site that is high and dry, and a high plateau in the Atacama desert of Chile, probably the world's driest, is ideal - the next best thing to outer space for these observations. ESO PR Photo 24b/99 shows the location of the chosen site at Chajnantor, at 5000 meters altitude and 60 kilometers east of the village of San Pedro de Atacama, as seen from the Space Shuttle during a servicing mission of the Hubble Space Telescope. ESO PR Photo 24c/99 and ESO PR Photo 24d/99 show a satellite image of the immediate vicinity and the site marked on a map of northern Chile. ALMA will be the highest continuously operated observatory in the world. The stark nature of this extreme site is well illustrated by the panoramic view in ESO PR Photo 24e/99. 
High sensitivity and sharp images ALMA will be extremely sensitive to radiation at millimeter and submillimeter wavelengths. The large number of antennas gives a total collecting area of over 7000 square meters, larger than a football field. At the same time, the shape of the surface of each antenna must be extremely precise under all conditions; the overall accuracy over the entire 12-m diameter must be better than 0.025 millimeters (25 µm), or one-third of the diameter of a human hair. The combination of large collecting area and high precision results in extremely high sensitivity to faint cosmic signals. The telescope must also be able to resolve the fine details of the objects it detects. In order to do this at millimeter wavelengths the effective diameter of the overall telescope must be very large - about 10 km. As it is impossible to build a single antenna with this diameter, an array of antennas is used instead, with the outermost antennas being 10 km apart. By combining the signals from all antennas together in a large central computer, it is possible to synthesize the effect of a single dish 10 km across. The resulting angular resolution is about 10 milli-arcseconds, less than one-thousandth the angular size of Saturn. Exciting research perspectives The scientific case for this revolutionary telescope is overwhelming. ALMA will make it possible to witness the formation of the earliest and most distant galaxies. It will also look deep into the dust-obscured regions where stars are born, to examine the details of star and planet formation. But ALMA will go far beyond these main science drivers, and will have a major impact on virtually all areas of astronomy. It will be a millimeter-wave counterpart to the most powerful optical/infrared telescopes such as ESO's Very Large Telescope (VLT) and the Hubble Space Telescope, with the additional advantage of being unhindered by cosmic dust opacity. 
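The quoted resolution of about 10 milli-arcseconds follows from the interferometric diffraction limit θ ≈ λ/B. A quick check, assuming a 0.5 mm observing wavelength on the 10 km maximum baseline (both figures from the text above):

```python
import math

def angular_resolution_mas(wavelength_m, baseline_m):
    """Diffraction-limited resolution theta ~ lambda / B, in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180.0 / math.pi) * 3600.0 * 1000.0  # radians -> mas

# 0.5 mm (submillimeter) wavelength on a 10 km baseline: ~10.3 mas
resolution = angular_resolution_mas(0.5e-3, 10e3)
```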
The first galaxies in the Universe are expected to become rapidly enshrouded in the dust produced by the first stars. The dust can dim the galaxies at optical wavelengths, but the same dust radiates brightly at longer wavelengths. In addition, the expansion of the Universe causes the radiation from distant galaxies to be shifted to longer wavelengths. For both reasons, the earliest galaxies at the epoch of first light can be found with ALMA, and the subsequent evolution of galaxies can be mapped over cosmic time. ALMA will be of great importance for our understanding of the origins of stars and planetary systems. Stellar nurseries are completely obscured at optical wavelengths by dense "cocoons" of dust and gas, but ALMA can probe deep into these regions and study the fundamental processes by which stars are assembled. Moreover, it can observe the major reservoirs of biogenic elements (carbon, oxygen, nitrogen) and follow their incorporation into new planetary systems. A particularly exciting prospect for ALMA is to use its exceptionally sharp images to obtain evidence for planet formation by the presence of gaps in dusty disks around young stars, cleared by large bodies coalescing around the stars. Equally fundamental are observations of the dying gasps of stars at the other end of the stellar lifecycle, when they are often surrounded by shells of molecules and dust enriched in heavy elements produced by the nuclear fires now slowly dying. ALMA will offer exciting new views of our solar system. Studies of the molecular content of planetary atmospheres with ALMA's high resolving power will provide detailed weather maps of Mars, Jupiter, and the other planets and even their satellites. Studies of comets with ALMA will be particularly interesting. The molecular ices of these visitors from the outer reaches of the solar system have a composition that is preserved from ages when the solar system was forming. 
They evaporate when the comet comes close to the sun, and studies of the resulting gases with ALMA will allow accurate analysis of the chemistry of the presolar nebula. The road ahead The three-year design and development phase of the project is now underway as a collaboration between Europe and the U.S., and Japan may also join in this effort. Assuming the construction phase begins about two years from now, limited operations of the array may begin in 2005 and the full array may become operational by 2009. Notes [1] Press Releases about this event have also been issued by some of the other organisations participating in this project: * CNRS (in French) * MPG (in German) * NOVA (in Dutch) * NRAO * NSF (ASCII and HTML versions) * PPARC [2] "ALMA" means "soul" in Spanish. [3] Additional information about ALMA is available on the web: * Articles in the ESO Messenger - "The Large Southern Array" (March 1998), "European Site Testing at Chajnantor" (December 1998) and "The ALMA Project" (June 1999), cf. http://www.eso.org/gen-fac/pubs/messenger/ * ALMA website at ESO at http://www.eso.org/projects/alma/ * ALMA website at the U.S. National Radio Astronomy Observatory (NRAO) at http://www.mma.nrao.edu/ * ALMA website in The Netherlands about the detectors at http://www.sron.rug.nl/alma/ ALMA/Chajnantor Video Clip and Photos ESO PR Video Clip 03/99 [MPEG-version] ESO PR Video Clip 03/99 (2450 frames/1:38 min) [MPEG Video; 160x120 pix; 2.1Mb] [MPEG Video; 320x240 pix; 10.0Mb] [RealMedia; streaming; 700k] [RealMedia; streaming; 2.3M] About ESO Video Clip 03/99 : This video clip about the ALMA project contains two sequences. The first shows a panoramic scan of the Chajnantor plain from approx. north-east to north-west. The Chajnantor mountain passes through the field-of-view and the perfect cone of the Licancabur volcano (5900 m) on the Bolivian border is seen at the end (compare also with ESO PR 24e/99 below). 
The second is a 52-sec animation with a change of viewing perspective of the array and during which the antennas move in unison. For convenience, the clip is available in four versions: two MPEG files of different sizes and two streamer-versions of different quality that require RealPlayer software. There is no audio. Note that ESO Video News Reel No. 5 with more related scenes and in professional format with complete shot list is also available. ESO PR Photo 24b/99 ESO PR Photo 24b/99 [Preview - JPEG: 400 x 446 pix - 184k] [Normal - JPEG: 800 x 892 pix - 588k] [High-Res - JPEG: 3000 x 3345 pix - 5.4M] Caption to ESO PR Photo 24b/99 : View of Northern Chile, as seen from the NASA Space Shuttle during a servicing mission to the Hubble Space Telescope (partly visible to the left). The Atacama Desert, site of the ESO VLT at Paranal Observatory and the proposed location for ALMA at Chajnantor, is seen from North (foreground) to South. The two sites are only a few hundred km distant from each other. Few clouds are seen in this extremely dry area, due to the influence of the cold Humboldt Stream along the Chilean Pacific coast (right) and the high Andes mountains (left) that act as a barrier. Photo courtesy ESA astronaut Claude Nicollier. ESO PR Photo 24c/99 ESO PR Photo 24c/99 [Preview - JPEG: 400 x 318 pix - 212k] [Normal - JPEG: 800 x 635 pix - 700k] [High-Res - JPEG: 3000 x 2382 pix - 5.9M] Caption to ESO PR Photo 24c/99 : This satellite image of the Chajnantor area was produced in 1998 at Cornell University (USA), by Jennifer Yu, Jeremy Darling and Riccardo Giovanelli, using the Thematic Mapper data base maintained at the Geology Department laboratory directed by Bryan Isacks. It is a composite of three exposures in spectral bands at 1.6 µm (rendered as red), 1.0 µm (green) and 0.5 µm (blue). The horizontal resolution of the false-colour image is about 30 meters. North is at the top of the photo. 
ESO PR Photo 24d/99 ESO PR Photo 24d/99 [Preview - JPEG: 400 x 381 pix - 108k] [Normal - JPEG: 800 x 762 pix - 240k] [High-Res - JPEG: 2300 x 2191 pix - 984k] Caption to ESO PR Photo 24d/99 : Geographical map with the sites of the VLT and ALMA indicated. ESO PR Photo 24e/99 ESO PR Photo 24e/99 [Preview - JPEG: 400 x 238 pix - 93k] [Normal - JPEG: 800 x 475 pix - 279k] [High-Res - JPEG: 2862 x 1701 pix - 4.2M] Caption to ESO PR Photo 24e/99 : Panoramic view of the proposed site for ALMA at Chajnantor. This high-altitude plain (elevation 5000 m) in the Chilean Andes mountains is an ideal site for ALMA. In this view towards the north, the Chajnantor mountain (5600 m) is in the foreground, left of the centre. The perfect cone of the Licancabur volcano (5900 m) on the Bolivian border is in the background further to the left. This image is a wide-angle composite (140° x 70°) of three photos (Hasselblad 6x6 with SWC 1:4.5/38 mm Biogon), obtained in December 1998. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
Segmentation-driven compound document coding based on H.264/AVC-INTRA.
Zaghetto, Alexandre; de Queiroz, Ricardo L
2007-07-01
In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require the use of multiple compressors, we apply a single compressor for both text and pictures. For that, distortion is taken into account differently between text and picture regions. Our approach is to use a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of a segmentation-driven quantizer adaptation method applied to compress documents. Our reconstructed images have better text sharpness compared to straight unadapted coding, with negligible visual loss in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
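The quantization-parameter adaptation described above amounts to building a per-macroblock QP map from a text/picture segmentation mask. The offsets below are illustrative assumptions, not the values used by the authors:

```python
import numpy as np

def adapt_qp(seg_mask, base_qp=30, text_delta=-8, pic_delta=+4):
    """Per-macroblock QP map: seg_mask is True where a 16x16 macroblock
    was classified as text. Bits are diverted to text by lowering its QP
    (finer quantization) and raising the QP of pictorial regions."""
    qp = np.full(seg_mask.shape, base_qp, dtype=int)
    qp[seg_mask] += text_delta
    qp[~seg_mask] += pic_delta
    return np.clip(qp, 0, 51)  # keep within the valid H.264 QP range
```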
IfA Catalogs of Solar Data Products
NASA Astrophysics Data System (ADS)
Habbal, Shadia R.; Scholl, I.; Morgan, H.
2009-05-01
This paper presents a new set of online catalogs of solar data products. The IfA Catalogs of Solar Data Products were developed to enhance the scientific output of coronal images acquired from ground and space, starting with the SoHO era. Image processing tools have played a significant role in the production of these catalogs [Morgan et al. 2006, 2008, Scholl and Habbal 2008]. Two catalogs are currently available at http://alshamess.ifa.hawaii.edu/ : 1) Catalog of daily coronal images: One coronal image per day from EIT, MLSO and LASCO/C2 and C3 have been processed using the Normalizing Radial-Graded-Filter (NRGF) image processing tool. These images are available individually or as composite images. 2) Catalog of LASCO data: The whole LASCO dataset has been re-processed using the same method. The user can search files by dates and instruments, and images can be retrieved as JPEG or FITS files. An option to make on-line GIF movies from selected images is also available. In addition, the LASCO data set can be searched from existing CME catalogs (CDAW and Cactus). By browsing one of the two CME catalogs, the user can refine the query and access LASCO data covering the time frame of a CME. The catalogs will be continually updated as more data become publicly available.
Escott, Edward J; Rubinstein, David
2004-01-01
It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
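As a modern illustration of the DICOM-to-JPEG conversion step discussed in this article (the windowing arithmetic, not any of the reviewed Macintosh programs), a 12-bit DICOM pixel array can be windowed and rescaled to 8 bits before JPEG export:

```python
import numpy as np

def window_to_uint8(pixels, center, width):
    """Apply a DICOM display window (center/width) and rescale to
    8-bit grayscale suitable for JPEG export."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = np.clip((pixels - lo) / (hi - lo), 0.0, 1.0)
    return (out * 255.0 + 0.5).astype(np.uint8)

# With the pydicom and Pillow libraries installed, the full conversion
# would look roughly like this (file name hypothetical):
#   import pydicom
#   from PIL import Image
#   ds = pydicom.dcmread("study.dcm")
#   img = window_to_uint8(ds.pixel_array, ds.WindowCenter, ds.WindowWidth)
#   Image.fromarray(img).save("study.jpg", quality=90)
```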
Exploring the feasibility of traditional image querying tasks for industrial radiographs
NASA Astrophysics Data System (ADS)
Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.
2015-08-01
Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.
Wu, Xiaolin; Zhang, Xiangjun; Wang, Xiaohan
2009-03-01
Recently, many researchers have started to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform down-sampling in image space, made adaptive by spatially varying, directional low-pass prefiltering. The resulting down-sampled prefiltered image remains a conventional square sample grid and, thus, can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR measure at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.
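The prefilter-then-decimate front end of such a scheme can be sketched as below. Note the simplification: CADU uses spatially varying directional prefilters and an autoregressive-model upconversion, whereas this stand-in applies one fixed isotropic low-pass kernel before 2x decimation.

```python
import numpy as np

def prefilter_and_downsample(img):
    """2x down-sampling after a simple separable [1, 2, 1]/4 low-pass
    prefilter. (The CADU paper uses spatially varying *directional*
    prefilters; this fixed isotropic kernel is only a stand-in.)"""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    pad = np.pad(img, 1, mode="edge")
    # Separable convolution: filter along columns, then along rows.
    tmp = k[0] * pad[:, :-2] + k[1] * pad[:, 1:-1] + k[2] * pad[:, 2:]
    out = k[0] * tmp[:-2, :] + k[1] * tmp[1:-1, :] + k[2] * tmp[2:, :]
    return out[::2, ::2]  # uniform 2x decimation on a square grid
```

The low-resolution output remains an ordinary square sample grid, which is why it can be fed unchanged to any standard image coder.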
A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System
Wu, Xiangjun; Li, Yang; Kurths, Jürgen
2015-01-01
The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the position of the pixels in the whole image is shuffled. In order to generate initial conditions and parameters of the two chaotic systems, a 280-bit long external secret key is employed. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are carried out to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations, such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. Corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks. PMID:25826602
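The permutation-diffusion structure mentioned in this abstract can be illustrated with a deliberately simplified sketch. A plain logistic map stands in for the paper's CML and fractional-order system, and a short float key stands in for the 280-bit key; this toy version is not secure, it only shows the two stages.

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99):
    """Chaotic keystream from the logistic map x <- mu*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.3456789):
    flat = img.flatten().astype(np.uint8)
    xs = logistic_sequence(key, flat.size)
    perm = np.argsort(xs)                  # permutation stage: shuffle pixels
    shuffled = flat[perm]
    stream = (xs * 256).astype(np.uint8)   # diffusion stage: XOR keystream
    return shuffled ^ stream

def decrypt(cipher, key=0.3456789):
    xs = logistic_sequence(key, cipher.size)
    perm = np.argsort(xs)                  # permutation is re-derived from key
    shuffled = cipher ^ (xs * 256).astype(np.uint8)
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled                  # invert the shuffle
    return flat
```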
JPEG XS-based frame buffer compression inside HEVC for power-aware video compression
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël.; Pellegrin, Pascal; Macq, Benoit
2017-09-01
With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms that further reduce the FB's bandwidth while inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (the JPEG XS Test Model). In this paper, the architecture of our HEVC encoder with JPEG XS-based frame buffer compression is described, and its performance is compared to the HM encoder. Compared to previous works, our prototype provides a significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.
First Light with a 67-Million-Pixel WFI Camera
NASA Astrophysics Data System (ADS)
1999-01-01
The newest astronomical instrument at the La Silla observatory is a super-camera with no less than sixty-seven million image elements. It represents the outcome of a joint project between the European Southern Observatory (ESO) , the Max-Planck-Institut für Astronomie (MPI-A) in Heidelberg (Germany) and the Osservatorio Astronomico di Capodimonte (OAC) near Naples (Italy), and was installed at the 2.2-m MPG/ESO telescope in December 1998. Following careful adjustment and testing, it has now produced the first spectacular test images. With a field size larger than the Full Moon, the new digital Wide Field Imager is able to obtain detailed views of extended celestial objects to very faint magnitudes. It is the first of a new generation of survey facilities at ESO with which a variety of large-scale searches will soon be made over extended regions of the southern sky. These programmes will lead to the discovery of particularly interesting and unusual (rare) celestial objects that may then be studied with large telescopes like the VLT at Paranal. This will in turn allow astronomers to penetrate deeper and deeper into the many secrets of the Universe. More light + larger fields = more information! The larger a telescope is, the more light - and hence information about the Universe and its constituents - it can collect. This simple truth represents the main reason for building ESO's Very Large Telescope (VLT) at the Paranal Observatory. However, the information-gathering power of astronomical equipment can also be increased by using a larger detector with more image elements (pixels) , thus permitting the simultaneous recording of images of larger sky fields (or more details in the same field). It is for similar reasons that many professional photographers prefer larger-format cameras and/or wide-angle lenses to the more conventional ones. 
The Wide Field Imager at the 2.2-m telescope Because of technological limitations, the sizes of detectors most commonly in use in optical astronomical instruments - the "Charge-Coupled Devices (CCD's)" - are currently restricted to about 4000 x 4000 pixels. For the time being, the only possible way towards even larger detector areas is by assembling mosaics of CCD's. ESO, MPI-A and OAC have therefore undertaken a joint project to build a new and large astronomical camera with a mosaic of CCD's. This new Wide Field Imager (WFI) comprises eight CCD's with high sensitivity from the ultraviolet to the infrared spectral domain, each with 2046 x 4098 pixels. Mounted behind an advanced optical system at the Cassegrain focus of the 2.2-m telescope of the Max-Planck-Gesellschaft (MPG) at ESO's La Silla Observatory in Chile, the combined 8184 x 8196 = 67,076,064 pixels cover a square field-of-view with an edge of more than half a degree (over 30 arcmin) [1]. Compared to the viewing field of the human eye, this may still appear small, but in the domain of astronomical instrumentation, it is indeed a large step forward. For comparison, the largest field-of-view with the FORS1 instrument at the VLT is about 7 arcmin. Moreover, the level of detail detectable with the WFI (theoretical image sharpness) exceeds what is possible with the naked eye by a factor of about 10,000. The WFI project was completed in only two years in response to a recommendation to ESO by the "La Silla 2000" Working Group and the Scientific-Technical Committee (STC) to offer this type of instrument to the community. The MPI-A proposed to build such an instrument for the MPG/ESO 2.2-m telescope and a joint project was soon established. A team of astronomers from the three institutions is responsible for the initial work with the WFI at La Silla. A few other cameras of this size are available, e.g. 
at Hawaii, Kitt Peak (USA) and Cerro Tololo (Chile), but this is the first time that a telescope this large has been fully dedicated to wide-field imaging with an 8kx8k CCD. The first WFI images Various exposures were obtained during the early tests with the WFI in order to arrive at the optimum adjustment of the camera at the telescope. We show here two of these that illustrate the great potential of this new facility. Spiral Galaxy NGC 253 ESO PR Photo 02a/99 ESO PR Photo 02a/99 [Preview - JPEG: 800x850 pix - 205k] [High-Res - JPEG: 4000 x 4252 pix - 3.0Mb] ESO PR Photo 02b/99 ESO PR Photo 02b/99 [Preview - JPEG: 800x870 pix - 353k] [High-Res - JPEG: 2200 x 2393 pix - 2.0Mb] Caption to PR Photos 02a/99 and 02b/99 : These photos show a sky field around the Spiral Galaxy NGC 253 (Type Sc) seen nearly edge-on. It is located in the southern constellation Sculptor at a distance of about 8 million light-years. The image is the sum of five 5-min exposures through a blue (B-band) optical filtre. They were slightly offset with respect to each other so that the small gaps between the eight CCD's of the mosaic are no longer visible. This image also shows the faint trails of 2 artificial satellites. In PR Photo 02a/99 , the full WFI field-of-view is reproduced, while the sub-field in PR Photo 02b/99 contains some fainter and smaller background galaxies. Many of the quite numerous and small, slightly fuzzy objects are undoubtedly globular clusters of NGC 253. Technical information: The image processing consisted of de-biassing, flat-fielding, and removal (by interpolation) of some bad columns. The full-width-half-maximum (FWHM) of stellar images is about 1.0 arcsec. PR Photo 02a/99 was rebinned (2x2) to 4kx4k size and sampling 0.48 arcsec/pixel. PR Photo 02b/99 is a subimage of the former, but at the full original sampling of 0.24 arcsec/pixel. It covers about 2kx2k, or about 1/16 of the full field. North is up and East is left. 
The observations were made on December 17, 1998. The Waning Moon ESO PR Photo 02c/99 ESO PR Photo 02c/99 [Preview - JPEG: 800 x 1245 pix - 242k] [High-Res - JPEG: 3000 x 4667 pix - 2.3Mb] ESO PR Photo 02d/99 ESO PR Photo 02d/99 [Preview - JPEG: 800 x 1003 pix - 394k] [High-Res - JPEG: 3000 x 3760 pix - 2.1Mb] ESO PR Photo 02e/99 ESO PR Photo 02e/99 [Preview - JPEG: 800 x 706 pix - 274k] [High-Res - JPEG: 3000 x 2648 pix - 1.5Mb] Caption to PR Photos 02c-e/99 : A series of short exposures through a near-infrared filtre was obtained of the waning Moon at sunrise on January 12 (at about 10 hrs UT), i.e. about 5 days before New Moon (24.3 days "old"). As can be seen in PR Photo 02c/99, the edge of the full field-of-view is about the size of the diameter of the Moon. In addition, two impressive views were extracted from this frame and are here shown at full resolution; 1 pixel is about 470 metres on the surface of the Moon at a distance of just over 400,000 km. PR Photo 02d/99 displays the Mare Humorum area in the south-east quadrant with the crater Gassendi overlapping the northern rim. PR Photo 02e/99 is a view of the plains near the Moon's north-east rim, just eastwards of Sinus Iridum (the large crater in the shadows at the upper right), on the rim of which the crater Bianchini is located. The crater just below the centre is Mairan and the one about halfway between these two and of about the same size is Sharp. Technical information: Several 0.1-sec exposures made through a near-infrared filtre (856 nm; FWHM 14 nm) with small offsets were recombined (to cover the gaps between the individual CCD's); otherwise, the image is raw. PR Photo 02c/99 was rebinned (2x2) to 4kx4k size and sampling 0.48 arcsec/pixel. The right-hand side of the picture was cropped in this reproduction to reduce the file size. 
PR Photos 02d/99 and 02e/99 are subimages of the former, but at the full original sampling of 0.24 arcsec/pixel; they cover about 1000x800 and 900x1050 pixels, or about 1/80 and 1/70 of the full field, respectively. North is up and East is left. The virtues of wide-angle imaging Wide-angle imaging is one of the most fundamental applications of observational astronomy. Only from (multi-band) observations over large areas of the sky can large-scale structures and rare objects be detected and put in a proper statistical perspective with other objects. Some typical examples of future survey work: very distant quasars and galaxies, clusters of galaxies, small bodies orbiting the Sun, brown dwarfs, low-surface brightness galaxies, peculiar stars, objects with emission-line spectra, gravitational lenses, etc. Other important applications include the search for supernovae in distant clusters of galaxies and the optical identification of the rapidly fading gamma-ray bursters which are detected by space observatories, but for which only very crude positional determinations are available. Once "promising objects" have been found and accurately located on the sky by the WFI, the enormous light collecting power of the VLT is then available to study them at much higher spectral and spatial detail and over a much wider range of wavelengths. In particular, the continuation of the ESO Imaging Survey (EIS) depends heavily on use of the WFI and will identify and classify all objects seen in a number of selected sky fields. The resulting database is made available as a special service to the community for dedicated follow-up work with the VLT. The advantage of modern digital detectors Traditionally, wide-field observations were made with Schmidt telescopes which, by means of special optics, are able to image sharply a field with a diameter of 5-15 deg. These telescopes use photographic plates that, however, detect no more than about 3% of all incoming photons. 
In comparison, the photon detecting efficiency of the CCD's in the WFI exceeds 90%. Moreover, these CCD's supply digital data ready for computer analysis, whereas photographic plates must be digitized with a sophisticated scanning engine in a laborious and expensive manner which nevertheless cannot fully extract all the information. The price to be paid, until even larger CCD's become available, is the smaller field. The field, however, will not exceed 1-2 square degrees with the currently planned, new wide-field telescopes. The FIERA CCD controller The entire detector array of the WFI can be read out in only 27 seconds. Since one WFI image contains 0.14 Gbytes of data, this corresponds to the reading of a book at a rate of almost 1000 pages per second! Even for the most powerful PC's presently available, this can be a real challenge. However, much more remarkable is that FIERA , the high-tech CCD controller developed by ESO engineers, sustains this speed without adding noise or artifacts that exceed the extremely faint signal from the night-sky background on a moonless night at a completely dark site such as La Silla. In addition to the eight large CCD's of the mosaic, FIERA simultaneously commands a ninth CCD of the same type in which a small window centered on a bright star is read out continuously, up to 2 times every second. The fast-rate measurement of the instantaneous position of the star enables the telescope control system to track very accurately the apparent motion of the observed field in the sky so that the images remain perfectly sharp, even during long exposures. Future survey work at ESO In terms of bytes, it is expected that the WFI alone will acquire more observational data than all the rest of the La Silla Observatory and the UT1 of the VLT on Paranal together! This impressively illustrates the ever-accelerating pace at which astronomical facilities are developing. 
In the meantime, a Dutch/German/Italian consortium is preparing for the construction of the successor to the WFI camera. The OmegaCam will have no less than 16,000 x 16,000 pixels and a field-of-view four times as large, one square degree. It will be attached to the 2.6-m VLT Survey Telescope (VST) to be installed jointly by OAC and ESO on Paranal at the end of the year 2001. Note: [1]: Some technical details of the new camera: The WFI field-of-view measures 0.54 x 0.54 deg² (32.4 x 32.4 arcmin²) and the image scale is 0.24 arcsec/pixel. An advanced optical system is indispensable to focus correctly a field of this large size - 0.8 degree diameter - on the flat CCD mosaic (12 x 12 cm²). The WFI achromatic corrector consists of 6 lenses of up to 28 cm diameter and is able to concentrate 80% of the light of a point source into the area of one pixel in a flat focal plane. Up to 50 filters will be permanently mounted in the camera. A unique facility is provided by a set of 26 interference filters which cover the entire optical range from 380 - 930 nm and thus allow a rough analysis of the spectra of the typically 100,000 objects that are recorded in one field of view. The CCD's possess a very high sensitivity to ultraviolet light and the WFI is only the second UV-sensitive wide-field imager in service in the world. The camera mechanics were designed and built at the MPI-A, which also provided the filters. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org/ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
Lossless Data Embedding—New Paradigm in Digital Watermarking
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2002-12-01
One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.
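The defining property described above, that the embedding is fully invertible, can be illustrated with a toy scheme in the spirit of (but much simpler than) the methods in the paper: losslessly compress the LSB plane to make room, store the compressed plane plus the payload in that same plane, and restore the image bit-exactly on extraction. All function names are ours; real schemes compress more structured image features, since the LSB plane of a natural image is nearly incompressible.

```python
import zlib
import numpy as np

def embed(img, payload):
    """Toy lossless embedding: pack the payload into the LSB plane while
    keeping a compressed copy of the original plane for exact restoration."""
    lsb = (img & 1).astype(np.uint8)
    packed = zlib.compress(np.packbits(lsb).tobytes(), 9)
    blob = len(packed).to_bytes(4, "big") + packed + payload
    bits = np.unpackbits(np.frombuffer(blob, dtype=np.uint8))
    assert bits.size <= img.size, "compressed plane + payload must fit"
    new_lsb = np.zeros(img.size, dtype=np.uint8)
    new_lsb[:bits.size] = bits
    return (img & np.uint8(254)) | new_lsb.reshape(img.shape)

def extract(stego, payload_len):
    """Recover the payload and restore the original image bit-exactly."""
    bits = (stego & 1).astype(np.uint8).ravel()
    blob = np.packbits(bits).tobytes()
    n = int.from_bytes(blob[:4], "big")
    packed, payload = blob[4:4 + n], blob[4 + n:4 + n + payload_len]
    lsb = np.unpackbits(
        np.frombuffer(zlib.decompress(packed), dtype=np.uint8))[:stego.size]
    original = (stego & np.uint8(254)) | lsb.reshape(stego.shape)
    return payload, original

# Demo on a smooth toy image whose LSB plane compresses well
img = np.full((64, 64), 100, dtype=np.uint8)
stego = embed(img, b"hidden")
payload, restored = extract(stego, payload_len=6)
```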
Color constancy in dermatoscopy with smartphone
NASA Astrophysics Data System (ADS)
Cugmas, Blaž; Pernuš, Franjo; Likar, Boštjan
2017-12-01
The recent spread of cheap dermatoscopes for smartphones can empower patients to acquire images of skin lesions on their own and send them to dermatologists. Since images are acquired by different smartphone cameras under unique illumination conditions, variability in colors is expected. Therefore, mobile dermatoscopic systems should be calibrated in order to ensure color constancy in skin images. In this study, we tested a DermLite DL1 Basic dermatoscope attached to a Samsung Galaxy S4 smartphone. Under controlled conditions, JPEG images of standard color patches were acquired, and a model mapping the unknown device-dependent RGB color space to the device-independent Lab color space was built. Results showed that the median and best color errors were 7.77 and 3.94, respectively. These values are in the range of human eye detection capability (color error ≈ 4) and of video and printing industry standards (where a color error between 5 and 6 is expected). It can be concluded that a calibrated smartphone dermatoscope can provide sufficient color constancy and can serve as an interesting opportunity to bring dermatologists closer to the patients.
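The calibration model is not spelled out in the abstract; one common choice is an affine least-squares map from device RGB to Lab, scored by the Euclidean colour error (ΔE). A minimal sketch on synthetic patch data (all numbers and names here are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.uniform(0, 1, size=(24, 3))          # 24 device-RGB colour patches
true_M = np.array([[95.0, 10.0, 5.0],
                   [-40.0, 60.0, -10.0],
                   [20.0, -30.0, 50.0]])       # made-up ground-truth mapping
lab = rgb @ true_M + rng.normal(0, 0.5, (24, 3))  # reference Lab + noise

# Augment RGB with a constant column so the fitted map includes an offset
X = np.hstack([rgb, np.ones((24, 1))])
coef, *_ = np.linalg.lstsq(X, lab, rcond=None)    # least-squares calibration
pred = X @ coef

delta_e = np.linalg.norm(pred - lab, axis=1)      # per-patch colour error
median_err = float(np.median(delta_e))
```

In practice the patches would come from a physical colour chart imaged through the dermatoscope, and the reference Lab values from the chart's specification.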
Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K
2017-01-01
The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output.
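The four perturbations studied can be mimicked on an image array; a rough sketch (our construction, not the study's pipeline, which used Visiopharm on whole-slide images; coarse intensity quantisation here merely stands in for JPEG2000 compression loss):

```python
import numpy as np

def adjust_brightness(img, delta):
    """Shift all intensities by delta, clipping to the 8-bit range."""
    return np.clip(img + delta, 0, 255)

def adjust_contrast(img, factor):
    """Stretch or compress intensities about the mid-grey level 128."""
    return np.clip((img - 128.0) * factor + 128.0, 0, 255)

def quantise(img, step):
    """Coarse quantisation -- a crude proxy for lossy compression."""
    return np.round(img / step) * step

def box_blur(img, k=3):
    """Simple k x k mean filter (edge-padded) to simulate defocus."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

img = np.linspace(0.0, 255.0, 64).reshape(8, 8)
bright = adjust_brightness(img, 20)
low_contrast = adjust_contrast(img, 0.5)
blurred = box_blur(img, 3)
```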
The Orion Nebula: The Jewel in the Sword
NASA Astrophysics Data System (ADS)
2001-01-01
Orion the Hunter is perhaps the best known constellation in the sky, well placed in the evening at this time of the year for observers in both the northern and southern hemispheres, and instantly recognisable. And for astronomers, Orion is surely one of the most important constellations, as it contains one of the nearest and most active stellar nurseries in the Milky Way, the galaxy in which we live. Here tens of thousands of new stars have formed within the past ten million years or so - a very short span of time in astronomical terms. For comparison: our own Sun is now 4,600 million years old and has not yet reached the halfway point of its life. Reduced to a human time-scale, star formation in Orion would have been going on for just one month as compared to the Sun's 40 years. Just below Orion's belt, the hilt of his sword holds a great jewel in the sky, the beautiful Orion Nebula . Bright enough to be seen with the naked eye, a small telescope or even binoculars shows the nebula to be a complex of gas and dust a few tens of light-years wide, illuminated by several massive and hot stars at its core, the famous Trapezium stars . However, the heart of this nebula also conceals a secret from the casual observer. There are in fact about one thousand very young stars, about one million years old, within the so-called Trapezium Cluster , crowded into a region smaller than the distance between the Sun and its nearest neighbouring stars. The cluster is very hard to observe in visible light, but is clearly seen in the above spectacular image of this area ( ESO PR 03a/01 ), obtained in December 1999 by Mark McCaughrean (Astrophysical Institute Potsdam, Germany) and his collaborators [1] with the infrared multi-mode ISAAC instrument on the ESO Very Large Telescope (VLT) at Paranal (Chile).
Many details are seen in the new ISAAC image ESO PR Photo 03b/01 [Preview - JPEG: 400 x 589 pix - 62k] [Normal - JPEG: 800 x 1178 pix - 648k] [Hires - JPEG: 1957 x 2881 pix - 2.7M] ESO PR Photo 03c/01 [Preview - JPEG: 400 x 452 pix - 57k] [Normal - JPEG: 800 x 904 pix - 488k] [Hires - JPEG: 2300 x 2600 pix - 3.3M] Caption : PR Photo 03b/01 and PR Photo 03c/01 show smaller, particularly interesting areas of PR Photo 03a/01 . Photo 03b/01 shows the traces of a massive outflow of gas from a very young object embedded in the dense molecular cloud behind the Orion Nebula. Shards of gas from the explosion create shocks and leave bow-waves as they move at speeds of up to 200 km/sec from the source. Photo 03c/01 shows the delicate tracery created at the so-called Bright Bar , as the intense UV-light and strong winds from the hot Trapezium stars eat their way into the surrounding molecular cloud. Also visible are a number of very young red objects partly hidden in the cloud, waiting to be revealed as new members of the Trapezium Cluster . Technical information about these photos is available below. Indeed, at visible wavelengths, the dense cluster of stars at the centre is drowned out by the light from the nebula and obscured by remnants of the dust in the gas from which they were formed. However, at longer wavelengths, these obscuring effects are reduced, and the cluster is revealed. In the past couple of years, several of the world's premier ground- and space-based telescopes have made new detailed infrared studies of the Orion Nebula and the Trapezium Cluster , but the VLT image shown here is the "deepest" wide-field image obtained so far. The large collecting area of the VLT and the excellent seeing of the Paranal site combined to yield this beautiful image, packed full of striking details.
Powerful explosions and winds from the most massive stars in the region are evident, as well as the contours of gas sculpted by these stars, and more finely focused jets of gas flowing from the smaller stars. Sharper images from the VLT ESO PR Photo 03d/01 [Preview - JPEG: 400 x 490 pix - 28k] [Normal - JPEG: 800 x 980 pix - 192k] [Hi-Res - JPEG: 2273 x 2784 pix - 976k] Caption : PR Photo 03d/01 shows a small section of the observational data (in one infrared spectral band only, here reproduced in B/W) on which PR Photo 03a/01 is based. The field is centred on one of the famous Orion silhouette disks (Orion 114-426) (it is located approximately halfway between the centre and the right edge of PR Photo 03c/01 ). The dusty disk itself is seen edge-on as a dark streak against the background emission of the Orion Nebula, while the bright fuzzy patches on either side betray the presence of the embedded parent star that illuminates tenuous collections of dust above its north and south poles to create these small reflection nebulae. Recent HST studies suggest that the very young Orion 114-426 disk - that is thirty times bigger than our present-day Solar System - may already be showing signs of forming its own proto-planetary system. Technical information about this photo is available below. It is even possible to see disks of dust and gas surrounding a few of the young stars, as silhouettes in projection against the bright background of the nebula. Many of these disks are very small and usually only seen on images obtained with the Hubble Space Telescope (HST) [2]. However, under the best seeing conditions on Paranal, the sharpness of VLT images at infrared wavelengths approaches that of the HST in this spectral band, revealing some of these disks, as shown in PR Photo 03d/01 . Indeed, the theoretical image sharpness of the 8.2-m VLT is more than three times better than that of the 2.4-m HST.
Thus, the VLT will soon yield images of small regions with even higher resolution by means of the High-Resolution Near-Infrared Camera (CONICA) and the Nasmyth Adaptive Optics System (NAOS) that will compensate the smearing effect introduced by the turbulence in the atmosphere. Later on, extremely sharp images will be obtained when all four VLT telescopes are combined to form the Very Large Telescope Interferometer (VLTI). With these new facilities, astronomers will be able to make very detailed studies - among others, they will be looking for evidence that the dust and gas in these disks might be agglomerating to form planets. Free-floating planets in Orion? Recently, research teams working at other telescopes have claimed to have already seen planets in the Orion Nebula, as very dim objects, apparently floating freely between the brighter stars in the cluster. They calculated that if those objects are of the same age as the other stars, if they are located in the cluster, and if present theoretical predictions of the brightness of young stars and planets are correct, then they should have masses somewhere between 5 and 15 times that of planet Jupiter. Astronomer Mark McCaughrean is rather sceptical about this: " Calling these objects "planets" of course sounds exciting, but that interpretation is based on a number of assumptions. To me it seems equally probable that they are somewhat older, higher-mass objects of the "brown dwarf" type from a previous generation of star formation in Orion, which just happen to lie near the younger Trapezium Cluster today. Even if these objects were confirmed to have very low masses, many astronomers would disagree with them being called planets, since the common idea of a planet is that it should be in orbit around a star ". 
He explains: " While planets form in circumstellar disks, current thinking is that these Orion Nebula objects probably formed in the same way as do stars and brown dwarfs, and so perhaps we'd be better off talking about them just as low-mass brown dwarfs " and also notes that " similar claims of "free-floating planets" found in another cluster associated with the star Sigma Orionis have also been met with some scepticism ". Here, as in other branches of science, claim, counter-claim, scepticism and amicable controversy are typical elements of the scientific search for the truth. Thus the goal must now be to look at these objects in much more detail, and to try to determine their real properties and formation history. Comprehensive VLT study of Orion well underway This is indeed one of the main aims of the present major VLT study, of which the image shown here is decidedly a good start and a great "appetizer"! In fact, even the present photo - that is based on quite short exposures with a total of only 13.5 min at each image point (4.5 min in each of the three bands) - is already of sufficient quality to raise questions about some of the "very low-mass objects". McCaughrean acknowledges that " some of these very faint objects were right at the limit of earlier studies and hence the determination of their brightnesses was less precise. The new, more accurate VLT data show several of them to be intrinsically brighter than previously thought and thus more massive; also some other objects seem not to be there at all ". Clearly, the answer is to look even deeper in order to get more accurate data and to discover more of these objects. More infrared images were obtained for the present programme in December 2000 by the VLT team. They will now be combined with the earlier data shown here to create a very deep survey of the central area of the Orion Nebula. 
One of the great strengths of the VLT is its comprehensive instrumentation programme, and the team intends to carry out a detailed spectral analysis of the very faintest objects in the cluster, using the VLT VIMOS and NIRMOS multiobject spectrometers, as these become available. Only then, by analysing all these data, will it become possible to determine the masses, ages, and motions of the very faintest members of the Trapezium Cluster , and to provide a solid answer to the tantalising question of their origin. The beautiful infrared image shown here may just be a first "finding chart" made at the beginning of a long-term research project, but it already carries plenty of new astrophysical information. For the astronomers, images like these and the follow-up studies will help to solve some of the fascinating and perplexing questions about the birth and early lives of stars and their planetary systems. Note [1] The new VLT data covering the Orion Nebula and Trapezium Cluster were obtained as part of a long-term project by Mark McCaughrean (Principal Investigator, Astrophysical Institute Potsdam [AIP], Germany), João Alves (ESO, Garching, Germany), Hans Zinnecker (AIP) and Francesco Palla (Arcetri Observatory, Florence, Italy). The data also form part of the collaborative research being undertaken by the European Commission-sponsored Research Training Network on "The Formation and Evolution of Young Star Clusters" (RTN1-1999-00436), led by the Astrophysical Institute Potsdam, and including the Arcetri Observatory in Florence (Italy), the University of Cambridge (UK), the University of Cardiff (UK), the University of Grenoble (France), the University of Lisbon (Portugal) and the CEA Saclay (France). 
[2] To compare the present VLT infrared image with the more familiar view of the Orion Nebula in optical light, the ST-ECF has prepared an image covering a similar field from data taken with the NASA/ESA Hubble Space Telescope WFPC2 camera and extracted and processed by Jeremy Walsh from the ESO/ST-ECF archive. This 4-colour composite emphasises the light from the gaseous nebula rather than from the stars, and there is a dramatic difference from the infrared view, which sees much deeper into the region. The HST image is available at http://www.stecf.org/epo/support/orion/. Technical information about the photos PR Photo 03a/01 of the Orion Nebula and the Trapezium Cluster was made using the near-infrared camera ISAAC on the ESO 8.2-m VLT ANTU telescope on December 20 - 21, 1999. The full field measures approx. 7 x 7 arcmin, covering roughly 3 x 3 light-years (0.9 x 0.9 pc) at the distance of the nebula (about 1500 light-years, or 450 pc). This required a 9-position mosaic (3 x 3 grid) of ISAAC pointings; at each pointing, a series of images were taken in each of the near-infrared Js- (centred at 1.24 µm wavelength), H- (1.65 µm), and Ks- (2.16 µm) bands. North is up and East left. The total integration time for each pixel in the mosaic was 4.5 min in each band. The seeing FWHM (full width at half maximum) was excellent, between 0.35 and 0.50 arcsec throughout. Point sources are detected at the 3-sigma level (central pixel above background noise) of 20.5, 19.2, and 18.8 magnitude in the Js-, H-, and Ks-bands, respectively, mainly limited by the bright background emission of the nebula. After removal of instrumental signatures and the bright infrared sky background, all frames in a given band were carefully aligned and adjusted to form a seamless mosaic. The three monochromatic mosaics were then unsharp-masked and scaled logarithmically to reduce the enormous dynamic range and enhance the faint features of the outer nebula.
The mosaics were then combined to create this colour-coded image, with the Js-band being rendered as blue, the H-band as green, and the Ks-band as red. A total of 81 individual ISAAC images were merged to form this mosaic. PR Photos 03b-c/01 show smaller sections of the large image; the areas are 2.6 x 3.2 and 4.2 x 3.8 arcmin (1.1 x 1.4 and 1.8 x 1.6 light-years), respectively. PR Photo 03d/01 is based on Js-band data only, to ensure good visibility (maximum contrast) of the Orion 114-426 silhouette disk against the background nebula. The three highest spatial resolution images covering this region were accurately aligned to form a mosaic with a resolution of 0.4 arcsec FWHM (180 Astronomical Units [AU]) in the vicinity of the disk. A 29 x 29 arcsec (0.2 x 0.2 light-year) section of this smaller mosaic was cut out and the square root of the intensity taken to enhance the disk. The disk is roughly 2 arcsec or 900 AU in diameter. North is up, East left.
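The two enhancement steps described for the mosaics, unsharp masking to suppress the smooth nebular background and logarithmic scaling to compress the dynamic range, can be sketched as follows (a toy version with a simple box blur, not the actual ESO processing):

```python
import numpy as np

def box_blur(img, k):
    """k x k mean filter (edge-padded) -- a crude smoothing kernel."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, k=9, amount=0.8):
    """Subtract a blurred copy to suppress smooth, large-scale background."""
    return img - amount * box_blur(img, k)

def log_stretch(img):
    """Logarithmic scaling to compress a very large dynamic range."""
    shifted = img - img.min() + 1.0   # make strictly positive before the log
    return np.log(shifted)

# Toy frame with a huge brightness gradient, mimicking nebular background
frame = np.outer(np.linspace(1.0, 1000.0, 16), np.ones(16))
flat = log_stretch(unsharp_mask(frame))
```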
Applying image quality in cell phone cameras: lens distortion
NASA Astrophysics Data System (ADS)
Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje
2009-01-01
This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes, and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and proceeding to their quantification in JNDs of quality, as required by the multivariate formalism; both objective and subjective evaluations were therefore used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/model cannot be used in this case.
New Image of Comet Halley in the Cold
NASA Astrophysics Data System (ADS)
2003-09-01
VLT Observes Famous Traveller at Record Distance Summary Seventeen years after the last passage of Comet Halley , the ESO Very Large Telescope at Paranal (Chile) has captured a unique image of this famous object as it cruises through the outer solar system. It is completely inactive in this cold environment. No other comet has ever been observed this far - 4200 million km from the Sun - or this faint - nearly 1000 million times fainter than what can be perceived with the unaided eye. This observation is a byproduct of a dedicated search [1] for small Trans-Neptunian Objects, a population of icy bodies of which more than 600 have been found during the past decade. PR Photo 27a/03 : VLT image (cleaned) of Comet Halley PR Photo 27b/03 : Sky field in which Comet Halley was observed PR Photo 27c/03 : Combined VLT image with star trails and Comet Halley The Halley image ESO PR Photo 27a/03 [Preview - JPEG: 546 x 400 pix - 207k] [Normal - JPEG: 1092 x 800 pix - 614k] [FullRes - JPEG: 1502 x 1100 pix - 1.1M] Caption : PR Photo 27a/03 shows the faint, star-like image of Comet Halley (centre), observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory on March 6-8, 2003. 81 individual exposures from three of the four 8.2-m VLT telescopes with a total exposure time of about 9 hours were combined to show the magnitude 28.2 object. At this time, Comet Halley was about 4200 million km from the Sun (28.06 AU) and 4080 million km (27.26 AU) from the Earth. All images of stars and galaxies in the field were removed during the extensive image processing needed to produce this unique image. Due to the remaining, unavoidable "background noise", it is best to view the comet image from some distance. The field measures 60 x 40 arcsec²; North is up and East is left. Remember Comet Halley - the famous "haired star" that has been observed with great regularity - about once every 76 years - during more than two millennia?
Which was visited by an international spacecraft armada when it last passed through the inner solar system in 1986? And which put on a fine display in the sky at that time? Now, 17 years after that passage, this cosmic traveller has again been observed at the European Southern Observatory. Moving outward along its elongated orbit into the deep-freeze outer regions of the solar system, it is now almost as far away as Neptune, the most distant giant planet in our system. At 4,200 million km from the Sun, Comet Halley has now completed four-fifths of its travel towards the most distant point of this orbit. As the motion is getting ever slower, it will reach that turning point in December 2023, after which it begins its long return towards the next passage through the inner solar system in 2062. The new image of Halley was taken with the Very Large Telescope (VLT) at Paranal (Chile); a "cleaned" version is shown in PR Photo 27a/03 . It was obtained as a byproduct of an observing program aimed at studying the population of icy bodies at the rim of the solar system. The image shows the raven-black, 10-km cometary nucleus of ice and dust as an unresolved faint point of light, without any signs of activity. A cold and inactive "dirty snowball" The brightness of the comet was measured as visual magnitude V = 28.2, or nearly 1000 million times fainter than the faintest objects that can be perceived in a dark sky with the unaided eye. The pitch black nucleus of Halley reflects about 4% of the sunlight; it is a very "dirty" snowball indeed. We know from the images obtained by the ESA Giotto spacecraft in 1986 that it is avocado-shaped and on average measures about 10 km across. The VLT observation is therefore equivalent to seeing a 5-cm piece of coal at a distance of 20,500 km (about the distance between the Earth's poles) and to do so in the evening twilight.
This is because at the large distance of Comet Halley, the infalling sunlight is 800 times fainter than here on Earth. The measured brightness of the cometary image perfectly matches that expected for the nucleus alone, taking into account the distance, the solar illumination and the reflectivity of the surface. This shows that all cometary activity has now ceased. The nucleus is now an inert ball of ice and dust, and is likely to remain so until it again returns to the solar neighbourhood, more than half a century from now. A record observation At 28.06 AU heliocentric distance (1 AU = 149,600,000 km - the mean distance between the Earth and the Sun), this is by far the most distant observation ever made of a comet [2]. It is also the faintest comet ever detected (by a factor of about 5); the previous record, magnitude 26.5, was co-held by comet Halley at 18.8 AU (with the ESO New Technology Telescope in 1994) and Comet Sanguin at 8.5 AU (with the Keck II telescope in 1997). Interestingly, when Comet Halley reaches its largest distance from the Sun in December 2023, about 35 AU, it will only be 2.5 times fainter than it is now. The comet would still have been detected within the present exposure time. This means that with the VLT, for the first time in the long history of this comet, the astronomers now possess the means to observe it at any point in its 76-year orbit! A census of faint Transneptunian Objects The image of Halley was obtained by combining a series of exposures obtained simultaneously with three of the 8.2-m telescopes (ANTU, MELIPAL and YEPUN) during 3 consecutive nights with the main goal to count the number of small icy bodies orbiting the Sun beyond Neptune, known as Transneptunian Objects (TNOs). Since the discovery of the first TNO in 1992, more than 600 have been found, most of these measuring several hundred km across. 
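Both the "800 times fainter" sunlight and the "only 2.5 times fainter" aphelion prediction follow from the inverse-square law: the solar flux at the comet falls off as 1/r², and the reflected brightness seen from Earth scales as 1/(r²Δ²), with r the Sun-comet and Δ the Earth-comet distance. A quick check of the quoted numbers (the aphelion Earth distance of 34 AU is our assumption, taken as roughly r - 1):

```python
# Sunlight at the comet, relative to Earth, falls off as 1/r**2 (r in AU)
r_now, delta_now = 28.06, 27.26   # distances quoted in the text, in AU
dimming = r_now ** 2              # ~787, i.e. "about 800 times fainter"

# Reflected brightness seen from Earth scales as 1/(r**2 * delta**2)
r_aph, delta_aph = 35.0, 34.0     # aphelion; Earth distance is our assumption
ratio = (r_aph * delta_aph) ** 2 / (r_now * delta_now) ** 2  # ~2.4, "about 2.5"
```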
The VLT observations aim at a census of smaller TNOs - the incorporation of the sky field with Comet Halley allows verification of the associated, extensive data processing. Similar TNO-surveys have been performed before, but this is the first time that several very large telescopes are used simultaneously in order to observe extremely faint, hitherto inaccessible objects. The VLT observations will provide very useful information about the frequency of (smaller) TNOs of different sizes and thereby, indirectly, about the rate of collisions they have suffered since their formation. This study will also cast more light on the mystery of the apparent "emptiness" of the very distant solar system. Why are so few objects found beyond 45 AU? It is not known whether this is because there are no objects out there or if they are simply too small or too dark, or both, to have been detected so far. How to extract a very faint comet image ESO PR Photo 27b/03 [Preview - JPEG: 546 x 400 pix - 211k] [Normal - JPEG: 1092 x 800 pix - 649k] [FullRes - JPEG: 1502 x 1100 pix - 1.1M] ESO PR Photo 27c/03 [Preview - JPEG: 530 x 400 pix - 184k] [Normal - JPEG: 1059 x 800 pix - 573k] [FullRes - JPEG: 1515 x 1145 pix - 983k] Caption : PR Photo 27b/03 shows the sky field in which Comet Halley was observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory on March 6-8, 2003. 81 individual exposures with a total exposure time of 32284 sec (almost 9 hours) from three of the four 8.2-m telescopes were cleaned and combined to produce this composite photo, displaying numerous faint stars and galaxies in the field. The predicted motion of Comet Halley during the three nights is indicated by short red lines. The long straight lines at the top and to the right were caused by artificial satellites in orbit around the Earth that passed through the field during the exposure. The field measures 300 x 180 arcsec².
PR Photo 27c/03 was produced by adding the same frames, this time while shifting their positions according to the motion of the comet. The faint, star-like image of Comet Halley is now visible (in circle, at centre); all other objects (stars, galaxies) in the field are "trailed". A satellite trail is visible at the very top. The field measures 60 x 40 arcsec²; North is up and East is left in both photos. The combination of the images from three 8.2-m telescopes obtained during three consecutive nights is not straightforward. The individual characteristics of the imaging instruments (FORS1 on ANTU, VIMOS on MELIPAL and FORS2 on YEPUN) must be taken into account and corrected. Moreover, the motion of the very faint moving objects has to be compensated for, even though they are too faint to be seen on individual exposures; they only reveal themselves when several (many!) frames are combined during the final steps of the process. It is for this reason that the presence of a known, faint object like Comet Halley in the field-of-view provides a powerful control of the data processing. If Halley is visible at the end, it has been done properly. The extensive data processing is now under way and the intensive search for new Transneptunian objects has started. The field with Comet Halley was observed with the giant telescopes during each of three consecutive nights, yielding 81 individual exposures with a total exposure time of almost 9 hours. The faint comet is completely invisible on the individual images. On PR Photo 27b/03 , these frames have been added directly, showing very faint stars and galaxies. This photo, too, does not show the moving comet; but by shifting the frames before they are added, in such a way that the comet remains fixed, a faint image does emerge among the stellar trails, cf. PR Photo 27c/03 . A better, but much more cumbersome method is to "subtract" the images of all stars and galaxies from the individual exposures, before they are added.
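The shift-and-add procedure described above, shifting each frame by the object's predicted motion before summing so that the moving source stays fixed while stars trail, can be sketched on synthetic data (a toy reconstruction with made-up numbers, not the actual VLT pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, size = 30, 32
frames = rng.normal(0.0, 1.0, (n_frames, size, size))  # pure sky noise

# Inject a source near the single-frame noise level, moving 1 pixel/frame:
# invisible in any one frame, just like the comet on individual exposures
for i in range(n_frames):
    frames[i, 16, 1 + i] += 1.5

# Direct sum: the moving source is smeared along its track (cf. Photo 27b/03)
direct = frames.sum(axis=0)

# Shift-and-add at the predicted rate: the signal accumulates in one pixel
# while static stars would trail (cf. Photo 27c/03). np.roll is a crude
# stand-in for proper sub-pixel registration.
aligned = np.zeros((size, size))
for i, f in enumerate(frames):
    aligned += np.roll(f, -i, axis=1)

peak = float(aligned[16, 1])   # recovered source, ~n_frames * signal
```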
PR Photo 27a/03 has been produced in this way and shows the image of Comet Halley more clearly. In total, about 20,000 photons were detected from the comet, i.e., about one photon per 8.2-m telescope every 1.6 seconds. However, during the same time, the telescopes collected about one thousand times more photons from molecular emission in the Earth's atmosphere within the sky area covered by the comet's image. The presence of this considerable "noise" calls for very careful image processing in order to detect the faint comet signal. The identity of the comet is beyond doubt: the image is faintly visible on composite photos obtained during a single night, demonstrating that the direction and rate of motion of the detected object perfectly match those predicted for Comet Halley from its well-known orbit. Moreover, the image is located within 1 arcsec of the predicted position in the sky. Outlook: After its passage in 1910, Comet Halley was again seen in 1982, when David Jewitt first observed its faint image with the 5-m Palomar telescope at a time when it was 11 AU from the Sun, a little further than planet Saturn. It was observed from La Silla two months later. As the comet approached, the ice in the nucleus began to evaporate (sublimate), and the comet soon became surrounded by a cloud of dust and gas (the "coma"). It developed the tail that is typical of comets and was extensively observed, including from several spacecraft that passed close to its nucleus in early 1986. Observations have since been made of Comet Halley as it moves away from the Sun, documenting a steady decrease of activity. When it reached the distance of Saturn, the tail and coma had disappeared completely, leaving only the 5 x 5 x 15 km avocado-shaped "dirty snowball" nucleus. However, Halley was still good for a major surprise: in 1991, a gigantic explosion occurred, surrounding it with an expanding, extensive cloud of dust for several months.
It is not known whether this event was caused by a collision with an unknown piece of rock or by internal processes (a last "sigh" on the way out). Until now, the most recent observation of Comet Halley was made in 1994 with the New Technology Telescope (NTT) at La Silla, at that time the most powerful ESO telescope. It showed the comet to be completely inactive. Nine years later, the present VLT observations show the same. It is unlikely that any activity will be seen until this famous object again approaches the Sun, more than 50 years from now.
Recovering DC coefficients in block-based DCT.
Uehara, Takeyuki; Safavi-Naini, Reihaneh; Ogunbona, Philip
2006-11-01
It is a common approach for JPEG and MPEG encryption systems to provide higher protection for dc coefficients and less protection for ac coefficients. Some authors have employed a cryptographic encryption algorithm for the dc coefficients and left the ac coefficients to techniques based on random permutation lists which are known to be weak against known-plaintext and chosen-ciphertext attacks. In this paper we show that in block-based DCT, it is possible to recover dc coefficients from ac coefficients with reasonable image quality and show the insecurity of image encryption methods which rely on the encryption of dc values using a cryptoalgorithm. The method proposed in this paper combines dc recovery from ac coefficients and the fact that ac coefficients can be recovered using a chosen ciphertext attack. We demonstrate that a method proposed by Tang to encrypt and decrypt MPEG video can be completely broken.
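The idea of recovering dc values from ac coefficients can be illustrated with a minimal one-dimensional sketch (my own toy reconstruction, not the paper's algorithm, which works on 2-D block DCTs): under a smoothness assumption, samples on either side of a block boundary are nearly equal, so the relative dc offsets of neighbouring blocks can be chained together from the ac content alone, up to one global offset.

```python
# Minimal 1-D sketch of DC recovery from AC content. Under a smoothness
# assumption, the last sample of block i roughly equals the first sample
# of block i+1, which fixes the DC difference between the two blocks.

def split_blocks(signal, size):
    return [signal[i:i + size] for i in range(0, len(signal), size)]

def remove_dc(block):
    dc = sum(block) / len(block)
    return dc, [x - dc for x in block]

def recover_dc(ac_blocks, dc0):
    """Chain boundary continuity: block_i[-1] ~= block_{i+1}[0]."""
    dcs = [dc0]
    for prev, cur in zip(ac_blocks, ac_blocks[1:]):
        dcs.append(dcs[-1] + prev[-1] - cur[0])
    return dcs

# Smooth test signal whose boundary samples repeat, blocks of 3 samples.
signal = [1, 2, 3, 3, 4, 5, 5, 6, 7]
blocks = split_blocks(signal, 3)
true_dcs, acs = zip(*(remove_dc(b) for b in blocks))
est = recover_dc(list(acs), true_dcs[0])  # global offset assumed known
print(est)  # → [2.0, 4.0, 6.0]
```

On natural images the assumption holds only approximately, which is why the paper reports recovery "with reasonable image quality" rather than exact reconstruction.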
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1997-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit instead digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays: (1) to verify the advantage of DCTune over standard JPEG; (2) to verify the quality control feature of DCTune; and (3) to discover regularities in the optimized matrices of a set of images. Additional information is contained in the original extended abstract.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1993-01-01
The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
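The quantization step such a model is designed to guide works as in JPEG: transform, divide each coefficient by its quantization-matrix entry, round, and multiply back; coarser steps discard less-visible detail. A minimal 1-D sketch with an orthonormal DCT (the 16, 11, 10, ... values are the first row of the standard JPEG luminance table; the input row is illustrative):

```python
import math

def dct(x):
    """Orthonormal 1-D DCT-II."""
    N = len(x)
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                           for n in range(N)))
    return out

def idct(X):
    """Inverse of dct() (DCT-III with the same normalisation)."""
    N = len(X)
    return [sum((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
                * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for k in range(N))
            for n in range(N)]

def quantize(X, Q):
    """JPEG-style quantization: divide, round, multiply back."""
    return [round(Xk / q) * q for Xk, q in zip(X, Q)]

row = [52, 55, 61, 66, 70, 61, 64, 73]        # one row of pixel values
Q = [16, 11, 10, 16, 24, 40, 51, 61]          # JPEG luminance table, row 1
rec = idct(quantize(dct(row), Q))             # decompressed approximation
```

A perceptual model of the kind described here would choose each Q entry so that the resulting coefficient error sits just at the visibility threshold for the given display conditions.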
NASA Astrophysics Data System (ADS)
Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.
2016-10-01
Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features from the X-ray radiographs.
Calderon, Karynna; Dadisman, S.V.; Kindinger, J.L.; Flocks, J.G.; Wiese, D.S.; Kulp, Mark; Penland, Shea; Britsch, L.D.; Brooks, G.R.
2003-01-01
This archive consists of two-dimensional marine seismic reflection profile data collected in the Barataria Basin of southern Louisiana. These data were acquired in May, June, and July of 2000 aboard the R/V G.K. Gilbert. Included here are data in a variety of formats including binary, American Standard Code for Information Interchange (ASCII), Hyper-Text Markup Language (HTML), shapefiles, and Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) images. Binary data are in Society of Exploration Geophysicists (SEG) SEG-Y format and may be downloaded for further processing or display. Reference maps and GIF images of the profiles may be viewed with a web browser. The Geographic Information Systems (GIS) information provided here is compatible with Environmental Systems Research Institute (ESRI) GIS software.
Concurrent access to a virtual microscope using a web service oriented architecture
NASA Astrophysics Data System (ADS)
Corredor, Germán.; Iregui, Marcela; Arias, Viviana; Romero, Eduardo
2013-11-01
Virtual microscopy (VM) facilitates visualization and deployment of histopathological virtual slides (VS), a useful tool for education, research and diagnosis. In recent years, it has become popular, yet its use is still limited, mainly because of the very large sizes of VS, typically of the order of gigabytes. Such a volume of data requires efficacious and efficient strategies to access the VS content. In an educational or research scenario, several users may need to access and interact with VS at the same time, so, given the large data size, a very expensive and powerful infrastructure is usually required. This article introduces a novel JPEG2000-based service-oriented architecture for streaming and visualizing very large images under scalable strategies, which in addition does not require very specialized infrastructure. Results suggest that the proposed architecture enables transmission and simultaneous visualization of large images while using resources efficiently and offering users acceptable response times.
Human visual system-based color image steganography using the contourlet transform
NASA Astrophysics Data System (ADS)
Abdul, W.; Carré, P.; Gaborit, P.
2010-01-01
We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used because it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system (HVS), and it is very important for steganographic schemes to be undetectable by the HVS. The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The imperceptibility of the steganographic scheme with respect to the color perception of the HVS is evaluated using standard methods such as the structural similarity (SSIM) index and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in the interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
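The block-matching step described in the second paper can be sketched as an exhaustive search that minimises the sum of absolute differences (SAD) over a small displacement window (a serial toy version; the paper's contribution is a parallel architecture for this search, and all frame sizes and values below are illustrative):

```python
# Toy exhaustive block matching: find the displacement of a block between
# two frames (lists of rows) that minimises the sum of absolute differences.

def get_block(frame, r, c, size):
    return [frame[r + i][c + j] for i in range(size) for j in range(size)]

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def block_match(prev, cur, r, c, size, radius):
    """Exhaustive search for the (dr, dc) displacement minimising SAD."""
    target = get_block(cur, r, c, size)
    best = None
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr <= len(prev) - size and 0 <= cc <= len(prev[0]) - size:
                cost = sad(get_block(prev, rr, cc, size), target)
                if best is None or cost < best[0]:
                    best = (cost, dr, dc)
    return best  # (min SAD, row shift, col shift)

# A bright 2x2 patch moves one pixel down and right between frames.
prev = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for r, c in [(2, 3), (2, 4), (3, 3), (3, 4)]:
    prev[r][c] = 9
for r, c in [(3, 4), (3, 5), (4, 4), (4, 5)]:
    cur[r][c] = 9
best = block_match(prev, cur, 3, 4, size=2, radius=2)
print(best)  # → (0, -1, -1): the block came from one pixel up and left
```

Each candidate displacement is independent of the others, which is what makes the search amenable to the simple parallel architecture the paper describes.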
Fitterman, David V.; Deszcz-Pan, Maria
2002-01-01
This report describes helicopter electromagnetic (HEM) data that were collected over a portion of Everglades National Park and surrounding areas in south Florida. The survey was flown 9-14 December 1994. The original data set, processed by the contractor, Dighem, is provided as an ASCII, xyz flight-line file. Apparent resistivity grids generated from the original data set and JPEG images of these grids are also provided. The data have been corrected by the U.S. Geological Survey to remove the effects of calibration errors and bird-height uncertainty. The corrected data set is included in this report as flight-line data only.
A deep learning method for early screening of lung cancer
NASA Astrophysics Data System (ADS)
Zhang, Kunpeng; Jiang, Huiqin; Ma, Ling; Gao, Jianbo; Yang, Xiaopeng
2018-04-01
Lung cancer is the leading cause of cancer-related deaths among men. In this paper, we propose a pulmonary nodule detection method for early screening of lung cancer based on an improved AlexNet model. First, in order to maintain the same image quality as the existing B/S-architecture PACS system, we convert the original CT image into a JPEG image by parsing the DICOM file. Second, in view of the large size and complex background of chest CT images, we design the convolutional neural network on the basis of the AlexNet model and a sparse convolution structure. Finally, we train our models on the DIGITS software provided by NVIDIA. The main contribution of this paper is to apply the convolutional neural network to the early screening of lung cancer and to improve the screening accuracy by combining the AlexNet model with the sparse convolution structure. We carried out a series of experiments on chest CT images using the proposed method; the resulting sensitivity and specificity indicate that the method presented in this paper can effectively improve the accuracy of early screening of lung cancer, and it has certain clinical significance as well.
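The DICOM-to-JPEG conversion mentioned above has to map CT pixel values (Hounsfield units, typically stored at 12-bit depth) to 8-bit grey levels; a standard window/level transform does this. A minimal sketch of that mapping (the window settings are typical illustrative values, not taken from the paper):

```python
def window_to_8bit(hu_values, center, width):
    """Clip Hounsfield units to [center - width/2, center + width/2] and
    rescale linearly to 0..255, as done when exporting CT slices to JPEG."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = []
    for v in hu_values:
        v = min(max(v, lo), hi)                       # clip to the window
        out.append(int(round(255.0 * (v - lo) / (hi - lo))))
    return out

# A typical lung window (centre -600 HU, width 1500 HU) -- illustrative.
pixels = [-1000, -600, 150, 400]                      # air .. soft tissue
print(window_to_8bit(pixels, center=-600, width=1500))  # → [60, 128, 255, 255]
```

Values outside the window saturate to 0 or 255, which is the quality-relevant choice when reducing CT data to the 8-bit JPEGs a B/S PACS viewer serves.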
NASA Astrophysics Data System (ADS)
Hayat, Khizar; Puech, William; Gesquière, Gilles
2010-04-01
We propose an adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy to integrate disparate data, needed for a typical 3-D visualization, into a single JPEG2000-format file. JPEG2000 encoding provides a standard format on one hand and the needed multiresolution for scalability on the other. The method has the potential of being imperceptible and robust at the same time. While spread spectrum (SS) methods are known for the high robustness they offer, our data-hiding strategy is also removable, which ensures the highest possible visualization quality. The SS embedding of the discrete wavelet transform (DWT)-domain depth map is carried out in the transform-domain YCrCb components of the JPEG2000 coding stream, just after the DWT stage. To maintain synchronization, the embedding takes into account the correspondence of subbands. Since security is not the immediate concern, we are at liberty with the strength of embedding. This permits us to increase the robustness and make our method reversible. To estimate the maximum tolerable error in the depth map according to a given viewpoint, a human visual system (HVS)-based psychovisual analysis is also presented.
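The flavour of DWT-domain additive embedding can be sketched in one dimension with a single-level Haar transform (a toy stand-in for the paper's JPEG2000 pipeline; the signal, payload bits and strength are illustrative):

```python
def haar_fwd(x):
    """Single-level 1-D Haar DWT: returns (approximation, detail)."""
    approx = [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2.0 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inv(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])      # perfect reconstruction
    return out

def embed(detail, bits, strength):
    """Additive SS-style embedding of +/-1 chips into detail coefficients."""
    return [d + strength * (1 if b else -1) for d, b in zip(detail, bits)]

x = [10.0, 12.0, 8.0, 9.0, 20.0, 18.0, 4.0, 4.0]
a, d = haar_fwd(x)
marked = haar_inv(a, embed(d, [1, 0, 1, 0], strength=0.5))

# Removable detection: with the original detail coefficients known,
# each bit is the sign of the coefficient shift.
recovered = [1 if dd > d0 else 0 for dd, d0 in zip(haar_fwd(marked)[1], d)]
print(recovered)  # → [1, 0, 1, 0]
```

Because the transform is perfectly invertible, the shift can also be subtracted out again, which is the removability property the abstract emphasises.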
Ames Stereo Pipeline for Operation IceBridge
NASA Astrophysics Data System (ADS)
Beyer, R. A.; Alexandrov, O.; McMichael, S.; Fong, T.
2017-12-01
We are using the NASA Ames Stereo Pipeline to process Operation IceBridge Digital Mapping System (DMS) images into terrain models and to align them with the simultaneously acquired LIDAR data (ATM and LVIS). The expected outcome is to create a contiguous, high resolution terrain model for each flight that Operation IceBridge has flown during its eight year history of Arctic and Antarctic flights. There are some existing terrain models in the NSIDC repository that cover 2011 and 2012 (out of the total period of 2009 to 2017), which were made with the Agisoft Photoscan commercial software. Our open-source stereo suite has been verified to create terrains of similar quality. The total number of images we expect to process is around 5 million. There are numerous challenges with these data: accurate determination and refinement of camera pose when the images were acquired based on data logged during the flights and/or using information from existing orthoimages, aligning terrains with little or no features, images containing clouds, JPEG artifacts in input imagery, inconsistencies in how data was acquired/archived over the entire period, not fully reliable camera calibration files, and the sheer amount of data. We will create the majority of terrain models at 40 cm/pixel with a vertical precision of 10 to 20 cm. In some circumstances when the aircraft was flying higher than usual, those values will get coarser. We will create orthoimages at 10 cm/pixel (with the same caveat that some flights are at higher altitudes). These will differ from existing orthoimages by using the underlying terrain we generate rather than some pre-existing very low-resolution terrain model that may differ significantly from what is on the ground at the time of IceBridge acquisition. The results of this massive processing will be submitted to the NSIDC so that cryosphere researchers will be able to use these data for their investigations.
NASA Tech Briefs, September 2008
NASA Technical Reports Server (NTRS)
2008-01-01
Topics covered include: Nanotip Carpets as Antireflection Surfaces; Nano-Engineered Catalysts for Direct Methanol Fuel Cells; Capillography of Mats of Nanofibers; Directed Growth of Carbon Nanotubes Across Gaps; High-Voltage, Asymmetric-Waveform Generator; Magic-T Junction Using Microstrip/Slotline Transitions; On-Wafer Measurement of a Silicon-Based CMOS VCO at 324 GHz; Group-III Nitride Field Emitters; HEMT Amplifiers and Equipment for their On-Wafer Testing; Thermal Spray Formation of Polymer Coatings; Improved Gas Filling and Sealing of an HC-PCF; Making More-Complex Molecules Using Superthermal Atom/Molecule Collisions; Nematic Cells for Digital Light Deflection; Improved Silica Aerogel Composite Materials; Microgravity, Mesh-Crawling Legged Robots; Advanced Active-Magnetic-Bearing Thrust- Measurement System; Thermally Actuated Hydraulic Pumps; A New, Highly Improved Two-Cycle Engine; Flexible Structural-Health-Monitoring Sheets; Alignment Pins for Assembling and Disassembling Structures; Purifying Nucleic Acids from Samples of Extremely Low Biomass; Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery; UV-Resistant Non-Spore-Forming Bacteria From Spacecraft-Assembly Facilities; Hard-X-Ray/Soft-Gamma-Ray Imaging Sensor Assembly for Astronomy; Simplified Modeling of Oxidation of Hydrocarbons; Near-Field Spectroscopy with Nanoparticles Deposited by AFM; Light Collimator and Monitor for a Spectroradiometer; Hyperspectral Fluorescence and Reflectance Imaging Instrument; Improving the Optical Quality Factor of the WGM Resonator; Ultra-Stable Beacon Source for Laboratory Testing of Optical Tracking; Transmissive Diffractive Optical Element Solar Concentrators; Delaying Trains of Short Light Pulses in WGM Resonators; Toward Better Modeling of Supercritical Turbulent Mixing; JPEG 2000 Encoding with Perceptual Distortion Control; Intelligent Integrated Health Management for a System of Systems; Delay Banking for Managing Air Traffic; and 
Spline-Based Smoothing of Airfoil Curvatures.
Chen, Yunjin; Pock, Thomas
2017-06-01
Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters (i.e., linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss based approach. We call this approach TNRD-Trainable Nonlinear Reaction Diffusion. The TNRD approach is applicable for a variety of image restoration tasks by incorporating appropriate reaction force. We demonstrate its capabilities with three representative applications, Gaussian image denoising, single image super resolution and JPEG deblocking. Experiments show that our trained nonlinear diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, thus are highly efficient. Moreover, they are also well-suited for parallel computation on GPUs, which makes the inference procedure extremely fast.
Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K.
2017-01-01
Introduction: The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Methods: Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. Results: HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. Conclusion: This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output. PMID:28966838
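Why such adjustments shift scores can be seen with a toy threshold-based counter (my own illustration, not Visiopharm's algorithm; all pixel values and the threshold are invented): a global brightness increase moves pixels across a fixed stain threshold and lowers the measured positive fraction, mirroring the reported drop in HER2 scores with increased illumination.

```python
def adjust(pixels, brightness=0, contrast=1.0):
    """Simple linear brightness/contrast: p' = contrast * p + brightness,
    clipped to the 8-bit range."""
    return [min(255, max(0, round(contrast * p + brightness))) for p in pixels]

def positive_fraction(pixels, threshold=128):
    """Toy 'scoring': fraction of pixels darker than a stain threshold."""
    return sum(1 for p in pixels if p < threshold) / len(pixels)

roi = [90, 110, 120, 130, 150, 200, 220, 240]   # invented ROI grey levels
base = positive_fraction(roi)                    # 0.375 on this toy ROI
brighter = positive_fraction(adjust(roi, brightness=30))
print(base, brighter)  # brightening pushes pixels above the threshold
```

The same mechanism works in reverse for contrast: stretching values around the threshold changes which pixels count as positive, hence the under- or over-scoring the study warns about.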
Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station
NASA Technical Reports Server (NTRS)
Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott
2008-01-01
Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) is the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is a first-generation THI, there are inherent limitations in tissue penetration. As a breakthrough technology, HI extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. Measurements in US images using HI showed improved consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of first-generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized with cardiac imaging on board the ISS in all future space missions.
Acquisition and Post-Processing of Immunohistochemical Images.
Sedgewick, Jerry
2017-01-01
Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
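The flatfield correction mentioned twice above is conventionally computed by dividing the raw image by a flatfield reference (an image of uniform illumination) and rescaling by the reference's mean so overall intensity is preserved. A minimal sketch on flattened pixel lists, with invented values:

```python
def flatfield_correct(raw, flat):
    """Correct uneven illumination: divide by the flatfield image and
    rescale by its mean so overall intensity is preserved."""
    mean_flat = sum(flat) / len(flat)
    return [r / f * mean_flat for r, f in zip(raw, flat)]

# A uniform specimen seen through uneven illumination (flat ramps 0.5 -> 1.0):
flat = [0.5, 0.75, 1.0, 0.75]        # flatfield reference (normalised gain)
raw = [100 * f for f in flat]        # what the camera records
print(flatfield_correct(raw, flat))  # → [75.0, 75.0, 75.0, 75.0]
```

After correction the uniform specimen is rendered uniformly, which is exactly what the visual vignetting fix in Photoshop/ImageJ/GIMP approximates by hand.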
Turuk, Mousami; Dhande, Ashwin
2018-04-01
The recent innovations in information and communication technologies have appreciably changed the panorama of health information systems (HIS). These advances provide new means to process, handle, and share medical images, and also heighten medical image security issues in terms of confidentiality, reliability, and integrity. Digital watermarking has emerged as a new era that offers acceptable solutions to the security issues in HIS. Texture is a significant feature for detecting embedding sites in an image, which further leads to substantial improvement in robustness. However, from the perspective of digital watermarking, this feature has received meager attention in the reported literature. This paper exploits the texture property of an image and presents a novel hybrid texture-quantization-based approach for reversible multiple watermarking. The watermarked image quality has been assessed using peak signal-to-noise ratio (PSNR), the structural similarity measure (SSIM), and the universal image quality index (UIQI), and the obtained results are superior to the state-of-the-art methods. The algorithm has been evaluated on a variety of medical imaging modalities (CT, MRA, MRI, US) and robustness has been verified considering various image processing attacks, including JPEG compression. The proposed scheme offers additional security using repetitive embedding of BCH-encoded watermarks and an ADM-encrypted ECG signal. Experimental results achieved a maximum of 22,616 bits hiding capacity with a PSNR of 53.64 dB.
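PSNR, the headline quality figure here, is defined as 10·log10(peak²/MSE) in dB; a minimal sketch on flattened 8-bit pixel lists (the example values are invented):

```python
import math

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images
    given as flattened pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    if mse == 0:
        return float('inf')          # identical images
    return 10.0 * math.log10(peak * peak / mse)

a = [100, 120, 130, 140]
b = [101, 119, 131, 139]             # every pixel off by 1 -> MSE = 1
print(round(psnr(a, b), 2))          # → 48.13
```

For context, the paper's 53.64 dB corresponds to an MSE well under 1 grey level squared, i.e. distortion far below what a single-level shift of every pixel would cause.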
Realisation and robustness evaluation of a blind spatial domain watermarking technique
NASA Astrophysics Data System (ADS)
Parah, Shabir A.; Sheikh, Javaid A.; Assad, Umer I.; Bhat, Ghulam M.
2017-04-01
A blind digital image watermarking scheme based on the spatial domain is presented and investigated in this paper. The watermark has been embedded in intermediate significant bit planes, besides the least significant bit plane, at address locations determined by a pseudorandom address vector (PAV). Embedding via the PAV makes it difficult for an adversary to locate the watermark and hence adds to the security of the system. The scheme has been evaluated to ascertain the spatial locations that are robust to various image processing and geometric attacks: JPEG compression, additive white Gaussian noise, salt-and-pepper noise, filtering, and rotation. The experimental results reveal an interesting fact: for all the above-mentioned attacks other than rotation, the higher the bit plane in which the watermark is embedded, the more robust the system. Further, the perceptual quality of the watermarked images obtained with the proposed system has been compared with some state-of-the-art watermarking techniques. The proposed technique outperforms the techniques under comparison, even when compared with the worst-case peak signal-to-noise ratio obtained in our scheme.
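The embedding strategy can be sketched as follows (a toy stand-in for the paper's pseudorandom address vector, using Python's seeded PRNG; the key, bit plane and pixel values are all illustrative):

```python
import random

def embed_bits(pixels, bits, key, plane=0):
    """Embed watermark bits in bit plane `plane` at pseudorandom pixel
    addresses derived from a secret key (toy stand-in for the PAV)."""
    out = list(pixels)
    rng = random.Random(key)
    addresses = rng.sample(range(len(pixels)), len(bits))
    for addr, bit in zip(addresses, bits):
        out[addr] = (out[addr] & ~(1 << plane)) | (bit << plane)
    return out

def extract_bits(pixels, n_bits, key, plane=0):
    """Blind extraction: regenerate the same addresses from the key."""
    rng = random.Random(key)
    addresses = rng.sample(range(len(pixels)), n_bits)
    return [(pixels[addr] >> plane) & 1 for addr in addresses]

cover = [200, 13, 77, 145, 96, 33, 250, 18]      # toy 8-bit cover pixels
marked = embed_bits(cover, [1, 0, 1, 1], key=42, plane=2)  # an ISB plane
print(extract_bits(marked, 4, key=42, plane=2))  # → [1, 0, 1, 1]
```

Setting plane=2 targets an intermediate significant bit, which per the paper's results survives value-distorting attacks better than the LSB at a modest perceptual cost (a change of at most 2^plane per pixel).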
Dimensionality of visual complexity in computer graphics scenes
NASA Astrophysics Data System (ADS)
Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce
2008-02-01
How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the subject responses using multidimensional scaling of pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be very correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
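A compression-size complexity metric of the kind compared in the study can be approximated with any general-purpose compressor; a sketch using zlib on raw pixel bytes as a stand-in for JPEG file size (the two toy "image regions" are invented):

```python
import zlib

def compression_complexity(pixels):
    """Proxy for visual complexity: compressed size of the raw pixel
    bytes (zlib here, standing in for the JPEG size used in the paper)."""
    return len(zlib.compress(bytes(pixels)))

flat = [128] * 1024                                  # uniform region
noisy = [(i * 97 + 31) % 256 for i in range(1024)]   # high-entropy region
print(compression_complexity(flat) < compression_complexity(noisy))  # → True
```

The metric captures statistical redundancy, not perceptual organisation, which is consistent with the paper's finding that such computable measures correlate poorly with judged scene complexity.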
NASA Astrophysics Data System (ADS)
2004-06-01
Largest Census Of X-Ray Galaxy Clusters Provides New Constraints on Dark Matter [1]. Clusters of galaxies. Clusters of galaxies are very large building blocks of the Universe. These gigantic structures contain hundreds to thousands of galaxies and, less visible but equally interesting, an additional amount of "dark matter" whose origin still defies astronomers, with a total mass of thousands of millions of millions times the mass of our Sun. The comparatively nearby Coma cluster, for example, contains thousands of galaxies and measures more than 20 million light-years across. Another well-known example is the Virgo cluster at a distance of about 50 million light-years, still stretching over an angle of more than 10 degrees in the sky! Clusters of galaxies form in the densest regions of the Universe. As such, they perfectly trace the backbone of the large-scale structures in the Universe, in the same way that lighthouses trace a coastline. Studies of clusters of galaxies therefore tell us about the structure of the enormous space in which we live. The REFLEX survey. ESO PR Photo 18a/04: Galaxy Cluster RXCJ1206.2-0848 (visible and X-ray). Caption: PR Photo 18a shows the very massive distant cluster of galaxies RXCJ1206.2-0848, newly discovered during the REFLEX project and located at a redshift of z = 0.44 [3]. The contours indicate the X-ray surface brightness distribution. Most of the yellowish galaxies are cluster members. A gravitationally lensed galaxy with a distorted, very elongated image is seen just right of the centre. The image was obtained with the EFOSC multi-mode instrument on the ESO 3.6-m telescope at the La Silla Observatory (Chile).
ESO PR Photo 18b/04: Galaxy cluster RXCJ1131.9-1955. Caption: PR Photo 18b displays the very massive galaxy cluster RXCJ1131.9-1955 at redshift z = 0.306 [3] in a very rich galaxy field with two major concentrations. It was originally found by George Abell and designated "Abell 1300". The image was obtained with the ESO/MPG 2.2-m telescope and the WFI camera at La Silla. ESO PR Photo 18c/04: Galaxy Cluster RXCJ0937.9-2020. Caption: PR Photo 18c/04 shows the much smaller, more nearby galaxy group RXCJ0937.9-2020 at a redshift of z = 0.034 [3]. It is dominated by the massive elliptical galaxy seen at the top of the image. The photo covers only the southern part of this group. Such galaxy groups, with typical masses of a few 10¹³ solar masses, constitute the smallest objects included in the REFLEX catalogue. This image was obtained with the FORS1 multi-mode instrument on the ESO 8.2-m VLT Antu telescope. ESO PR Video Clip 05/04: Galaxy Clusters in the REFLEX Catalogue (3-D visualization). Caption: ESO PR Video Clip 05/04 illustrates the three-dimensional distribution of the galaxy clusters identified in the ROSAT All-Sky Survey in the northern and southern sky. In addition to the galaxy clusters in the REFLEX catalogue, this movie also contains those identified during the ongoing, deeper search for X-ray clusters: the extension of the southern REFLEX Survey and the northern complementary survey that is conducted by the MPE team at the Calar Alto observatory and at US observatories in collaboration with John Huchra and coworkers at the Harvard-Smithsonian Center for Astrophysics. In total, more than 1400 X-ray bright galaxy clusters have been found to date.
(Prepared by Ferdinand Jamitzky.) Following this idea, a European team of astronomers [2], under the leadership of Hans Böhringer (MPE, Garching, Germany), Luigi Guzzo (INAF, Milano, Italy), Chris A. Collins (JMU, Liverpool), and Peter Schuecker (MPE, Garching) has embarked on a decade-long study of these gargantuan structures, trying to locate the most massive clusters of galaxies. Since about one-fifth of the optically invisible mass of a cluster is in the form of a diffuse, very hot gas with a temperature of the order of several tens of millions of degrees, clusters of galaxies produce powerful X-ray emission. They are therefore best discovered by means of X-ray satellites. For this fundamental study, the astronomers thus started by selecting candidate objects using data from the X-ray Sky Atlas compiled by the German ROSAT satellite survey mission. This was only the beginning - then followed a lot of tedious work: making the final identification of these objects in visible light and measuring the distance (i.e., redshift [3]) of the cluster candidates. The determination of the redshift was done by means of observations with several telescopes at the ESO La Silla Observatory in Chile, from 1992 to 1999. The brighter objects were observed with the ESO 1.5-m and the ESO/MPG 2.2-m telescopes, while for the more distant and fainter objects, the ESO 3.6-m telescope was used. Carried out at these telescopes, the 12-year-long programme is known to astronomers as the REFLEX (ROSAT-ESO Flux Limited X-ray) Cluster Survey. It has now been concluded with the publication of a unique catalogue with the characteristics of the 447 brightest X-ray clusters of galaxies in the southern sky. Among these, more than half the clusters were discovered during this survey. 
Constraining the dark matter content ESO PR Photo 18d/04 Constraints on Cosmological Parameters [Preview - JPEG: 400 x 572 pix - 37k] [Normal - JPEG: 800 x 1143 pix - 265k] Caption: PR Photo 18d demonstrates the current observational constraints on the cosmic density of all matter including dark matter (Ωm) and the dark energy (ΩΛ) relative to the density of a critical-density Universe (i.e., an expanding Universe which approaches zero expansion asymptotically after an infinite time and has a flat geometry). All three observational tests by means of supernovae (green), the cosmic microwave background (blue) and galaxy clusters converge at a Universe around Ωm ~ 0.3 and ΩΛ ~ 0.7. The dark red region for the galaxy cluster determination corresponds to 95% certainty (2-sigma statistical deviation) when assuming good knowledge of all other cosmological parameters, and the light red region assumes a minimum knowledge. For the supernovae and WMAP results, the inner and outer regions correspond to 68% (1-sigma) and 95% certainty, respectively. References: Schuecker et al. 2003, A&A, 398, 867 (REFLEX); Tonry et al. 2003, ApJ, 594, 1 (supernovae); Riess et al. 2004, ApJ, 607, 665 (supernovae) Galaxy clusters are far from being evenly distributed in the Universe. Instead, they tend to conglomerate into even larger structures, "super-clusters". Thus, from stars which gather in galaxies, galaxies which congregate in clusters and clusters tying together in super-clusters, the Universe shows structuring on all scales, from the smallest to the largest ones. This is a relic of the very early (formation) epoch of the Universe, the so-called "inflationary" period. At that time, only a minuscule fraction of one second after the Big Bang, the tiny density fluctuations were amplified and over the eons, they gave birth to the much larger structures. 
Because of the link between the first fluctuations and the giant structures now observed, the unique REFLEX catalogue - the largest of its kind - allows astronomers to put considerable constraints on the content of the Universe, and in particular on the amount of dark matter that is believed to pervade it. Rather interestingly, these constraints are totally independent of all other methods so far used to establish the existence of dark matter, such as the study of very distant supernovae (see e.g. ESO PR 21/98) or the analysis of the Cosmic Microwave Background (e.g. the WMAP satellite). In fact, the new REFLEX study is very complementary to the above-mentioned methods. The REFLEX team concludes that the mean density of the Universe is in the range 0.27 to 0.43 times the "critical density", providing the strongest constraint on this value up to now. When combined with the latest supernovae study, the REFLEX result implies that, whatever the nature of the dark energy is, it closely mimics a Universe with Einstein's cosmological constant. A giant puzzle The REFLEX catalogue will also serve many other useful purposes. With it, astronomers will be able to better understand the detailed processes that contribute to the heating of the gas in these clusters. It will also be possible to study the effect of the environment of the cluster on each individual galaxy. Moreover, the catalogue is a good starting point to look for giant gravitational lenses, in which a cluster acts as a giant magnifying lens, effectively allowing observations of the faintest and remotest objects that would otherwise escape detection with present-day telescopes. But, as Hans Böhringer says: "Perhaps the most important advantage of this catalogue is that the properties of each single cluster can be compared to the entire sample. 
This is the main goal of surveys: assembling the pieces of a gigantic puzzle to build the grander view, where every single piece then gains a new, more comprehensive meaning." More information The results presented in this Press Release will appear in the research journal Astronomy and Astrophysics ("The ROSAT-ESO Flux Limited X-ray (REFLEX) Galaxy Cluster Survey. V. The cluster catalogue" by H. Böhringer et al.; astro-ph/0405546). See also the REFLEX website.
Passive forensics for copy-move image forgery using a method based on DCT and SVD.
Zhao, Jie; Guo, Jichang
2013-12-10
As powerful image editing tools are widely used, the demand for identifying the authenticity of an image is much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery lack robustness to common post-processing operations and fail to precisely locate the tampered region, especially when the image contains large similar or flat regions. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block; the largest singular value of each sub-block is used as a feature, reducing the dimension of each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched using a predefined shift-frequency threshold. Experimental results demonstrate that the proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when the image has been distorted by Gaussian blurring, AWGN, JPEG compression, or mixtures of these operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
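The block-matching pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block size, sub-block size, quantisation step, and minimum-shift rule are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, so dct2(X) = C @ X @ C.T."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def block_features(img, B=16, S=4, q=16.0):
    """For each BxB overlapping block: 2-D DCT, coarse quantisation by step q,
    then the largest singular value of each SxS sub-block forms a short,
    relatively robust feature vector (B, S, q are illustrative choices)."""
    C = dct_matrix(B)
    H, W = img.shape
    feats = []
    for y in range(H - B + 1):
        for x in range(W - B + 1):
            blk = img[y:y + B, x:x + B].astype(float)
            qc = np.round((C @ blk @ C.T) / q)        # quantised DCT coefficients
            vec = tuple(np.linalg.svd(qc[i:i + S, j:j + S], compute_uv=False)[0]
                        for i in range(0, B, S) for j in range(0, B, S))
            feats.append(((y, x), vec))
    return feats

def find_duplicates(feats, min_shift=8):
    """Lexicographically sort the feature vectors, then match adjacent identical
    vectors whose spatial offset exceeds a minimum shift (this skips trivially
    overlapping neighbours; a real detector would also vote on shift frequency)."""
    feats = sorted(feats, key=lambda t: t[1])
    pairs = []
    for (p1, v1), (p2, v2) in zip(feats, feats[1:]):
        if v1 == v2:
            dy, dx = p2[0] - p1[0], p2[1] - p1[1]
            if dy * dy + dx * dx >= min_shift * min_shift:
                pairs.append((p1, p2))
    return pairs
```

On an image where one region is an exact copy of another, the copied block pair surfaces as adjacent identical feature vectors after the sort; the quantisation step is what gives the features some tolerance to mild post-processing.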
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
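The forward-adaptive AR prediction at the heart of MFCELP can be illustrated in one dimension. This is a simplified stand-in, not the paper's codec: a least-squares AR fit on a single signal block replaces the 3-D macroblock adaptation, and the residual it returns is the quantity a CELP-style excitation codebook would then approximate.

```python
import numpy as np

def ar_predict_block(x, order=2):
    """Fit an AR(order) model to one signal block by least squares
    (forward adaptation: coefficients are derived from, and would be
    transmitted with, this block) and return the prediction residual."""
    n = len(x)
    # Design matrix: column k holds x[t-k-1] for t = order .. n-1.
    A = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ coeffs
    return coeffs, residual
```

For a signal that really is autoregressive, the residual energy is far smaller than the signal energy, which is why coding the excitation instead of the samples pays off.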
BOREAS Level-0 C-130 Aerial Photography
NASA Technical Reports Server (NTRS)
Newcomer, Jeffrey A.; Dominguez, Roseanne; Hall, Forrest G. (Editor)
2000-01-01
For BOReal Ecosystem-Atmosphere Study (BOREAS), C-130 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The NASA C-130 Earth Resources aircraft can accommodate two mapping cameras during flight, each of which can be fitted with 6- or 12-inch focal-length lenses and black-and-white, natural-color, or color-IR film, depending upon requirements. Both cameras were often in operation simultaneously, although sometimes only the lower resolution camera was deployed. When both cameras were in operation, the higher resolution camera was often used in a more limited fashion. The acquired photography covers the period of April to September 1994. The aerial photography was delivered as rolls of large format (9 x 9 inch) color transparency prints, with imagery from multiple missions (hundreds of prints) often contained within a single roll. A total of 1533 frames were collected from the C-130 platform for BOREAS in 1994. Note that the level-0 C-130 transparencies are not contained on the BOREAS CD-ROM set. An inventory file is supplied on the CD-ROM to inform users of all the data that were collected. Some photographic prints were made from the transparencies. In addition, BORIS staff digitized a subset of the transparencies and stored the images in JPEG format. The CD-ROM set contains a small subset of the collected aerial photography that was digitally scanned and stored as JPEG files for most tower and auxiliary sites in the NSA and SSA. See Section 15 for information about how to acquire additional imagery.
DCT-based cyber defense techniques
NASA Astrophysics Data System (ADS)
Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer
2015-09-01
With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content, in order to attack the end user. Most of the attack algorithms are robust to basic image processing techniques such as filtering, compression, noise addition, etc. Hence, in this article two novel, real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain, and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.
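The DCT-domain thresholding idea can be illustrated with a toy filter. This sketch is our own illustration, not the article's tuned algorithm: the threshold value and the DC-preserving rule are assumed choices, and a real defense would operate per 8x8 JPEG block with frequency-dependent thresholds.

```python
import numpy as np

def dct_threshold_defense(coeffs, tau=2.0):
    """Toy DCT-domain threshold defense.

    Hidden payloads are often embedded as low-amplitude perturbations of
    mid/high-frequency DCT coefficients; zeroing coefficients whose magnitude
    falls below a threshold tau destroys such modifications while changing
    perceptual quality very little. tau=2.0 is an illustrative value."""
    out = coeffs.astype(float).copy()
    mask = np.abs(out) < tau
    mask[0, 0] = False            # never touch the DC coefficient
    out[mask] = 0.0
    return out
```

The trade-off the article measures with PSNR/SSIM is visible here: raising tau removes more of the embedded signal but also discards more legitimate image detail.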
Comparative performance between compressed and uncompressed airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh
2008-04-01
The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is assessing the highest image data compression rate that can be afforded without loss of image quality for warfighters in the loop or loss of performance in the near-real-time mine detection algorithm. The JPEG-2000 compression standard is used to perform data compression. Both lossless and lossy compressions are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core algorithm baseline in airborne mine and minefield detection to identify potential individual targets across different mine types, minefields, and terrains, is used to compare the mine detection performance. This paper presents the compression scheme and compares detection performance results between compressed and uncompressed imagery for various levels of compression. The compression efficiency is evaluated, and its dependence upon different backgrounds and other factors is documented and presented using multi-spectral data.
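The RX detector named above is a standard Mahalanobis-distance anomaly test on the spectral bands. A minimal global-background sketch follows; the regularisation constant is an illustrative stability choice, and operational detectors typically estimate the background locally rather than globally.

```python
import numpy as np

def rx_scores(cube):
    """Global RX (Reed-Xiaoli) anomaly detector.

    cube: (H, W, bands) multispectral image. Each pixel's score is its squared
    Mahalanobis distance from the global background mean and covariance; large
    scores flag spectrally anomalous pixels such as mine-like targets."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov + 1e-6 * np.eye(B))   # small ridge for stability
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, inv, d)  # per-pixel d^T inv d
    return scores.reshape(H, W)
```

Because the score depends only on second-order background statistics, comparing RX score maps from uncompressed and JPEG-2000-decompressed cubes is a natural way to quantify how compression perturbs detection, which is the comparison the paper reports.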
Calderon, Karynna; Dadisman, Shawn V.; Kindinger, Jack G.; Flocks, James G.; Wiese, Dana S.
2003-01-01
This archive consists of marine seismic reflection profile data collected in four survey areas from southeast of Charleston Harbor to the mouth of the North Edisto River of South Carolina. These data were acquired June 26 - July 1, 1996, aboard the R/V G.K. Gilbert. Included here are data in a variety of formats including binary, American Standard Code for Information Interchange (ASCII), Hyper Text Markup Language (HTML), Portable Document Format (PDF), Rich Text Format (RTF), Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) images, and shapefiles. Binary data are in Society of Exploration Geophysicists (SEG) SEG-Y format and may be downloaded for further processing or display. Reference maps and GIF images of the profiles may be viewed with a web browser. The Geographic Information Systems (GIS) map documents provided were created with Environmental Systems Research Institute (ESRI) GIS software ArcView 3.2 and 8.1.
NASA Astrophysics Data System (ADS)
2001-12-01
VLT ISAAC Looks for Young Stars in the Famous "Pillars of Creation" Summary Through imaging at infrared wavelengths, evidence has been found for recent star formation in the so-called "Pillars of Creation" in the Eagle Nebula (also known as Messier 16), made famous when the NASA/ESA Hubble Space Telescope (HST) obtained spectacular visible-wavelength images of this object in 1995. Those huge pillars of gas and dust are being sculpted and illuminated by bright and powerful high-mass stars in the nearby NGC 6611 young stellar cluster. The Hubble astronomers suggested that perhaps even younger stars were forming inside. Using the ISAAC instrument on the VLT 8.2-m ANTU telescope at the ESO Paranal Observatory, European astronomers have now made a wide-field infrared image of the Messier 16 region with excellent spatial resolution, enabling them to penetrate the obscuring dust and search for light from newly born stars. Two of the three pillars are seen to have very young, relatively massive stars in their tips. Another dozen or so lower-mass stars seem to be associated with the small "evaporating gaseous globules (EGGs)" that the Hubble astronomers had discovered scattered over the surface of the pillars. These findings bring new evidence to several key questions about how stars are born. Was the formation of these new stars triggered as the intense ultraviolet radiation from the NGC 6611 stars swept over the pillars, or were they already there? Will the new stars be prematurely cut off from the surrounding gas cloud, thus stunting their growth? If the new stars have disks of gas and dust around them, will they be destroyed before they have time to form planetary systems? PR Photo 37a/01: Full wide-field ISAAC image of the Eagle Nebula. PR Photo 37b/01: Close-up view of the ISAAC image, showing the famous "Pillars of Creation". PR Photo 37c/01: Enlargement of the head of Column 1. PR Photo 37d/01: Enlargement of the head of Column 2. 
PR Photo 37e/01: Enlargement of the head of Column 4. PR Video Clip 08a/01: A "dissolve" between the Hubble visible wavelength and VLT infrared views of the pillars. PR Video Clip 08b/01: Hubble and VLT views of the head of Column 1. The famous "Pillars of Creation" Hundreds of millions of people all over the world have admired those towering "Pillars of Creation" in Messier 16 (M16), also known as the Eagle Nebula, and located in the southern constellation of Serpens. It is one of the most famous NASA/ESA Hubble Space Telescope (HST) images - released in 1995, it has become an icon of modern astronomy, giving the viewer an extraordinary three-dimensional impression of scuba-diving through some leviathan undersea forest. These light-years-long columns of gas and dust are being simultaneously sculpted, illuminated, and destroyed by the intense ultraviolet light from massive stars in the adjacent NGC 6611 young stellar cluster. Within a few million years, a mere twinkling of the universal eye, they will be gone forever. But before they are, they have a chance to leave a longer-lasting legacy: a whole new generation of stars may be forming within them. Their formation may have been triggered by the immense power of the NGC 6611 stars, or perhaps they had already started to form quietly earlier on, only to be suddenly subjected to the ravages of an ionising storm front. The real question is then: are there or are there not any newborn stars inside those "Pillars of Creation"? The Hubble Space Telescope view When the HST turned to photograph M16 in 1995, it did so using its visible wavelength camera, WFPC-2. The Hubble astronomers [1] took data through three narrow-bandpass optical filters selecting emission lines from the ionised gas they knew to be present in the region. In doing so, they obtained an extraordinarily sharp view of the well-known pillars of cold gas and dust that are sometimes referred to as "elephant trunks" for obvious reasons. 
Their image showed the light-years-long pillars partly silhouetted against a bright nebular background, and revealed in exquisite detail the surface structure of the pillars as they are being transformed by ultraviolet radiation from massive, hot stars in the NGC 6611 cluster which lies just outside the area covered by the Hubble image. A surprising finding made by the Hubble astronomers was that the pillars are covered with a large number (they counted 73) of small bumps and protrusions which in a few cases are almost completely detached from the pillars. With a typical angular size of only 0.5 arcsec, those objects had not been seen in previous ground-based photographs, and it took the exceptional acuity of Hubble to reveal them. The astronomers dubbed these objects "evaporating gaseous globules", shortened to "EGGs". They noted that one or two of these EGGs appeared to have stars right at their tips, and they suggested that perhaps the EGGs are formed as the advancing front of ionised gas driven by the hot NGC 6611 stars is slowed down by the presence of dense knots of gas and dust within the larger pillars. Within those knots then, they hypothesised a population of extremely young stars, still in the womb of their natal cloud but soon to be rudely exposed to a much harsher outside world. However, there was a problem: since their images were taken at visible wavelengths which are relatively easily absorbed by the dust in the EGGs, the Hubble astronomers could not actually see inside the EGGs to test their theory. The VLT looks inside the "Pillars" What was needed then was a survey of the M16 region made at longer wavelengths and penetrating much more deeply through the dense dust. Such a survey should be sensitive enough to detect faint, low-mass young stars deeply embedded in the dusty EGGs. It should have excellent sub-arcsec angular resolution to unambiguously identify an object with a given EGG. 
And it should cover a wide field-of-view to probe all of the pillars and their surroundings. Over the past twenty years, a number of surveys of M16 have been made at near-infrared, mid-infrared, and millimetre wavelengths. Unfortunately, none of them had this perfect combination of characteristics to answer the crucial question of whether or not there is a population of young stars inside the Eagle's EGGs. However, this past austral autumn (April and May 2001), European astronomers [2] were able to image the Eagle Nebula at near-infrared wavelengths, using the infrared multi-mode ISAAC instrument on the 8.2-m VLT ANTU telescope at ESO's Paranal Observatory in Chile. By specifying that the observations be carried out in so-called "service mode", they ensured that the on-site ESO team could undertake their pre-defined programme under the necessary excellent observing conditions. The results were well worth the effort! The ISAAC near-infrared images cover a 9 x 9 arcmin region, i.e., fourteen times the area seen in the famous Hubble visible image, in three broad-band colours and with sufficient sensitivity to detect young stars of all masses and - most importantly - with an image sharpness as good as 0.35 arcsec. Although this is still some way from the diffraction-limited performance of 0.07 arcsec or better that is now achieved with the adaptive optics system NAOS/CONICA on the VLT telescope (cf. ESO PR 25/01), the ISAAC data cover a much wider field-of-view and, vitally, with enough image resolution to probe deep into the individual EGGs. 
The ISAAC infrared images of Messier 16 ESO PR Photo 37a/01 [Preview - JPEG: 400 x 471 pix - 136k] [Normal - JPEG: 800 x 942 pix - 1.2M] [HiRes - JPEG: 3000 x 3532 pix - 12.9M] Caption: ESO PR Photo 37a/01 is a three-colour composite mosaic image of the Eagle Nebula (Messier 16), based on 144 individual images obtained with the infrared multi-mode instrument ISAAC on the ESO Very Large Telescope (VLT) at the Paranal Observatory. At the centre, the so-called "Pillars of Creation" can be seen. This wide-field infrared image shows not only the central three pillars but also several others in the same star-forming region, as well as a huge number of stars in front of, in, or behind the Eagle Nebula. The cluster of bright blue stars to the upper right is NGC 6611, home to the massive and hot stars that illuminate the pillars. Technical information about this photo is available below. ESO PR Photo 37b/01 [Preview - JPEG: 400 x 553 pix - 160k] [Normal - JPEG: 800 x 1105 pix - 1.2M] [FullRes - JPEG: 1330 x 1837 pix - 2.7M] Caption: ESO PR Photo 37b/01 shows a zoom into the centre of PR Photo 37a/01, with the infrared view of the columns and their immediate surroundings in more detail. The pillars or columns are numbered 1 to 3 from left to right (east to west). The pillars themselves are less prominent than on the Hubble visible-light image of this region - this is because near-infrared light penetrates the thinner parts of the gas and dust clouds, and only the heads remain opaque. A number of red objects can be seen associated with the pillars: some of these are just background sources seen through the dust, but some are probably real young stars embedded in the pillars. The purple arc near the bottom of the picture is Herbig-Haro object 216, a fast-moving clump of heated gas emanating from a young star (see also PR Photo 37e/01). Technical information about this photo is available below. 
ESO PR Video Clip 08a/01: HST and VLT images of the Eagle Nebula (52 frames/0:02 min) [MPEG Video; 160x120 pix; 3.6Mb] ESO Video Clip 08a/01 shows a sky field similar to that seen in PR Photo 37b/01, "dissolving" back and forth between the Hubble and VLT views, demonstrating the dramatic changes that occur when changing wavelength from the visible to the near-infrared. (It is best played at reduced speed.) The wide-field view of M16 (Photo 37a/01) shows that there is much more to the region than is seen in the Hubble image. The first impression one gets is of an enormous number of stars. Those which are blue in the infrared image are either members of the young NGC 6611 cluster - whose massive stars are concentrated in the upper right (north-west) part of the field - or foreground stars which happen to lie along the line of sight towards M16. Most of the stars are fainter and more yellow. They are ordinary stars behind M16, along the line of sight through the galactic bulge, and are seen through the molecular clouds out of which NGC 6611 formed. Some very red stars are also seen: these are either very young and embedded in gas and dust clouds, or just brighter stars in the background shining through them. Zooming in, Photo 37b/01 shows the region of the pillars covered by the Hubble image and its immediate surroundings. The pillars are still obvious, although appearing less prominent in places as one penetrates the thinner parts, getting closer to the goal of probing inside the pillars. Video Clip 08a/01 shows how this appearance changes in a continuous dissolve between the Hubble visible wavelength view and its VLT infrared equivalent. 
Hunting for new stars in the EGGs ESO PR Photo 37c/01 [Preview - JPEG: 400 x 371 pix - 66k] [Normal - JPEG: 800 x 741 pix - 352k] Caption: ESO PR Photo 37c/01 shows an enlarged view of the head of the largest of the three main pillars, Column 1. The head is almost transparent around the edges at near-infrared wavelengths, but there is still a substantial opaque core which even these near-infrared VLT observations cannot penetrate. The complex bluish nebulosity bisected by a dark lane near the tip is being lit up by the bright yellow star just below it, which appears to be very young and rather massive. Several of the much fainter stars to the right of and below this source are found to be associated with EGGs seen in the Hubble image, and these all have much lower masses. Finally, there is a faint streak of blue light emanating from the tip of EGG 23, one of the darkest parts of Column 1, ending in a blue blob further north. An equal distance to the south of the EGG and off the head, there is another curving blue nebulosity. These features are also seen in the Hubble image, and may be part of a Herbig-Haro jet coming from a young star buried deeply in EGG 23 and invisible in this image. Technical information about this photo is available below. ESO PR Photo 37d/01 [Preview - JPEG: 400 x 362 pix - 75k] [Normal - JPEG: 800 x 724 pix - 372k] Caption: ESO PR Photo 37d/01 shows a similarly enlarged view of the head of Column 2. The bright blue-yellow source embedded in nebulosity near the tip is another young star unseen in the Hubble images: although it appears to be double here, it is in fact just one relatively massive young star surrounded by nebulosity. Technical information about this photo is available below. 
ESO PR Photo 37e/01 [Preview - JPEG: 400 x 365 pix - 112k] [Normal - JPEG: 800 x 729 pix - 536k] Caption: ESO PR Photo 37e/01 shows an enlarged view of the head of Column 4, which lies to the lower-left in Photo 37a/01 and was not covered in the Hubble image. This column is similar to the more familiar ones, but has thus far been less impacted by the massive stars in NGC 6611. The two red nebulosities in the head signpost one or more young stars so deeply embedded that they cannot be seen directly in the VLT infrared image, only indirectly as they illuminate dust around them. One of these sources is thought to be the origin of the Herbig-Haro object HH216 seen in Photo 37a/01 and Photo 37b/01 [3]. Technical information about this photo is available below. ESO PR Video Clip 08b/01: Pillars of Creation in Eagle Nebula (Column 1) (800 frames/0:32 min) [MPEG Video+Audio; 192x144 pix; 4.0M] [MPEG Video+Audio; 384x288 pix; 9.8M] [RealMedia; streaming; 56 kbps] [RealMedia; streaming; 200 kbps] ESO PR Video Clip 08b/01 shows the Hubble and VLT views of the head of Column 1 (cf. Photo 37c/01), with an additional zoom-in. Note that the bright complex reflection nebulosity and its young, massive energy source are completely unseen at visible wavelengths. Photos 37c-e/01 show even further close-ups of the heads of Columns 1 and 2, plus Column 4, seen in the wide-field ISAAC image (Photo 37a/01) towards the lower left (south-east). The young star in the head of Column 1 (Photo 37c/01) is located within a complex reflection nebula, completely unseen at visible wavelengths. From the near-infrared brightness of the star, the astronomers judge it to be more massive than our own Sun and very young (in astronomical terms), perhaps only 100,000 years old. Video Clip 08b/01 allows a direct comparison between the Hubble and VLT views of this region. 
Right at the tip of Column 2 (Photo 37d/01), another young star also illuminates a small reflection nebula, again undetected in the Hubble image. And to the south-east, the head of Column 4 (Photo 37e/01) shows complex red nebulosity which the astronomers take to be the signpost of very young objects, so deeply embedded that they are not directly detected in the VLT images. The present team of astronomers has recently investigated this object [3] and believes it is hiding the driving source of a so-called "Herbig-Haro jet", a speedy outflow of gas that can be seen where it ends in a shock, the bright purple arc at the lower edge of Photo 37b/01. Turning to smaller scales, the astronomers made a very accurate alignment of the Hubble and VLT images, and then examined the location of each EGG, searching for stars within them. This search had to be carried out very carefully, given the small sizes of the EGGs, and also because, once in a while, a perfectly ordinary background star might seem to be aligned with an EGG purely by chance. After completing their search, they found that 11 of the 73 EGGs clearly have stars associated with them. Only one of these had previously been seen in the Hubble images, and another five EGGs were noted as possibly containing stars. Judging from their near-infrared brightness, most of these stars seem to be less massive than our Sun. Interestingly, most of the EGGs with stars are located on Column 1, and roughly half of them right at the tip of the head, not far from the more massive star that illuminates the reflection nebula. This may be evidence for a small cluster of young stars associated with Column 1 which will soon be revealed as the column is eaten away. Even though the remaining 57 EGGs appear to be empty, it is important to note that there may nevertheless be more young stars in the M16 pillars. After all, neither of the bright young stars at the tips of Columns 1 and 2 is related to any of the Hubble EGGs. 
Also, it is clear from the VLT image that parts of the pillars and a few of the EGGs are so dense that they remain completely opaque even at near-infrared wavelengths, and may still be harbouring other new stars. An interesting example is the apparently empty EGG number 23, from which another high-speed Herbig-Haro jet seems to be emerging (Photo 37c/01). Outlook The new VLT infrared image shows that there is now firm evidence for the recent birth of stars in the Eagle Nebula and that at least some of the Eagle's EGGs are fertile, not sterile! A deeper look at even longer wavelengths will be needed to make a complete census of all the star formation in the Eagle Nebula, perhaps using the VLT thermal-infrared camera, VISIR, when it becomes available or, ultimately, less than a decade from now, the infrared-optimised Next Generation Space Telescope (NGST), the NASA/ESA/CSA successor to the HST. At longer wavelengths, observations with the planned Atacama Large Millimeter Array (ALMA) will also be most useful. From images alone, however, it is not possible to tell which came first: the stars or the EGGs? Were those young stars already forming inside dark clouds before the intense ultraviolet radiation of the nearby massive hot stars swept over the pillars? Or did that radiation compress empty clumps in those clouds and trigger the birth of the stars? In either case, those young stars will soon be exposed to the full fury of the ionisation storm as the columns are evaporated. How will their fate have been affected? Ripped prematurely from the cloud, they will be cut off from the reservoir of material from which they grew, and thus may end up smaller than would otherwise be expected. Also, the dense disks of gas and dust known to girdle young stars will suddenly be heated and boiled away by the ultraviolet radiation, as has been seen happening in the Orion Nebula, perhaps preventing the formation of planets around those stars. 
Theoreticians studying these problems now have some new data to work with. Nevertheless, to keep things in perspective, it is important to remember that the towering pillars cover only a small fraction of the Eagle Nebula. While a few tens of new stars may be forming in the pillars today, at least a thousand young stars were born in the adjacent NGC 6611 cluster within the last few million years, including the massive stars themselves. The story of the formation of that cluster may be something else altogether, but perhaps just as spectacular. More information The research described in this press release is presented in more detail in a research paper ("The Eagle's EGGs: fertile or sterile?"), to be submitted to the European research journal "Astronomy & Astrophysics Letters". The work has been carried out under the auspices of the European Commission Research Training Network "The Formation and Evolution of Young Stellar Clusters" (HPRN-CT-2000-00155) [4]. Notes [1] The Hubble Space Telescope team consisted of Jeff Hester and Paul Scowen (Arizona State University, USA) and 21 collaborators. Their M16 image was made at visible wavelengths using the Wide-Field Planetary Camera 2 (WFPC-2) instrument of the HST, selecting the emission lines of doubly ionised oxygen [OIII], the hydrogen line H-alpha, and singly ionised sulphur [SII] in the visible wavelength interval (from 500 to 671 nm). The image was released by the Space Telescope Science Institute (PR95-44) in 1995 and the scientific data analysis was published by Jeff Hester et al. in the Astronomical Journal in 1996 (Vol. 111, p. 2349). [2] The present team consists of Mark McCaughrean and Morten Andersen , both of the Astrophysical Institute Potsdam (AIP), Germany.
[3] A research paper discussing the embedded object in the head of Column 4 and its role in driving the Herbig-Haro jet ending in HH 216 ("Molecular cloud structure and star formation near HH216 in M16", by Morten Andersen, Jens Knude, Bo Reipurth, Alain Castets, Lars-Åke Nyman, Mark McCaughrean and Steve Heathcote) has been submitted for publication in the European research journal "Astronomy & Astrophysics". [4] Mark McCaughrean would like to dedicate these VLT images of the Eagle Nebula to his own new baby star, Finn, born in Berlin on December 1st, 2001, when his father was working on them, and also to Sybille and Catriona, the other stars in his family cluster! Technical information about the photos PR Photo 37a/01 of the Eagle Nebula, M16, and NGC 6611 was made using the near-infrared camera ISAAC on the ESO 8.2-m VLT ANTU telescope on April 8 and May 8 - 10, 2001. The full field measures approximately 9.1 x 9.1 arcmin, covering roughly 17 x 17 light-years (5.3 x 5.3 pc) at the distance of the region (about 6500 light-years or 2 kpc). This required a 16-position mosaic (4 x 4 grid) of ISAAC pointings: at each pointing, a series of images was taken in each of the near-infrared Js (centred at 1.24 µm wavelength), H (1.65 µm), and Ks (2.16 µm) bands. North is up and East is left in this and all subsequent images. The total integration time for each pixel in the mosaic was 1200, 300, and 300 seconds in the central 4.5 x 4.5 arcmin region, and 200, 50, and 50 seconds in the outer part, in the Js, H, and Ks bands, respectively. The seeing FWHM (full width at half maximum) was excellent, at 0.38, 0.36, and 0.33 arcsec in Js, H, and Ks, respectively. Point sources are detected in the central region at the 3-sigma level (brightest pixel above background noise) at 22.6, 21.3, and 20.4 magnitudes in Js, H, and Ks, respectively.
These limits imply that a 1 million year old, 0.075 solar-mass object on the star/brown dwarf boundary could be detected in M16 through roughly 15, 20, and 30 magnitudes of visual extinction at Js, H, and Ks, respectively. After removal of instrumental signatures and the bright infrared sky background, all frames in a given band were carefully aligned and adjusted to form a seamless mosaic. The three monochromatic mosaics were then scaled to the cube root of their intensities to reduce the enormous dynamic range and enhance faint nebular features. The mosaics were then combined to create the colour-coded image, with the Js-band rendered as blue, the H-band as green, and the Ks-band as red. A total of 144 individual 1024 x 1024 pixel ISAAC images were merged to form this mosaic. PR Photo 37b/01 shows an enlarged section of the full mosaic covering 6.2 x 7.5 light-years (1.9 x 2.3 pc) centred on the pillars. PR Photos 37c-e/01 show smaller, enlarged sections covering the head of each of Columns 1, 2, and 4, respectively. In each case, the region shown measures 1.9 x 2.8 light-years (0.6 x 0.9 pc). The intensity scalings have been adjusted to better show the young stars embedded in the head of each column.
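The field-of-view figures quoted above follow from the small-angle relation between angular and physical size. A short Python check (the unit conversions are standard constants, not taken from the press release) reproduces the 5.3 pc / roughly 17 light-year field at the stated distance of about 2 kpc:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # ≈ 206265 arcsec per radian

def field_size_pc(field_arcmin: float, distance_pc: float) -> float:
    """Physical size (pc) subtended by an angular field at a given distance."""
    field_arcsec = field_arcmin * 60.0
    return field_arcsec / ARCSEC_PER_RAD * distance_pc

# The 9.1 arcmin ISAAC mosaic at ~2 kpc (about 6500 light-years)
size_pc = field_size_pc(9.1, 2000.0)
size_ly = size_pc * 3.2616  # light-years per parsec
print(f"{size_pc:.1f} pc ({size_ly:.0f} light-years)")  # ≈ 5.3 pc (17 light-years)
```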
"First Light" for the VLT Interferometer
NASA Astrophysics Data System (ADS)
2001-03-01
Excellent Fringes From Bright Stars Prove VLTI Concept Summary Following the "First Light" for the fourth of the 8.2-m telescopes of the VLT Observatory on Paranal in September 2000, ESO scientists and engineers have just successfully accomplished the next major step of this large project. On March 17, 2001, "First Fringes" were obtained with the VLT Interferometer (VLTI) - this important event corresponds to the "First Light" for an astronomical telescope. At the VLTI, it occurred when the infrared light from the bright star Sirius was captured by two small telescopes and the two beams were successfully combined in the subterranean Interferometric Laboratory to form the typical pattern of dark and bright lines known as " interferometric fringes ". This proves the success of the robust VLTI concept, in particular of the "Delay Line". On the next night, the VLTI was used to perform a scientific measurement of the angular diameter of another comparatively bright star, Alpha Hydrae ( Alphard ); it was found to be 0.00929±0.00017 arcsec . This corresponds to the angular distance between the two headlights of a car as seen from a distance of approx. 35,000 kilometres. The excellent result was obtained during a series of observations, each lasting 2 minutes, fully confirming the impressive predicted abilities of the VLTI . This first observation with the VLTI is a monumental technological achievement, especially in terms of accuracy and stability . It crucially depends on the proper combination and functioning of a large number of individual opto-mechanical and electronic elements. This includes the test telescopes that capture the starlight, continuous and extremely precise adjustment of the various mirrors that deflect the light beams as well as the automatic positioning and motion of the Delay Line carriages and, not least, the optimal tuning of the VLT INterferometer Commissioning Instrument (VINCI).
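The headlight comparison above can be verified with the small-angle approximation. This minimal Python sketch assumes a headlight separation of about 1.5 m, a typical value that is not stated in the release:

```python
import math

ARCSEC_PER_RAD = math.degrees(1) * 3600  # ≈ 206265 arcsec per radian

def angular_size_arcsec(size_m: float, distance_m: float) -> float:
    """Small-angle approximation: angle [rad] = size / distance."""
    return size_m / distance_m * ARCSEC_PER_RAD

# Two car headlights ~1.5 m apart (assumed), seen from 35,000 km
print(angular_size_arcsec(1.5, 3.5e7))  # ≈ 0.0088 arcsec, close to the measured 0.00929
```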
These initial observations prove the overall concept for the VLTI . It was first envisaged in the early 1980s and has been continuously updated, as new technologies and materials became available during the intervening period. The present series of functional tests will go on for some time and involve many different configurations of the small telescopes and the instrument. It is then expected that the first combination of light beams from two of the VLT 8.2-m telescopes will take place in late 2001 . According to current plans, regular science observations will start from 2002, when the European and international astronomical community will have access to the full interferometric facility and the specially developed VLTI instrumentation now under construction. A wide range of scientific investigations will then become possible, from the search for planets around nearby stars, to the study of energetic processes at the cores of distant galaxies. With its superior angular resolution (image sharpness), the VLT is now beginning to open a new era in observational optical and infrared astronomy. The ambition of ESO is to make this type of observation available to all astronomers, not just the interferometry specialists. Video Clip 03/01 : Various video scenes related to the VLTI and the "First Fringes". PR Photo 10a/01 : "First Fringes" from the VLTI on the computer screen. PR Photo 10b/01 : Celebrating the VLTI "First Fringes" . PR Photo 10c/01 : Overview of the VLT Interferometer . PR Photo 10d/01 : Interferometric observations: Fringes from two stars of different angular size . PR Photo 10e/01 : Interferometric observations: Change of fringes with increasing baseline . PR Photo 10f/01 : Aerial view of the installations for the VLTI on the Paranal platform. PR Photo 10g/01 : Stations for the VLTI Auxiliary Telescopes. PR Photo 10h/01 : A test siderostat in place for observations. PR Photo 10i/01 : A test siderostat ( close-up ).
PR Photo 10j/01 : One of the Delay Line carriages in the Interferometric Tunnel. PR Photo 10k/01 : The VINCI instrument in the Interferometric Laboratory. PR Photo 10l/01 : The VLTI Control Room . "First Fringes at the VLTI": A great moment! First light of the VLT Interferometer - PR Video Clip 03/01 [MPEG - x.xMb] ESO PR Video Clip 03/01 "First Light of the VLT Interferometer" (March 2001) (5025 frames/3:21 min) [MPEG Video+Audio; 144x112 pix; 6.9Mb] [MPEG Video+Audio; 320x240 pix; 13.7Mb] [RealMedia; streaming; 34kbps] [RealMedia; streaming; 200kbps] ESO Video Clip 03/01 provides a quick overview of the various elements of the VLT Interferometer and the important achievement of "First Fringes". The sequence is: General view of the Paranal observing platform. The "stations" for the VLTI Auxiliary Telescopes. Statement by the Manager of the VLT project, Massimo Tarenghi . One of the VLTI test telescopes ("siderostats") is being readied for observations. The Delay Line carriages in the Interferometric Tunnel move. The VINCI instrument in the Interferometric Laboratory is adjusted. Platform at sunset, before the observations. Astronomers and engineers prepare for the first observations in the VLTI Control Room in the Interferometric Building. "Interferometric Fringes" on the computer screen. Concluding statements by Andreas Glindemann , VLTI Project Leader, and Massimo Tarenghi . Distant view of the installations at Paranal at sunset (on March 1, 2001). The moment of "First Fringes" at the VLTI occurred in the evening of March 17, 2001 . The bright star Sirius was observed with two small telescopes ("siderostats"), specially constructed for this purpose during the early VLTI test phases. ESO PR Video Clip 03/01 includes related scenes and is based on a more comprehensive documentation, now available as ESO Video News Reel No. 12.
The star was tracked by the two telescopes and the light beams were guided via the Delay Lines in the Interferometric Tunnel to the VINCI instrument [1] at the Interferometric Laboratory. The path lengths were continuously adjusted and it was possible to keep them stable to within 1 wavelength (2.2 µm, or 0.0022 mm) over a period of at least 2 min. The next night, several other stars were observed, enabling the ESO astronomers and engineers in the Control Room to obtain stable fringe patterns more routinely. With the special software developed, they also obtained 'on-line' an accurate measurement of the angular diameter of a star. This means that the VLTI delivered its first valid scientific result, already during this first test . First observation with the VLTI ESO PR Photo 10a/01 ESO PR Photo 10a/01 [Preview - JPEG: 400 x 315 pix - 96k] [Normal - JPEG: 800 x 630 pix - 256k] [Hi-Res - JPEG: 3000 x 2400 pix - 1.7k] ESO PR Photo 10b/01 ESO PR Photo 10b/01 [Preview - JPEG: 400 x 218 pix - 80k] [Normal - JPEG: 800 x 436 pix - 204k] Caption : PR Photo 10a/01 The "first fringes" obtained with the VLTI, as seen on the computer screen during the observation (upper right window). The fringe pattern arises when the light beams from two small telescopes are brought together in the VINCI instrument. The pattern itself contains information about the angular extension of the observed object, here the bright star Sirius . More details about the interpretation of this pattern are given in Appendix A. PR Photo 10b/01 : Celebrating the moment of "First Fringes" at the VLTI. At the VLTI control console (left to right): Pierre Kervella , Vincent Coudé du Foresto , Philippe Gitton , Andreas Glindemann , Massimo Tarenghi , Anders Wallander , Roberto Gilmozzi , Markus Schoeller and Bill Cotton . Bertrand Koehler was also present and took the photo. Technical information about PR Photo 10a/01 is available below.
Following careful adjustment of all of the various components of the VLTI, the first attempt to perform a real observation was initiated during the night of March 16-17, 2001. "Fringes" were actually acquired for several seconds, leading to further optimization of the Delay Line optics. The next night, March 17-18, stable fringes were obtained on the bright stars Sirius and Lambda Velorum . The following night, the first scientifically valid results were obtained during a series of observations of six stars. One of these, Alpha Hydrae , was measured twice, with an interval of 15 minutes between the 2-min integrations. The measured diameters were highly consistent, with a mean of 0.00929±0.00017 arcsec. This new VLTI measurement is in full agreement with indirect (photometric) estimates of about 0.009 arcsec. The overall performance of the VLTI was excellent already at this early stage. For example, the interferometric efficiency ('contrast' on a stellar point source) was measured to be 87% and stable to within 1.3% over several days. This performance will be further improved following additional tuning. The entire operation of the VLTI was performed remotely from the Control Room, as will also be the case in the future. Another great advantage of the VLTI concept is the possibility to analyse the data at the control console. This is one of the key features of the VLTI that contributes to making it a very user-friendly facility. Overview of the VLT Interferometer ESO PR Photo 10c/01 ESO PR Photo 10c/01 [Preview - JPEG: 400 x 410 pix - 60k] [Normal - JPEG: 800 x 820 pix - 124k] [Hi-Res - JPEG: 3000 x 3074 pix - 680k] Caption : PR Photo 10c/01 Overview of the VLT Interferometer, with the various elements indicated. In this case, the light beams from two of the 8.2-m telescopes are combined. The VINCI instrument that was used for the present test is located at the common focus in the Interferometric Laboratory.
The interferometric principle is based on the phase-stable combination of light beams from two or more telescopes at a common interferometric focus , cf. PR Photo 10c/01 . The light from a celestial object is captured simultaneously by two or more telescopes. For the first tests, two "siderostats" with 40-cm aperture are used; later on, two or more 8.2-m Unit Telescopes will be used, as well as several moving 1.8-m Auxiliary Telescopes (ATs), now under construction at the AMOS factory in Belgium. Via several mirrors and through the Delay Line, which continuously compensates for changes in the path length introduced by the Earth's rotation as well as by other effects (e.g., atmospheric turbulence), the light beams are guided towards the interferometric instrument VINCI at the common interferometric focus. It is located in the subterranean Interferometric Laboratory , at the centre of the observing platform on the top of the Paranal mountain. Photos of some of the VLTI elements are shown in Appendix B. The interferometric technique makes it possible to achieve images as sharp as those of a telescope with a diameter equal to the largest distance between the telescopes in the interferometer. For the VLTI, this distance is about 200 metres, resulting in a resolution of 0.001 arcsec in the near-infrared spectral region (at 1 µm wavelength), or 0.0005 arcsec in visual light (500 nm). The latter measure corresponds to about 2 metres on the surface of the Moon. The VLTI instruments The installation and commissioning of the VLTI at Paranal is a gradual process that will take several years. While the present "First Fringe" event is of crucial importance, the full potential of the VLTI will only be reached some years from now.
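The resolutions quoted above follow from the standard rule that an interferometer's angular resolution is of order the wavelength divided by the baseline. A quick Python sketch (exact prefactors vary with the resolution criterion used, so this is an order-of-magnitude check):

```python
import math

ARCSEC_PER_RAD = math.degrees(1) * 3600  # ≈ 206265 arcsec per radian

def resolution_arcsec(wavelength_m: float, baseline_m: float) -> float:
    """Interferometer resolution ~ wavelength / baseline (in radians)."""
    return wavelength_m / baseline_m * ARCSEC_PER_RAD

B = 200.0  # longest VLTI baseline, metres
print(resolution_arcsec(1.0e-6, B))  # near-infrared (1 µm): ~0.001 arcsec
print(resolution_arcsec(5.0e-7, B))  # visible light (500 nm): ~0.0005 arcsec
```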
This will happen with the successive installation of a number of highly specialised instruments, like the near-infrared/red VLTI focal instrument (AMBER) , the Mid-Infrared interferometric instrument for the VLTI (MIDI) and the instrument for Phase-Referenced Imaging and Microarcsecond Astrometry (PRIMA). Already next year, the three 1.8-m Auxiliary Telescopes that will be fully devoted to interferometric observations will arrive at Paranal. Ultimately, it will be possible to combine the light beams from all the large and small telescopes. Great research promises Together, they will be able to achieve an unprecedented image sharpness (angular resolution) in the optical/infrared wavelength region, and thanks to the great light-collecting ability of the VLT Unit Telescopes, also for observations of quite faint objects. This will make it possible to carry out many different front-line scientific studies, beyond the reach of other instruments. There are many promising research fields that will profit from VLTI observations, of which the following serve as particularly interesting examples: * The structure and composition of the outer solar system, by studies of individual moons, Trans-Neptunian Objects and comets. * The direct detection and imaging of exoplanets in orbit around other stars. * The formation of star clusters and their evolution, from images and spectra of very young objects. * Direct views of the surface structures of stars other than the Sun. * Measuring accurate distances to the most prominent "stepping stones" in the extragalactic distance scale, e.g., galactic Cepheid stars, the Large Magellanic Cloud and globular clusters. * Direct investigations of the physical mechanisms responsible for stellar pulsation, mass loss and dust formation in stellar envelopes and evolution to the Planetary Nebula and White Dwarf stages. * Close-up studies of interacting binary stars to better understand their mass transfer mechanisms and evolution.
* Studies of the structure of the circum-stellar environment of stellar black holes and neutron stars. * The evolution of the expanding shells of unstable stars like novae and supernovae and their interaction with the interstellar medium. * Studying the structure and evolution of stellar and galactic nuclear accretion disks and the associated features, e.g., jets and dust tori. * With images and spectra of the innermost regions of the Milky Way galaxy, to investigate the nature of the nucleus surrounding the central black hole. Clearly, there will be no lack of opportunities for trailblazing research with the VLTI. The "First Fringes" constitute a very important milestone in this direction. Appendix A: How does it work? ESO PR Photo 10d/01 ESO PR Photo 10d/01 [Preview - JPEG: 400 x 290 pix - 24k] [Normal - JPEG: 800 x 579 pix - 68k] [Hi-Res - JPEG: 3000 x 2170 pix - 412k] ESO PR Photo 10e/01 ESO PR Photo 10e/01 [Preview - JPEG: 400 x 219 pix - 32k] [Normal - JPEG: 800 x 438 pix - 64k] [Hi-Res - JPEG: 3000 x 1644 pix - 336k] Caption : PR Photo 10d/01 demonstrates in a schematic way how the images of two stars of different angular size (left) will look with a single telescope (middle) and with an interferometer like the VLTI (right). Whereas there is little difference with one telescope, the fringe patterns at the interferometer are quite different. Conversely, the appearance of this pattern provides a measure of the star's angular diameter. In PR Photo 10e/01 , interferometric observations of a single star are shown, as the distance between the two telescopes is gradually increased. The observed pattern at the focal plane clearly changes, and the "fringes" disappear completely. See the text for more details. The principle behind interferometry is the "coherent optical interference" of light beams from two or more telescopes, due to the wave nature of light.
The above illustrations serve to explain what the astronomers observe in the simplest case, that of a single star with a certain angular size, and how this can be translated into a measurement of this size. In PR Photo 10d/01 , the difference between two stars of different diameter is illustrated. While the image of the smaller star displays strong interference effects (i.e., a well visible fringe pattern), those of the larger star are much less prominent. The "visibility" of the fringes is therefore a direct measure of the size; the stronger they appear (the "larger the contrast"), the smaller the star. If the distance between the two telescopes is increased when a particular star is observed ( PR Photo 10e/01 ), then the fringes become less and less prominent. At a certain distance, the fringe pattern disappears completely. This distance is directly related to the angular size of the star. Appendix B: Elements of the VLT Interferometer Unlike other large astronomical telescopes, the VLT was designed from the beginning with the use of interferometry as a major goal . For this reason, the four 8.2-m Unit Telescopes were positioned in a quasi-trapezoidal configuration and several moving 1.8-m telescopes were included in the overall VLT concept, cf. PR Photo 10f/01 . The photos below show some of the key elements of the VLT Interferometer during the present observations. They include the siderostats , 40-cm telescopes that serve to capture the light from a comparatively bright star ( Photos 10g-i/01 ), the Delay Lines ( Photo 10j/01 ), and the VINCI instrument ( Photo 10k/01 ). Earlier information about the development and construction of the individual elements of the VLTI is available as ESO PR 04/98 , ESO PR 14/00 and ESO PR Photos 26a-e/00.
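The fringe-visibility behaviour described in Appendix A can be made quantitative with the standard uniform-disk model, in which the fringe contrast is |2 J1(x)/x| with x = π·θ·B/λ (θ the angular diameter, B the baseline, λ the wavelength). The sketch below is illustrative only: the Bessel function J1 is computed from its power series, and the baselines are arbitrary example values, not actual VLTI configurations. Using Alphard's measured diameter, the contrast starts near 1 on short baselines and falls towards zero as the baseline grows, just as the appendix describes:

```python
import math

def bessel_j1(x: float, terms: int = 30) -> float:
    """Power series of the Bessel function J1 (adequate for small/moderate x)."""
    s = 0.0
    for k in range(terms):
        s += (-1) ** k / (math.factorial(k) * math.factorial(k + 1)) * (x / 2) ** (2 * k + 1)
    return s

def visibility(theta_rad: float, baseline_m: float, wavelength_m: float) -> float:
    """Fringe contrast of a uniform stellar disk of angular diameter theta."""
    x = math.pi * theta_rad * baseline_m / wavelength_m
    if x == 0.0:
        return 1.0
    return abs(2.0 * bessel_j1(x) / x)

theta = 0.00929 / 206265.0   # Alphard's measured diameter, in radians
for B in (10, 30, 60, 100):  # example baselines in metres (illustrative values)
    print(B, round(visibility(theta, B, 2.2e-6), 3))
```

Note how the contrast nearly vanishes at one particular baseline: this is the "certain distance" at which the fringe pattern disappears, and locating it is what pins down the star's angular diameter.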
FIREWORKS NEAR A BLACK HOLE IN THE CORE OF SEYFERT GALAXY NGC 4151
NASA Technical Reports Server (NTRS)
2002-01-01
The Space Telescope Imaging Spectrograph (STIS) simultaneously records, in unprecedented detail, the velocities of hundreds of gas knots streaming at hundreds of thousands of miles per hour from the nucleus of NGC 4151, thought to house a supermassive black hole. This is the first time the velocity structure in the heart of this object, or similar objects, has been mapped so vividly this close to its central black hole. The twin cones of gas emission are powered by the energy released from the supermassive black hole believed to reside at the heart of this Seyfert galaxy. The STIS data clearly show that the gas knots illuminated by one of these cones are rapidly moving towards us, while the gas knots illuminated by the other cone are rapidly receding. The images have been rotated to show the same orientation of NGC 4151. The figures show: WFPC2 (upper left) -- A Hubble Wide Field Planetary Camera 2 image of the oxygen emission (5007 Angstroms) from the gas at the heart of NGC 4151. Though the twin cone structure can be seen, the image does not provide any information about the motion of the oxygen gas. STIS OPTICAL (upper right) -- In this STIS spectral image of the oxygen gas, the velocities of the knots are determined by comparing the knots of gas in the stationary WFPC2 image to the horizontal location of the knots in the STIS image. STIS OPTICAL (lower right) -- In this false color image the two emission lines of oxygen gas (the weaker one at 4959 Angstroms and the stronger one at 5007 Angstroms) are clearly visible. The horizontal line passing through the image is from the light generated by the powerful black hole at the center of NGC 4151. STIS ULTRAVIOLET (lower left) -- This STIS spectral image shows the velocity distribution of the carbon emission from the gas in the core of NGC 4151. It requires more energy to make the carbon gas glow (CIV at 1549 Angstroms) than it does to ionize the oxygen gas seen in the other images.
This means we expect that the carbon emitting gas is closer to the heart of the energy source. Credit: John Hutchings (Dominion Astrophysical Observatory), Bruce Woodgate (GSFC/NASA), Mary Beth Kaiser (Johns Hopkins University), Steven Kraemer (Catholic University of America), and the STIS Team. Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from ftp.stsci.edu in /pubinfo.
Is this a Brown Dwarf or an Exoplanet?
NASA Astrophysics Data System (ADS)
2005-04-01
Since the discovery in 1995 of the first planet orbiting a normal star other than the Sun, more than 150 candidates for these so-called exoplanets have become known. Most of them are detected by indirect methods, based either on variations of the radial velocity or the dimming of the star as the planet passes in front of it (see ESO PR 06/03, ESO PR 11/04 and ESO PR 22/04). Astronomers would, however, prefer to obtain a direct image of an exoplanet, allowing them to better characterize the object's physical nature. This is an exceedingly difficult task, as the planet is generally hidden in the "glare" of its host star. To partly overcome this problem, astronomers study very young objects. Indeed, sub-stellar objects are much hotter and brighter when young and therefore can be more easily detected than older objects of similar mass. Based on this approach, it might well be that last year's detection of a feeble speck of light next to the young brown dwarf 2M1207 by an international team of astronomers using the ESO Very Large Telescope (ESO PR 23/04) is the long-sought bona-fide image of an exoplanet. A recent report based on data from the Hubble Space Telescope seems to confirm this result. The even more recent observations made with the Spitzer Space Telescope of the warm infrared glows of two previously detected "hot Jupiter" planets are another interesting result in this context. This wealth of new results, obtained in the time span of a few months, perfectly illustrates the dynamism of this field of research. Tiny Companion ESO PR Photo 10a/05 ESO PR Photo 10a/05 The Sub-Stellar Companion to GQ Lupi (NACO/VLT) [Preview - JPEG: 400 x 429 pix - 22k] [Normal - JPEG: 800 x 875 pix - 132k] [Full Res - JPEG: 1042 x 1116 pix - 241k] Caption: ESO PR Photo 10a/05 shows the VLT NACO image, taken in the Ks-band, of GQ Lupi. The feeble point of light to the right of the star is the newly found cold companion.
It is 250 times fainter than the star itself and is located 0.73 arcsecond west. At the distance of GQ Lupi, this corresponds to a distance of roughly 100 astronomical units. North is up and East is to the left. Now, a different team of astronomers [1] has possibly made another important breakthrough in this field by finding a tiny companion to a young star. For several years, these scientists have conducted a search for planets and low-mass objects, in particular around stars still in their formation process - so-called T-Tauri stars - using both the direct imaging and the radial velocity techniques. One of the objects on their list is GQ Lupi, a young T-Tauri star, located in the Lupus I (the Wolf) cloud, a region of star formation about 400 or 500 light-years away. The star GQ Lupi is apparently a very young object still surrounded by a disc, with an age between 100,000 and 2 million years. The astronomers observed GQ Lupi on 25 June 2004 with the adaptive optics instrument NACO attached to Yepun, the fourth 8.2-m Unit Telescope of the Very Large Telescope located on top of Cerro Paranal (Chile). The instrument's adaptive optics (AO) overcomes the distortion induced by atmospheric turbulence, producing extremely sharp near-infrared images. As ESO PR Photo 10a/05 shows, the series of NACO exposures clearly reveals the presence of the tiny companion, located in the close vicinity of the star. This newly found object is only 0.7 arcsecond away, and would have been overlooked without the use of the adaptive optics capabilities of NACO. At the distance of GQ Lupi, the separation between the star and its feeble companion is about 100 astronomical units (or 100 times the distance between the Sun and the Earth). This is roughly 2.5 times the distance between Pluto and the Sun. The companion, called GQ Lupi B or GQ Lupi b [2], is roughly 250 times fainter than GQ Lupi A as seen in this series of images.
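The conversion from the observed 0.73 arcsec separation to roughly 100 astronomical units uses the fact that 1 arcsec at a distance of 1 parsec subtends exactly 1 AU. A minimal sketch, assuming a distance of about 450 light-years (the midpoint of the 400-500 light-year range quoted above):

```python
LY_PER_PC = 3.2616  # light-years per parsec

def projected_separation_au(sep_arcsec: float, distance_ly: float) -> float:
    """1 arcsec at 1 pc subtends 1 AU, so sep[AU] = sep[arcsec] * d[pc]."""
    return sep_arcsec * (distance_ly / LY_PER_PC)

# GQ Lupi: 0.73 arcsec at roughly 400-500 light-years (450 ly assumed here)
print(projected_separation_au(0.73, 450))  # ≈ 100 AU
```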
Further images obtained with NACO in August and September confirmed the presence and the position of this companion. Moving in the same direction ESO PR Photo 10b/05 ESO PR Photo 10b/05 Observed Separation between GQ Lupi and its Companion [Preview - JPEG: 400 x 554 pix - 34k] [Normal - JPEG: 800 x 1107 pix - 136k] [Full Res - JPEG: 1560 x 2158 pix - 319k] Caption: ESO PR Photo 10b/05 presents the observed separations between the primary star GQ Lupi and its companion, as deduced from the images taken with HST in 1999 (left), Subaru in 2002 (middle) and NACO on the VLT in 2004 (right). All the observed separations are consistent with no change in separation, implying the two objects move in the same direction (red line). The curved line shows the change in separation expected if the faint object were a background star, due to the proper motion of GQ Lup. The astronomers then discovered that the star had been previously observed by the Subaru telescope as well as by the Hubble Space Telescope. They retrieved the corresponding images from the data archives of these facilities for further analysis. The older images, taken in July 2002 and April 1999, respectively, also showed the presence of the companion, allowing the astronomers to precisely measure the position of the two objects over a period of several years. This in turn allowed them to determine whether the stars move together in the sky - as should be expected if they are gravitationally bound together - or whether the smaller object is only a background object, just aligned by chance. From their measurements, the astronomers found that the separation between the two objects did not change over the five-year period covered by the observations (see ESO PR Photo 10b/05). For the scientists this is clear proof that both objects are moving in the same direction in the sky.
"If the faint object were a background object", says Ralph Neuhäuser of the University of Jena (Germany) and leader of the team, "we would see a change in separation as GQ Lup would be moving in the sky. From 1999 to 2004, the separation would have changed by 0.15 arcsec, while we are confident that the change is at least 20 times smaller." Exoplanet or brown dwarf? ESO PR Photo 10c/05 ESO PR Photo 10c/05 Spectrum of the Companion of GQ Lupi (NACO/VLT) [Preview - JPEG: 400 x 554 pix - 53k] [Normal - JPEG: 800 x 1108 pix - 200k] [Full Res - JPEG: 1570 x 2175 pix - 518k] Caption: ESO PR Photo 10c/05 shows the NACO spectrum of the companion of GQ Lupi (thick line, bottom) in the near-infrared (around the Ks-band at 2.2 microns). For comparison, the spectrum of a young M8 brown dwarf (top, in red) and of an L2 brown dwarf (second line, in brown) are shown. Also presented is the spectrum calculated using theoretical models for an object having a temperature of 2,000 degrees. This theoretical spectrum compares well with the observed one. To further probe the physical nature of the newly discovered object, the astronomers used NACO on the VLT to take a series of spectra. These showed the typical signature of a very cool object, in particular the presence of water and CO bands. Taking into account the infrared colours and the spectral data available, atmospheric model calculations point to a temperature between 1,600 and 2,500 degrees and a radius that is twice as large as Jupiter's (see PR Photo 10c/05). According to this, GQ Lupi B is thus a cold and rather small object. But what is the nature of this faint object? Is it a bona-fide exoplanet or is it a brown dwarf, one of those "failed" stars that are not massive enough to sustain major nuclear reactions in their cores?
Although the borderline between the two is still a matter of debate, one way to distinguish between them is by mass (as is also done between brown dwarfs and stars): (giant) planets are lighter than about 13 Jupiter-masses (the critical mass needed to ignite deuterium fusion), while brown dwarfs are heavier. What about GQ Lupi b? Unfortunately, the new observations do not provide a direct estimate of the mass of the object. Thus the astronomers must rely on comparison with theoretical models of such objects. But this is not as easy as it sounds. If, as astronomers generally accept, GQ Lupi A and its companion formed simultaneously, the newly found object is very young. The problem is that for such very young objects, traditional theoretical models are probably not applicable. If they are used nevertheless, they provide an estimate of the mass of the object that lies somewhere between 3 and 42 Jupiter-masses, i.e. encompassing both the planet and the brown dwarf domains. These early phases in brown dwarf and planet formation are essentially unknown territory for models. It is very difficult to model the early collapse of the gas clouds given the conditions around the forming parent star. One set of models, specifically tailored to very young objects, provides masses as low as one to two Jupiter-masses. But as Ralph Neuhäuser points out, "these new models still need to be calibrated, before the mass of such companions can be determined confidently". The astronomers also stress that from the comparison between their VLT/NACO spectra and the theoretical models of co-author Peter Hauschildt from Hamburg University (Germany), they arrive at the conclusion that the best fit is obtained for an object with roughly 2 Jupiter radii and 2 Jupiter masses. If this result holds, GQ Lupi b would be the youngest and lightest exoplanet ever imaged. Further observations are still required to determine the nature of GQ Lupi b precisely.
If the two objects are indeed bound, then the smaller object will need more than 1,000 years to complete an orbit around its host star. This is of course too long to wait, but the effect of the orbital motion might be detectable - as a tiny change in the separation between the two objects - in a few years. The team therefore plans to perform regular observations of this object using NACO on the VLT, in order to detect this motion. No doubt in the meantime further progress will be achieved on the theoretical side, and many sensational discoveries in this field will be made. More information: The research presented in this ESO Press Release is published in a Letter to the Editor accepted for publication by Astronomy and Astrophysics ("Evidence for a co-moving sub-stellar companion of GQ Lup" by R. Neuhäuser et al.) and available in PDF form at http://www.edpsciences.org/articles/aa/pdf/forthpdf/aagj061_forth.pdf.
Cipher image damage and decisions in real time
NASA Astrophysics Data System (ADS)
Silva-García, Victor Manuel; Flores-Carapia, Rolando; Rentería-Márquez, Carlos; Luna-Benoso, Benjamín; Jiménez-Vázquez, Cesar Antonio; González-Ramírez, Marlon David
2015-01-01
This paper proposes a method for constructing permutations on m-position arrangements. Our objective is to encrypt color images using the advanced encryption standard (AES) with variable permutations, meaning a different permutation for each 128-bit block, applied in the first round after the x-or operation. Furthermore, this research offers the possibility of recovering the original image when the encrypted figure has suffered damage, whether from an attack or not. This is achieved by permuting the original image pixel positions before encryption with AES variable permutations, which means building a pseudorandom permutation of 250,000 positions or more. To this end, an algorithm that defines a bijective function between the set of nonnegative integers and the set of permutations is built. From this algorithm, the way to build permutations on the 0,1,…,m-1 array, knowing m-1 constants, is presented. Transcendental numbers are used to select these m-1 constants in a pseudorandom way. The quality of the proposed encryption is evaluated according to the following criteria: the correlation coefficient, the entropy, and the discrete Fourier transform. A goodness-of-fit test for each basic color image is proposed to measure the degree of randomness of the bits of the encrypted figure. On the other hand, cipher images are obtained in a lossless way, i.e., no JPEG file formats are used.
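The abstract does not spell out the bijection between nonnegative integers and permutations; a classic construction with exactly that property is the factorial number system (Lehmer code). A minimal sketch, assuming that unranking scheme (the paper's own construction may differ):

```python
from math import factorial

def unrank_permutation(k, m):
    """Map an integer 0 <= k < m! to a unique permutation of 0..m-1
    via the factorial number system (Lehmer code)."""
    elems = list(range(m))
    perm = []
    for i in range(m, 0, -1):
        # Quotient selects which remaining element comes next.
        idx, k = divmod(k, factorial(i - 1))
        perm.append(elems.pop(idx))
    return perm
```

Because distinct integers yield distinct permutations, a pseudorandom integer stream (e.g. derived from digits of a transcendental number, as the paper suggests) induces pseudorandom permutations of pixel positions.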
Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc
NASA Astrophysics Data System (ADS)
Becker, Peter; Plesea, Lucian; Maurer, Thomas
2016-06-01
The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and the information extracted from it. Cloud-based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and the lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG 2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details of a solution utilizing new open image formats for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression method that can be used with MRF and provides very good lossless and controlled lossy compression.
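The "controlled lossy" property of LERC means the per-pixel reconstruction error is bounded by a user-set tolerance. A minimal numpy sketch of that idea only (not the actual LERC bitstream, which also does block partitioning and bit packing):

```python
import numpy as np

def quantize(block, max_error):
    # Quantize so the reconstruction differs from the original by at
    # most max_error per pixel; the small integers then compress well.
    step = 2.0 * max_error
    base = float(block.min())
    return base, step, np.round((block - base) / step).astype(np.int64)

def dequantize(base, step, q):
    # Reconstruct values; each lands within max_error of the original.
    return base + q * step
```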
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-11-17
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that (1) the use of unprocessed image data did not improve the results of image analyses; (2) vignetting had a significant effect, especially for the modified camera; and (3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol, and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
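Vignetting corrections of the kind applied here typically divide out a radial falloff model calibrated per camera and lens. A minimal sketch with a hypothetical quadratic falloff (the study's actual correction was calibrated from imagery, not this model):

```python
import numpy as np

def correct_vignetting(img, strength=0.3):
    # Divide out a simple radial gain model: pixels dim with normalized
    # distance r from the optical centre as gain = 1 - strength * r**2.
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 centre, 1 corner
    gain = 1.0 - strength * r ** 2
    return img / np.maximum(gain, 1e-6)
```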
Recce imagery compression options
NASA Astrophysics Data System (ADS)
Healy, Donald J.
1995-09-01
The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
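Of the lossless coders compared, Rice coding is the simplest to sketch: signed DPCM deltas are folded onto nonnegative integers, then each value is split into a unary quotient and a k-bit binary remainder. A minimal sketch of that general technique (not the specific variant used in the study):

```python
def zigzag(d):
    # Fold signed deltas onto nonnegative integers: 0,-1,1,-2,2 -> 0,1,2,3,4
    return (d << 1) if d >= 0 else -(d << 1) - 1

def rice_encode(n, k):
    # Golomb-Rice code with parameter k (M = 2**k): unary quotient,
    # '0' terminator, then the k-bit binary remainder.
    bits = "1" * (n >> k) + "0"
    if k:
        bits += format(n & ((1 << k) - 1), f"0{k}b")
    return bits
```

Small deltas, which dominate in smooth imagery, thus get short codewords, while the noise level of the source raises the typical delta magnitude and lengthens the codes, consistent with the paper's observation.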
Successful "First Light" for VLT High-Resolution Spectrograph
NASA Astrophysics Data System (ADS)
1999-10-01
Great Research Prospects with UVES at KUEYEN A major new astronomical instrument for the ESO Very Large Telescope at Paranal (Chile), the UVES high-resolution spectrograph, has just made its first observations of astronomical objects. The astronomers are delighted with the quality of the spectra obtained at this moment of "First Light". Although much fine-tuning still has to be done, this early success promises well for new and exciting science projects with this large European research facility. Astronomical instruments at VLT KUEYEN The second VLT 8.2-m Unit Telescope, KUEYEN ("The Moon" in the Mapuche language), is in the process of being tuned to perfection before being handed over to the astronomers on April 1, 2000. The testing of the new giant telescope has been successfully completed. The latest pointing tests were very positive and, from real performance measurements covering the entire operating range of the telescope, the overall accuracy on the sky was found to be 0.85 arcsec (RMS value). This is an excellent result for any telescope and implies that KUEYEN (as is already the case for ANTU) will be able to acquire its future target objects securely and efficiently, thus saving precious observing time. This work has paved the way for the installation of large astronomical instruments at its three focal positions, all prototype facilities capable of catching the light from even very faint and distant celestial objects. The three instruments at KUEYEN are referred to by their acronyms: UVES, FORS2 and FLAMES. They are all dedicated to investigating the spectroscopic properties of faint stars and galaxies in the Universe. The UVES instrument The first to be installed is the Ultraviolet Visual Echelle Spectrograph (UVES), which was built by ESO with the collaboration of the Trieste Observatory (Italy) for the control software.
Complete tests of its optical and mechanical components, as well as of its CCD detectors and of the complex control system (cf. ESO PR Photos 44/98), were made in the laboratories of the ESO Headquarters in Garching (Germany) before it was fully dismounted and shipped (some parts by air, others by ship) to the ESO Paranal Observatory, 130 km south of Antofagasta (Chile). Here, the different pieces of UVES (with a total weight of 8 tons) were carefully reassembled on the Nasmyth platform of KUEYEN and made ready for real observations (see ESO PR Photos 36p-t/99). UVES is a complex two-channel spectrograph that has been built around two giant optical (echelle diffraction) gratings, each ruled on an 84 cm x 21 cm x 12 cm block of the ceramic material Zerodur (the same material used for the VLT 8.2-m main mirrors) and weighing more than 60 kg. These echelle gratings finely disperse the light from celestial objects collected by the telescope into its constituent wavelengths (colours). UVES' resolving power (an optical term that indicates the ratio between a given wavelength and the smallest wavelength difference between two spectral lines that are clearly separated by the spectrograph) may reach 110,000, a very high value for an astronomical instrument of this size. This means, for instance, that even comparatively small changes in radial velocity (a few km/sec only) can be accurately measured, and also that it is possible to detect the faint spectral signatures of very rare elements in celestial objects. One UVES channel is optimized for ultraviolet and blue light, the other for visual and red light. The spectra are digitally recorded by two highly efficient CCD detectors for subsequent analysis and astrophysical interpretation. By optimizing the transmission of the various optical components in its two channels, UVES achieves a very high efficiency all the way from the UV (wavelength about 300 nm) to the near-infrared (1000 nm or 1 µm).
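The quoted radial-velocity sensitivity follows directly from the resolving power; a sketch of the arithmetic, assuming R = 110,000:

```python
c = 299_792.458   # speed of light, km/s
R = 110_000       # resolving power, lambda / delta-lambda
dv = c / R        # smallest resolvable velocity shift, km/s
# dv comes out near 2.7 km/s, consistent with "a few km/sec only"
```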
This guarantees that only a minimum of the precious light collected by KUEYEN is lost and that detailed spectra can be obtained of even quite faint objects, down to about magnitude 20 (nearly one million times fainter than what can be perceived with the unaided eye). The possibility of doing simultaneous observations in the two channels (with a dichroic mirror) ensures a further gain in data-gathering efficiency. First Observations with UVES In the evening of September 27, 1999, the ESO astronomers turned the KUEYEN telescope and - for the first time - focussed the light of stars and galaxies on the entrance aperture of the UVES instrument. This is the crucial moment of "First Light" for a new astronomical facility. The following test period will last about three weeks. Much of the time during the first observing nights was spent on functional tests of the various observation modes and on targeting "standard stars" with well-known properties in order to measure the performance of the new instrument. These tests showed that it is behaving very well. This marks the beginning of a period of progressive fine-tuning that will ultimately bring UVES to peak performance. The astronomers also made a few "scientific" observations during these nights, aimed at exploring the capabilities of their new spectrograph. They were eager to do so, also because UVES is the first spectrograph of this type installed at a telescope of large diameter in the southern hemisphere. Many exciting research possibilities are now opening with UVES. They include a study of the chemical history of many galaxies in the Local Group, e.g. by observing the most metal-poor (oldest) stars in the Milky Way Galaxy and by obtaining the first, extremely detailed spectra of their brightest stars in the Magellanic Clouds.
Quasars and distant compact galaxies will also be among the most favoured targets of the first UVES observers, not least because their spectra carry crucial information about the density, physical state and chemical composition of the early Universe. UVES First Light: SN 1987A One of the first spectral test exposures with UVES at KUEYEN was of SN 1987A, the famous supernova that exploded in the Large Magellanic Cloud (LMC) in February 1987, and the brightest supernova of the last 400 years. Caption to ESO PR Photo 37a/99: This is a direct image of SN 1987A, flanked by two nearby stars. The distance between these two is 4.5 arcsec. The slit (2.0 arcsec wide) through which the echelle spectrum shown in PR Photo 37b/99 was obtained is outlined. This reproduction is from a 2-min exposure through a R(ed) filter with the FORS1 multi-mode instrument at VLT ANTU, obtained in 0.55 arcsec seeing on September 20, 1998. North is up and East is left. Caption to ESO PR Photo 37b/99: This shows the raw image, as read from the CCD, with the recorded echelle spectrum of SN 1987A. With this technique, the supernova spectrum is divided into many individual parts (spectral orders, each of which appears as a narrow horizontal line) that together cover the wavelength interval from 479 to 682 nm (from the bottom to the top), i.e. from blue to red light. Many bright emission lines from different elements are visible, e.g. the strong H-alpha line from hydrogen near the centre of the fourth order from the top. Emission lines from the terrestrial atmosphere are seen as vertical bright lines that cover the full width of the individual horizontal bands.
Since this exposure was done with the nearly Full Moon above the horizon, an underlying, faint absorption-line spectrum of reflected sunlight is also visible. The exposure time was 30 min and the seeing conditions were excellent (0.5 arcsec). Caption to ESO PR Photo 37c/99: This false-colour image has been extracted from another UVES echelle spectrum of SN 1987A, similar to the one shown in PR Photo 37b/99, but with a slit width of 1 arcsec only. The upper part shows the emission lines of nitrogen, sulfur and hydrogen, as recorded in some of the spectral orders. The pixel coordinates (X,Y) in the original frame are indicated; the red colour indicates the highest intensities. Below is a more detailed view of the complex H-alpha emission line, with the corresponding velocities and the position along the spectrograph slit indicated. Several components of this line can be distinguished. The bulk of the emission (here shown in red colour) comes from the ring surrounding the supernova; the elongated shape is due to the differential velocity exhibited by the near (to us) and far sides of the ring. The two bright spots on either side are emission from two outer rings (not immediately visible in PR Photo 37a/99). The extended emission in the velocity direction originates from material inside the ring upon which the fastest-moving ejecta from the supernova have impacted. (As seen in VLT data obtained previously with the ANTU/ISAAC combination (cf. PR Photo 11/99), exciting times now lie ahead for SN 1987A. The ejecta, moving at 30,000 km/s (1/10th the speed of light), have now, 12 years after the explosion, reached the ring of material and the predicted "fireworks" are about to be ignited.)
Finally, there is a broad emission extending all along the spectrograph slit (here mostly yellow) upon which the ring emission is superimposed. This is not associated with the supernova itself, but is H-alpha emission by diffuse gas in the Large Magellanic Cloud (LMC) in which SN 1987A is located. UVES First Light: QSO HE2217-2818 The power of UVES is demonstrated by this two-hour test exposure of the southern quasar QSO HE2217-2818 with U-magnitude = 16.5 and a redshift of z = 2.4. It was discovered a few years ago during the Hamburg-ESO Quasar Survey, by means of photographic plates taken with the 1-m ESO Schmidt Telescope at La Silla, the other ESO astronomical site in Chile. Caption to ESO PR Photo 37d/99: This UVES echelle spectrum of QSO HE2217-2818 (U-magnitude = 16.5) is recorded in different orders (the individual horizontal lines) and altogether covers the wavelength interval between 330 and 450 nm (from the bottom to the top). It illustrates the excellent capability of UVES to work in the UV band on even faint targets. Simultaneously with this observation, UVES also recorded the adjacent spectral region 465 - 660 nm in its other channel. The broad Lyman-alpha emission from ionized hydrogen associated with the powerful energy source of the QSO is seen in the upper half of the spectrum at wavelength 413 nm. At shorter wavelengths, the dark regions in the spectrum are Lyman-alpha absorption lines from intervening, neutral hydrogen gas located along the line-of-sight at different redshifts (the so-called Lyman-alpha forest) in the redshift interval z = 1.7 - 2.4.
Note that since this exposure was done with the nearly Full Moon above the horizon, an underlying, faint absorption-line spectrum of reflected sunlight is also visible. Caption to ESO PR Photo 37e/99: A tracing of one spectral order, corresponding to one horizontal line in the echelle spectrum displayed in PR Photo 37d/99. It shows part of the Lyman-alpha forest in the ultraviolet spectrum of the southern quasar QSO HE2217-2818. The absorption lines are caused by intervening, neutral hydrogen gas located at different distances along the line-of-sight towards this quasar. How to obtain ESO Press Information: ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced if credit is given to the European Southern Observatory.
NASA Technical Reports Server (NTRS)
Critchfield, Anna R.; Zepp, Robert H.
2000-01-01
We propose that the user interact with the spacecraft as if the spacecraft were a file server, so that the user can select and receive data as files in standard formats (e.g., tables or images, such as JPEG) via the Internet. Internet technology will be used end-to-end from the spacecraft to authorized users, such as the flight operations team and project scientists. The proposed solution includes a ground system and spacecraft architecture, mission operations scenarios, and an implementation roadmap showing migration from current practice to a future where distributed users request and receive files of spacecraft data from archives or spacecraft with equal ease. This solution will provide ground support personnel and scientists easy, direct, secure access to their authorized data without cumbersome processing, and can be extended to support autonomous communications with the spacecraft.
Digital video technologies and their network requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. P. Tsang; H. Y. Chen; J. M. Brandt
1999-11-01
Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth to the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.
Waran, Vicknes; Selladurai, Benedict M; Bahuri, Nor Faizal Ahmad; George, George John K Thomas; Lim, Grace P S; Khine, Myo
2008-02-01
Objective: We present our initial experience using a simple and relatively cost-effective system using existing mobile phone network services and conventional handphones with built-in cameras to capture carefully selected images from hard copies of scan images, and transferring these images from a hospital without neurosurgical services to a university hospital with a tertiary neurosurgical service for consultation and a management plan. Methods: A total of 14 patients with acute neurosurgical problems admitted to a general hospital over a 6-month period had their images photographed and transferred in JPEG format to a university neurosurgical unit. This was accompanied by a phone conference to discuss the scan and the patient's condition between the neurosurgeon and the referring physician. All images were also reviewed by a second independent neurosurgeon on a separate occasion to assess agreement on the diagnosis and the management plan. Results: There were nine patients with acute head injury and five patients with acute nontraumatic neurosurgical problems. In all cases both neurosurgeons were in agreement that a diagnosis could be made on the basis of the images that were transferred. With respect to the management advice, there were differences of opinion on three of the patients, but these were considered to be minor. Conclusions: Accurate diagnosis can be made on images of acute neurosurgical problems transferred using a conventional camera phone, and meaningful decisions can be made on these images. This method of consultation also proved to be highly convenient and cost-effective.
Atmospheric Science Data Center
2013-04-16
Article title: Twilight in Antarctica. An image from the Multi-angle Imaging SpectroRadiometer (MISR) instrument on board Terra. The Ross Ice Shelf and Transantarctic Mountains are illuminated by low Sun.
Integration of digital gross pathology images for enterprise-wide access.
Amin, Milon; Sharma, Gaurav; Parwani, Anil V; Anderson, Ralph; Kolowitz, Brian J; Piccoli, Anthony; Shrestha, Rasu B; Lauro, Gonzalo Romero; Pantanowitz, Liron
2012-01-01
Sharing digital pathology images for enterprise-wide use into a picture archiving and communication system (PACS) is not yet widely adopted. We share our solution and 3-year experience of transmitting such images to an enterprise image server (EIS). Gross pathology images acquired by prosectors were integrated with clinical cases into the laboratory information system's image management module, and stored in JPEG2000 format on a networked image server. Automated daily searches for cases with gross images were used to compile an ASCII text file that was forwarded to a separate institutional Enterprise Digital Imaging and Communications in Medicine (DICOM) Wrapper (EDW) server. Concurrently, an HL7-based image order for these cases was generated, containing the locations of images and patient data, and forwarded to the EDW, which combined data in these locations to generate images with patient data, as required by DICOM standards. The image and data were then "wrapped" according to DICOM standards, transferred to the PACS servers, and made accessible on an institution-wide basis. In total, 26,966 gross images from 9,733 cases were transmitted over the 3-year period from the laboratory information system to the EIS. The average process time for cases with successful automatic uploads (n=9,688) to the EIS was 98 seconds. Only 45 cases (0.5%) failed, requiring manual intervention. Uploaded images were immediately available to institution-wide PACS users. Since inception, user feedback has been positive. Enterprise-wide PACS-based sharing of pathology images is feasible, provides useful services to clinical staff, and utilizes existing information system and telecommunications infrastructure. PACS-shared pathology images, however, require a "DICOM wrapper" for multisystem compatibility.
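The HL7-based image order described above can be pictured as a small pipe-delimited message. A minimal sketch with entirely hypothetical segment contents and field values (real orders carry many site-specific fields not shown here):

```python
def hl7_image_order(case_id, patient_id, image_paths):
    # Hypothetical ORM-style order: MSH header, patient ID segment, one
    # order segment, and one OBX reference-pointer segment per image.
    segments = [
        "MSH|^~\\&|LIS|PATHOLOGY|EDW|IMAGING|20120101120000||ORM^O01|0001|P|2.3",
        f"PID|||{patient_id}",
        f"OBR|1|{case_id}||GROSS^Gross pathology images",
    ]
    segments += [f"OBX|{i}|RP|IMAGE||{path}"
                 for i, path in enumerate(image_paths, start=1)]
    return "\r".join(segments)
```

The EDW in the workflow above would parse such a message for the image locations and patient data before performing the DICOM wrapping.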
Integration of digital gross pathology images for enterprise-wide access
Amin, Milon; Sharma, Gaurav; Parwani, Anil V.; Anderson, Ralph; Kolowitz, Brian J; Piccoli, Anthony; Shrestha, Rasu B.; Lauro, Gonzalo Romero; Pantanowitz, Liron
2012-01-01
Background: Sharing digital pathology images for enterprise-wide use into a picture archiving and communication system (PACS) is not yet widely adopted. We share our solution and 3-year experience of transmitting such images to an enterprise image server (EIS). Methods: Gross pathology images acquired by prosectors were integrated with clinical cases into the laboratory information system's image management module, and stored in JPEG2000 format on a networked image server. Automated daily searches for cases with gross images were used to compile an ASCII text file that was forwarded to a separate institutional Enterprise Digital Imaging and Communications in Medicine (DICOM) Wrapper (EDW) server. Concurrently, an HL7-based image order for these cases was generated, containing the locations of images and patient data, and forwarded to the EDW, which combined data in these locations to generate images with patient data, as required by DICOM standards. The image and data were then “wrapped” according to DICOM standards, transferred to the PACS servers, and made accessible on an institution-wide basis. Results: In total, 26,966 gross images from 9,733 cases were transmitted over the 3-year period from the laboratory information system to the EIS. The average process time for cases with successful automatic uploads (n=9,688) to the EIS was 98 seconds. Only 45 cases (0.5%) failed, requiring manual intervention. Uploaded images were immediately available to institution-wide PACS users. Since inception, user feedback has been positive. Conclusions: Enterprise-wide PACS-based sharing of pathology images is feasible, provides useful services to clinical staff, and utilizes existing information system and telecommunications infrastructure. PACS-shared pathology images, however, require a “DICOM wrapper” for multisystem compatibility. PMID:22530178
A new image representation for compact and secure communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Lakshman; Skourikhine, A. N.
In many areas of nuclear materials management there is a need for communication, archival, and retrieval of annotated image data between heterogeneous platforms and devices to effectively implement safety, security, and safeguards of nuclear materials. Current image formats such as JPEG are not ideally suited to such scenarios, as they are not scalable to different viewing formats and do not provide a high-level representation of images that facilitates automatic object/change detection or annotation. The new Scalable Vector Graphics (SVG) open standard for representing graphical information, recommended by the World Wide Web Consortium (W3C), is designed to address issues of image scalability, portability, and annotation. However, until now there has been no viable technology to efficiently field images of high visual quality under this standard. Recently, LANL has developed a vectorized image representation that is compatible with the SVG standard and preserves visual quality. This is based on a new geometric framework for characterizing complex features in real-world imagery that incorporates perceptual principles of processing visual information, known from cognitive psychology and vision science, to obtain a polygonal image representation of high fidelity. This representation can take advantage of all the textual compression and encryption routines unavailable to other image formats. Moreover, this vectorized image representation can be exploited to facilitate automated object recognition, which can reduce the time required for data review. The objects/features of interest in these vectorized images can be annotated via animated graphics to facilitate quick and easy display and comprehension of processed image content.
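A polygonal representation of the kind described serializes directly as SVG. A minimal sketch showing how labelled polygons become a scalable, text-based document (and hence amenable to ordinary text compression and encryption); the function and argument names are illustrative:

```python
def polygons_to_svg(polygons, width, height):
    # polygons: list of (points, fill) pairs, points being (x, y) tuples.
    body = "".join(
        '<polygon points="{}" fill="{}"/>'.format(
            " ".join(f"{x},{y}" for x, y in pts), fill)
        for pts, fill in polygons
    )
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">{body}</svg>')
```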
HUBBLE'S 100,000TH EXPOSURE CAPTURES IMAGE OF DISTANT QUASAR
NASA Technical Reports Server (NTRS)
2002-01-01
The Hubble Space Telescope achieved its 100,000th exposure June 22 with a snapshot of a quasar that is about 9 billion light-years from Earth. The Wide Field and Planetary Camera 2 captured this image of the quasar, the bright object in the center of the photo. The fainter object just above it is an elliptical galaxy. Although the two objects appear to be close to each other, they are actually separated by about 2 billion light-years. Located about 7 billion light-years away, the galaxy is almost directly in front of the quasar. Astronomer Charles Steidel of the California Institute of Technology in Pasadena, Calif., indirectly discovered the galaxy when he examined the quasar's light, which contained information about the galaxy's chemical composition. The reason, Steidel found, was that the galaxy was absorbing the light at certain frequencies. The astronomer is examining other background quasars to determine which kinds of galaxies absorb light at the same frequencies. Steidel also was somewhat surprised to discover that the galaxy is an elliptical, rather than a spiral. Elliptical galaxies are generally believed to contain very little gas. However, this elliptical has a gaseous 'halo' and contains no visible stars. Part of the halo is directly in front of the quasar. The bright object to the right of the quasar is a foreground star. The quasar and star are separated by billions of light-years. The quasar looks as bright as the star because it produces a tremendous amount of light from a compact source. The 'disturbed-looking' double spiral galaxy above the quasar also is in the foreground. Credit: Charles Steidel (California Institute of Technology, Pasadena, CA) and NASA. Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from ftp.stsci.edu in /pubinfo.
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed; Kunhu, Alavi
2013-10-01
This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. Such a watermark is used in many applications to protect the copyright of the image. The second encoder embeds a fragile watermark using the SHA-1 hash function. The purpose behind embedding a fragile watermark is to prove the authenticity of the image (i.e. make it tamper-evident). The proposed technique was developed in response to new challenges with piracy of remote sensing imagery ownership, which led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, a "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as the cover and a colored EIAST logo is used as the watermark. In order to evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
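The fragile half of the scheme rests on a standard property of cryptographic hashes: any change to the pixels changes the SHA-1 digest. A minimal stand-alone sketch (the paper embeds the digest into the imagery itself; here, for illustration, it is simply stored alongside synthetic data):

```python
import hashlib

def sign(pixels: bytes) -> bytes:
    """20-byte fragile signature over the raw pixel data."""
    return hashlib.sha1(pixels).digest()

def is_authentic(pixels: bytes, signature: bytes) -> bool:
    """True only if the pixels are bit-for-bit unmodified."""
    return hashlib.sha1(pixels).digest() == signature

image = bytes(range(256)) * 4              # stand-in for satellite pixel data
signature = sign(image)
tampered = bytes([image[0] ^ 1]) + image[1:]   # a single flipped bit
```

Even this one-bit change makes `is_authentic` fail, which is exactly the sensitivity a fragile watermark needs (and a robust copyright watermark must avoid).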
Low-complex energy-aware image communication in visual sensor networks
NASA Astrophysics Data System (ADS)
Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran
2013-10-01
A low-complexity, low-bit-rate, energy-efficient image compression algorithm is presented, explicitly designed for resource-constrained visual sensor networks used in surveillance, battlefield and habitat monitoring, where voluminous image data must be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few significant coefficients required and codes them using an enhanced complementary Golomb-Rice code, without any floating-point operations. Experiments were performed using Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT and only 6% of the energy needed by the Independent JPEG Group (fast) version, making it well suited to embedded systems requiring low power consumption. The proposed scheme is unique in that it significantly enhances the lifetime of the camera sensor node and the network without the distributed processing traditionally required by existing algorithms.
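The coding stage can be pictured with plain Rice coding, which needs only shifts and masks and therefore no floating point; this is a generic sketch of the code family, not the authors' "enhanced complementary" variant:

```python
def rice_encode(n, k):
    """Rice code of a non-negative integer n with parameter k (k >= 1):
    unary-coded quotient, then k remainder bits. Shifts and masks only."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "b").zfill(k)

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("0")                      # length of the unary prefix
    return (q << k) | int(bits[q + 1:q + 1 + k], 2)
```

Small coefficients (the common case after a zonal DCT) get short codewords, e.g. `rice_encode(9, 2)` yields the 5-bit string `"11001"`.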
Introducing keytagging, a novel technique for the protection of medical image-based tests.
Rubio, Óscar J; Alesanco, Álvaro; García, José
2015-08-01
This paper introduces keytagging, a novel technique to protect medical image-based tests by implementing image authentication, integrity control and location of tampered areas, private captioning with role-based access control, traceability and copyright protection. It relies on the association of tags (binary data strings) to stable, semistable or volatile features of the image, whose access keys (called keytags) depend on both the image and the tag content. Unlike watermarking, this technique can associate information to the most stable features of the image without distortion. Thus, this method preserves the clinical content of the image without the need for assessment, prevents eavesdropping and collusion attacks, and obtains a substantial capacity-robustness tradeoff with simple operations. The evaluation of this technique, involving images of different sizes from various acquisition modalities and image modifications that are typical in the medical context, demonstrates that all the aforementioned security measures can be implemented simultaneously and that the algorithm presents good scalability. In addition to this, keytags can be protected with standard Cryptographic Message Syntax and the keytagging process can be easily combined with JPEG2000 compression since both share the same wavelet transform. This reduces the delays for associating keytags and retrieving the corresponding tags to implement the aforementioned measures to only ≃30 and ≃90 ms respectively. As a result, keytags can be seamlessly integrated within DICOM, reducing delays and bandwidth when the image test is updated and shared in secure architectures where different users cooperate, e.g. physicians who interpret the test, clinicians caring for the patient and researchers.
Application of content-based image compression to telepathology
NASA Astrophysics Data System (ADS)
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-05-01
Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents a new redundancy-elimination algorithm based on a mixed transform: SHIRCT combined with a subtraction-based mixing transform is used to eliminate spectral redundancy, while the 2D-CDF(2,2) DWT eliminates spatial redundancy. The transform is convenient for hardware realization, since it can be implemented entirely with add and shift operations, and its redundancy elimination is better than that of the (1D+2D) CDF(2,2) DWT. An improved SPIHT+CABAC mixed coding algorithm is then used for compression coding. Experimental results show that for lossless image compression this method performs slightly better than (1D+2D) CDF(2,2) DWT + improved SPIHT+CABAC, and much better than JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, NMST and MST. Using the hyperspectral image Canal from the American JPL laboratory as the data set for the lossless compression test, the compression ratio of this algorithm exceeds the above algorithms by 42%, 37%, 35%, 30%, 16%, 13% and 11% respectively, on average.
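The add-and-shift property the authors rely on can be seen in the reversible CDF(2,2) lifting steps (the LeGall 5/3 filter of JPEG 2000). The following one-level 1D sketch is a generic illustration of that transform class, not the paper's full SHIRCT + 1D/2D pipeline, and it reconstructs losslessly:

```python
def fwd53(x):
    """One level of the reversible CDF(2,2)/LeGall 5/3 lifting transform.
    Integer adds and shifts only; even-length input assumed."""
    h = len(x) // 2
    # predict: detail = odd sample minus floor-average of its even neighbours
    d = [x[2*i + 1] - ((x[2*i] + x[min(2*i + 2, len(x) - 2)]) >> 1)
         for i in range(h)]
    # update: approximation = even sample plus rounded quarter-sum of details
    s = [x[2*i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(h)]
    return s, d

def inv53(s, d):
    """Exact inverse of fwd53: undo the update step, then the predict step."""
    h = len(s)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(h)]
    odd = [d[i] + ((even[i] + even[min(i + 1, h - 1)]) >> 1) for i in range(h)]
    return [v for pair in zip(even, odd) for v in pair]

x = [10, 13, 25, 31, 40, 12, 5, 9]
s, d = fwd53(x)
```

Because each lifting step is inverted exactly (same integer expression, opposite sign), reconstruction is bit-exact, which is what makes the transform usable for lossless coding in hardware.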
DUST DISK AROUND A BLACK HOLE IN GALAXY NGC 4261
NASA Technical Reports Server (NTRS)
2002-01-01
This is a Hubble Space Telescope image of an 800-light-year-wide spiral-shaped disk of dust fueling a massive black hole in the center of galaxy, NGC 4261, located 100 million light-years away in the direction of the constellation Virgo. By measuring the speed of gas swirling around the black hole, astronomers calculate that the object at the center of the disk is 1.2 billion times the mass of our Sun, yet concentrated into a region of space not much larger than our solar system. The strikingly geometric disk -- which contains enough mass to make 100,000 stars like our Sun -- was first identified in Hubble observations made in 1992. These new Hubble images reveal for the first time structure in the disk, which may be produced by waves or instabilities in the disk. Hubble also reveals that the disk and black hole are offset from the center of NGC 4261, implying some sort of dynamical interaction is taking place, that has yet to be fully explained. Credit: L. Ferrarese (Johns Hopkins University) and NASA Image files in GIF and JPEG format, captions, and press release text may be accessed on Internet via anonymous ftp from oposite.stsci.edu in /pubinfo:
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give definitions of 166 terms used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Progressive data transmission for anatomical landmark detection in a cloud.
Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D
2012-01-01
In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer-aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least 30 times bandwidth reduction while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
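The transfer pattern the paper describes (request a coarse image, detect candidates, then fetch only fine-resolution neighbourhoods around them) can be sketched as follows; the function names and the 2D stand-in for a 3D volume are illustrative, not the authors' system:

```python
def downsample(img, f):
    """Coarse level obtained by keeping every f-th row and column."""
    return [row[::f] for row in img[::f]]

def candidates(coarse, thresh):
    """Candidate landmark locations found on the coarse grid."""
    return [(r, c) for r, row in enumerate(coarse)
            for c, v in enumerate(row) if v >= thresh]

def fetch_patch(img, r, c, f, half=1):
    """Only the fine-resolution neighbourhood of one candidate is 'transmitted'."""
    rows = range(max(r * f - half, 0), min(r * f + half + 1, len(img)))
    return [img[i][max(c * f - half, 0):c * f + half + 1] for i in rows]

# toy 8x8 'volume' with one bright landmark at fine position (4, 6)
img = [[0] * 8 for _ in range(8)]
img[4][6] = 9
coarse = downsample(img, 2)              # 16 of 64 pixels transmitted
cands = candidates(coarse, thresh=5)     # -> [(2, 3)]
patches = [fetch_patch(img, r, c, 2) for r, c in cands]
```

Here only the 16-pixel coarse image plus one 3x3 patch crosses the network instead of all 64 pixels; at realistic volume sizes, and with the patches additionally JPEG 2000 compressed, this is where the reported bandwidth savings come from.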
Han, Ruizhen; He, Yong; Liu, Fei
2012-01-01
This paper presents a feasibility study on a real-time in field pest classification system design based on Blackfin DSP and 3G wireless communication technology. This prototype system is composed of remote on-line classification platform (ROCP), which uses a digital signal processor (DSP) as a core CPU, and a host control platform (HCP). The ROCP is in charge of acquiring the pest image, extracting image features and detecting the class of pest using an Artificial Neural Network (ANN) classifier. It sends the image data, which is encoded using JPEG 2000 in DSP, to the HCP through the 3G network at the same time for further identification. The image transmission and communication are accomplished using 3G technology. Our system transmits the data via a commercial base station. The system can work properly based on the effective coverage of base stations, no matter the distance from the ROCP to the HCP. In the HCP, the image data is decoded and the pest image displayed in real-time for further identification. Authentication and performance tests of the prototype system were conducted. The authentication test showed that the image data were transmitted correctly. Based on the performance test results on six classes of pests, the average accuracy is 82%. Considering the different live pests’ pose and different field lighting conditions, the result is satisfactory. The proposed technique is well suited for implementation in field pest classification on-line for precision agriculture. PMID:22736996
NASA Astrophysics Data System (ADS)
Arvesen, J. C.; Dotson, R. C.
2014-12-01
The DMS (Digital Mapping System) has been a sensor component of all DC-8 and P-3 IceBridge flights since 2009 and has acquired over 3 million JPEG images over Arctic and Antarctic land and sea ice. The DMS imagery is primarily used for identifying and locating open leads for LiDAR sea-ice freeboard measurements and documenting snow and ice surface conditions. The DMS is a COTS Canon SLR camera utilizing a 28mm focal length lens, resulting in a 10cm GSD and a swath of ~400 meters from a nominal flight altitude of 500 meters. Exterior orientation is provided by an Applanix IMU/GPS which records a TTL pulse coincident with image acquisition. Notable for virtually all IceBridge flights is that parallel grids are not flown, so there is no ability to photogrammetrically tie any imagery to adjacent flight lines. Approximately 800,000 Level-3 DMS Surface Model data products have been delivered to NSIDC, each consisting of a Digital Elevation Model (GeoTIFF DEM) and a co-registered Visible Overlay (GeoJPEG). Absolute elevation accuracy for each individual Elevation Model is adjusted to concurrent Airborne Topographic Mapper (ATM) Lidar data, resulting in higher elevation accuracy than can be achieved by photogrammetry alone. The adjustment methodology forces a zero mean difference to the corresponding ATM point cloud integrated over each DMS frame. Statistics are calculated for each DMS Elevation Model frame and show RMS differences are within +/- 10 cm with respect to the ATM point cloud. The DMS Surface Model possesses similar elevation accuracy to the ATM point cloud, but with the following advantages: higher and uniform spatial resolution (40 cm GSD); a 45% wider swath (435 meters vs. 300 meters at a 500 meter flight altitude); a visible RGB co-registered overlay at 10 cm GSD; and enhanced visualization through 3-dimensional virtual reality (i.e. video fly-through). Examples of the utility of these advantages will be presented, along with a novel use of a cell phone camera for aerial photogrammetry.
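The per-frame elevation adjustment described above reduces to removing the mean DEM-to-lidar difference; a minimal sketch with made-up heights (not the IceBridge processing code):

```python
def adjust_dem(dem, atm):
    """Shift a DEM frame by a constant offset so that its mean difference
    to the overlapping ATM lidar heights is forced to zero."""
    offset = sum(a - d for a, d in zip(atm, dem)) / len(dem)
    return [d + offset for d in dem]

dem = [101.2, 101.5, 100.9]   # photogrammetric heights, metres (illustrative)
atm = [100.9, 101.2, 100.6]   # lidar heights: 0.3 m lower on average
adjusted = adjust_dem(dem, atm)
```

A single constant per frame removes the absolute bias photogrammetry cannot resolve on its own, while leaving the relative relief (and hence the quoted RMS behaviour) untouched.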
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different part of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
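The first scheme admits a toy rendering: quantize the colour channel (here by simply dropping low-order bits, a crude stand-in for the paper's colour reduction) and deflate colour plus depth together with zlib; the data and parameters are illustrative, not TEEVE's actual pipeline:

```python
import zlib

def compress_frame(color: bytes, depth: bytes, keep_bits: int = 4) -> bytes:
    """Colour reduction followed by lossless deflate of colour + depth."""
    mask = (0xFF << (8 - keep_bits)) & 0xFF   # keep only the top keep_bits
    reduced = bytes(b & mask for b in color)
    return zlib.compress(reduced + depth)

color = bytes(i % 17 for i in range(4096))    # synthetic colour plane
depth = bytes(i % 5 for i in range(4096))     # synthetic depth plane
frame = compress_frame(color, depth)
```

The colour reduction is the only lossy step; everything after it is reversible, which is why the scheme needs no inter-frame motion estimation to stay fast.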
ALMA On the Move - ESO Awards Important Contract for the ALMA Project
NASA Astrophysics Data System (ADS)
2005-12-01
Only two weeks after awarding its largest-ever contract for the procurement of antennas for the Atacama Large Millimeter Array project (ALMA), ESO has signed a contract with Scheuerle Fahrzeugfabrik GmbH, a world-leader in the design and production of custom-built heavy-duty transporters, for the provision of two antenna transporting vehicles. These vehicles are of crucial importance for ALMA. ESO PR Photo 41a/05: The ALMA Transporter (Artist's Impression). Caption: Each of the ALMA transporters will be 10 m wide, 4.5 m high and 16 m long. "The timely awarding of this contract is most important to ensure that science operations can commence as planned," said ESO Director General Catherine Cesarsky. "This contract thus marks a further step towards the realization of the ALMA project." "These vehicles will operate in a most unusual environment and must live up to very strict demands regarding performance, reliability and safety. Meeting these requirements is a challenge for us, and we are proud to have been selected by ESO for this task," commented Hans-Jörg Habernegg, President of Scheuerle GmbH. ESO PR Photo 41b/05: Signing the Contract. Caption: (left to right) Mr Thomas Riek, Vice-President of Scheuerle GmbH, Dr Catherine Cesarsky, ESO Director General, and Mr Hans-Jörg Habernegg, President of Scheuerle GmbH. When completed on the high-altitude Chajnantor site in Chile, ALMA is expected to comprise more than 60 antennas, which can be placed in different locations on the plateau but which work together as one giant telescope.
Changing the relative positions of the antennas and thus also the configuration of the array allows for different observing modes, comparable to using a zoom lens, offering different degrees of resolution and sky coverage as needed by the astronomers. The ALMA Antenna Transporters allow for moving the antennas between the different pre-defined antenna positions. They will also be used for transporting antennas between the maintenance area at 2900 m elevation and the "high site" at 5000 m above sea level, where the observations are carried out. Given their important functions, both for the scientific work and in transporting high-tech antennas with the required care, the vehicles must live up to very demanding operational requirements. Each transporter has a mass of 150 tonnes and is able to lift and transport antennas of 110 tonnes. They must be able to place the antennas on the docking pads with millimetric precision. At the same time, they must be powerful enough to climb 2000 m reliably and safely with their heavy and valuable load, putting extraordinary demands on the 500 kW diesel engines. This means negotiating a 28 km long high-altitude road with an average slope of 7 %. Finally, as they will be operated at an altitude with significantly reduced oxygen levels, a range of redundant safety devices protect both personnel and equipment from possible mishaps or accidents. The first transporter is scheduled to be delivered in the summer of 2007 to match the delivery of the first antennas to Chajnantor. The ESO contract has a value of approx. 5.5 m Euros.
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
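For reference, the PSNR baseline the study compared against the VDM's JND metrics is computed in the standard way for 8-bit data; this is a generic implementation, not the authors' code:

```python
import math

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

Because PSNR pools all pixel errors into one mean squared error, it ignores where and how visible the distortion is, which is consistent with the study's finding that it varies more than perceptually based metrics at the visually lossless threshold.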
NASA Astrophysics Data System (ADS)
2003-07-01
Discovery of quadruply lensed quasar with Einstein ring Summary Using the ESO 3.6-m telescope at La Silla (Chile), an international team of astronomers [1] has discovered a complex cosmic mirage in the southern constellation Crater (The Cup). This "gravitational lens" system consists of (at least) four images of the same quasar as well as a ring-shaped image of the galaxy in which the quasar resides, known as an "Einstein ring". The more nearby lensing galaxy that causes this intriguing optical illusion is also well visible. The team obtained spectra of these objects with the new EMMI camera mounted on the ESO 3.5-m New Technology Telescope (NTT), also at the La Silla observatory. They find that the lensed quasar [2] is located at a distance of 6,300 million light-years (its "redshift" is z = 0.66 [3]) while the lensing elliptical galaxy is roughly halfway between the quasar and us, at a distance of 3,500 million light-years (z = 0.3). The system has been designated RXS J1131-1231; it is the closest gravitationally lensed quasar discovered so far. PR Photo 20a/03: Image of the gravitational lens system RXS J1131-1231 (ESO 3.6-m Telescope). PR Photo 20b/03: Spectra of two lensed images of the source quasar and the lensing galaxy. Cosmic mirages The physical principle behind a "gravitational lens" (also known as a "cosmic mirage") has been known since 1916 as a consequence of Albert Einstein's Theory of General Relativity. The gravitational field of a massive object curves the local geometry of the Universe, so light rays passing close to the object are bent (like a "straight line" on the surface of the Earth is necessarily curved because of the curvature of the Earth's surface). This effect was first observed by astronomers in 1919 during a total solar eclipse.
Accurate positional measurements of stars seen in the dark sky near the eclipsed Sun indicated an apparent displacement in the direction opposite to the Sun, about as much as predicted by Einstein's theory. The effect is due to the gravitational attraction of the stellar photons when they pass near the Sun on their way to us. This was a direct confirmation of an entirely new phenomenon and it represented a milestone in physics. In the 1930's, astronomer Fritz Zwicky (1898 - 1974), of Swiss nationality and working at the Mount Wilson Observatory in California, realised that the same effect may also happen far out in space where galaxies and large galaxy clusters may be sufficiently compact and massive to bend the light from even more distant objects. However, it was only five decades later, in 1979, that his ideas were observationally confirmed when the first example of a cosmic mirage was discovered (as two images of the same distant quasar). Cosmic mirages are generally seen as multiple images of a single quasar [2], lensed by a galaxy located between the quasar and us. The number and the shape of the images of the quasar depends on the relative positions of the quasar, the lensing galaxy and us. Moreover, if the alignment were perfect, we would also see a ring-shaped image around the lensing object. Such "Einstein rings" are very rare, though, and have only been observed in a very few cases. Another particular interest of the gravitational lensing effect is that it may not only result in double or multiple images of the same object, but also that the brightness of these images increase significantly, just as it happens with an ordinary optical lens. Distant galaxies and galaxy clusters may thereby act as "natural telescopes" which allow us to observe more distant objects that would otherwise have been too faint to be detected with currently available astronomical telescopes. 
Image sharpening techniques resolve the cosmic mirage better ESO PR Photo 20a/03 Caption of PR Photo 20a/03: The left panel displays the image of the newly discovered gravitational lens system RXS J1131-1231 recorded by the EFOSC2 instrument on the ESO 3.6-m telescope. Deconvolution ("image sharpening", right panel) allows a better view of the four star-like components (the four images of the same distant quasar), the Einstein ring (the elongated image of the quasar's host galaxy) and the lensing galaxy (the central bright diffuse image). A new gravitational lens, designated RXS J1131-1231, was serendipitously discovered in May 2002 by Dominique Sluse, then a PhD student at ESO in Chile, while inspecting quasar images taken with the ESO 3.6-m telescope at the La Silla Observatory. The discovery of this system profited from the good observational conditions prevailing at the time of the observations. From a simple visual inspection of these images, Sluse provisionally concluded that the system had four star-like components (the lensed quasar images) and one diffuse component (the lensing galaxy). Because of the very small separation between the components, of the order of one arcsecond or less, and the unavoidable "blurring" effect caused by turbulence in the terrestrial atmosphere ("seeing"), the astronomers used sophisticated image-sharpening software to produce higher-resolution images on which precise brightness and positional measurements could then be performed (see also ESO PR 09/97). This so-called "deconvolution" technique makes it possible to visualize this complex system much better and, in particular, to confirm and render more conspicuous the associated Einstein ring, cf. PR Photo 20a/03.
Identification of the source and of the lens ESO PR Photo 20b/03 Caption of PR Photo 20b/03: The top panel demonstrates that the spectra of two of the star-like images (those labeled A and D) are very similar and are therefore from the same object, i.e., the lensed quasar. The emission lines identified in these spectra are typical of a quasar and the redshift is measured as z = 0.66. The bottom panel shows the spectrum of the lensing, elliptical galaxy at redshift z = 0.3. The team of astronomers [1] then used the ESO 3.5-m New Technology Telescope (NTT) at La Silla to obtain spectra of the individual image components of this lensing system. This is imperative because, like human fingerprints, the spectra allow unambiguous identification of the observed objects. Nevertheless, this is not an easy task because the different images of the cosmic mirage are located very close to each other in the sky and the best possible conditions are needed to obtain clean and well separated spectra. However, the excellent optical quality of the NTT combined with reasonably good seeing conditions (about 0.7 arcsecond) enabled the astronomers to detect the "spectral fingerprints" of both the source and the object acting as a lens, cf. ESO PR Photo 20b/03. The evaluation of the spectra showed that the background source is a quasar with a redshift of z = 0.66 [3], corresponding to a distance of about 6,300 million light-years. The light from this quasar is lensed by a massive elliptical galaxy with a redshift of z = 0.3, i.e. at a distance of 3,500 million light-years, or about halfway between the quasar and us. It is the nearest gravitationally lensed quasar known to date.
Because of the specific geometry of the lens and the position of the lensing galaxy, it is possible to show that the light from the extended galaxy in which the quasar is located should also be lensed and become visible as a ring-shaped image. That this is indeed the case is demonstrated by PR Photo 20a/03, which clearly shows the presence of such an "Einstein ring" surrounding the image of the more nearby lensing galaxy. Microlensing within macrolensing? The particular configuration of the individual lensed images observed in this system has enabled the astronomers to produce a detailed model of the system. From this, they can then make predictions about the relative brightness of the various lensed images. Somewhat unexpectedly, they found that the predicted brightnesses of the three brightest star-like images of the quasar are not in agreement with the observed ones - one of them turns out to be one magnitude (that is, a factor of 2.5) brighter than expected. This discrepancy does not call General Relativity into question but suggests that another effect is at work in this system. The hypothesis advanced by the team is that one of the images is subject to "microlensing". This effect is of the same nature as the cosmic mirage - multiple amplified images of the object are formed - but in this case, additional light-ray deflection is caused by a single star (or several stars) within the lensing galaxy. The result is that there are additional (unresolved) images of the quasar within one of the macro-lensed images. The outcome is an "over-amplification" of this particular image. Whether this is really so will soon be tested by means of new observations of this gravitational lens system with the ESO Very Large Telescope (VLT) at Paranal (Chile) and also with the Very Large Array (VLA) radio observatory in New Mexico (USA). Outlook Until now, 62 multiple-imaged quasars have been discovered, in most cases showing 2 or 4 images of the same quasar.
The presence of elongated images of the quasar and, in particular, of ring-like images is often observed at radio wavelengths. However, this remains a rare phenomenon in the optical domain - only four such systems have been imaged by optical/infrared telescopes until now. The complex and comparatively bright system RXS J1131-1231 now discovered is a unique astrophysical laboratory. Its rare characteristics (e.g., brightness, presence of a ring-shaped image, small redshift, X-ray and radio emission, visible lens, ...) will now enable the astronomers to study the properties of the lensing galaxy, including its stellar content, structure and mass distribution, in great detail, and to probe the source morphology. These studies will use new observations which are currently being obtained with the VLT at Paranal, with the VLA radio interferometer in New Mexico and with the Hubble Space Telescope. More information The research described in this press release is presented in a Letter to the Editor, soon to appear in the European professional journal Astronomy & Astrophysics ("A quadruply imaged quasar with an optical Einstein ring candidate: 1RXS J113155.4-123155", by Dominique Sluse et al.). More information on gravitational lensing and on this research group can also be found at the URL: http://www.astro.ulg.ac.be/GRech/AEOS/. Notes [1]: The team consists of Dominique Sluse, Damien Hutsemékers, and Thodori Nakos (ESO and Institut d'Astrophysique et de Géophysique de l'Université de Liège - IAGL), Jean-François Claeskens, Frédéric Courbin, Christophe Jean, and Jean Surdej (IAGL), Malvina Billeres (ESO), and Sergiy Khmil (Astronomical Observatory of Shevchenko University). [2]: Quasars are particularly active galaxies, the centres of which emit prodigious amounts of energy and energetic particles. It is believed that they harbour a massive black hole at their centre and that the energy is produced when surrounding matter falls into this black hole.
This type of object was first discovered in 1963 by the Dutch-American astronomer Maarten Schmidt at the Palomar Observatory (California, USA) and the name refers to their "star-like" appearance on the images obtained at that time. [3]: In astronomy, the "redshift" denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. Since the redshift of a cosmological object increases with distance, the observed redshift of a remote galaxy also provides an estimate of its distance.
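The redshift defined in note [3] can be made concrete with a one-line calculation. This is a minimal sketch of mine, not part of the press release; the Mg II rest wavelength is a standard laboratory value, and applying it to this quasar is purely illustrative:

```python
# Redshift from the shift of a spectral line: z = (lambda_obs - lambda_rest) / lambda_rest.
def redshift(lambda_obs_nm, lambda_rest_nm):
    return (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

lambda_rest = 279.8                    # Mg II emission line, rest wavelength in nm
lambda_obs = lambda_rest * (1 + 0.66)  # where that line would land for the z = 0.66 quasar
print(round(redshift(lambda_obs, lambda_rest), 2))  # 0.66
```

A line emitted at 279.8 nm is thus observed near 464 nm, which is how the spectra in PR Photo 20b/03 yield the quasar's redshift.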
NASA Astrophysics Data System (ADS)
2005-09-01
Large Population of Galaxies Found in the Young Universe with ESO's VLT. The Universe was a more fertile place soon after it was formed than had previously been suspected. A team of French and Italian astronomers [1] made the surprising discovery of a large and previously unknown population of distant galaxies observed when the Universe was only 10 to 30% of its present age. ESO PR Photo 29a/05: New Population of Distant Galaxies. ESO PR Photo 29b/05: Average Spectra of Distant Galaxies. This breakthrough is based on observations made with the Visible Multi-Object Spectrograph (VIMOS) as part of the VIMOS VLT Deep Survey (VVDS). The VVDS started early 2002 on Melipal, one of the 8.2-m telescopes of ESO's Very Large Telescope Array [2]. In a total sample of about 8,000 galaxies selected only on the basis of their observed brightness in red light, almost 1,000 bright and vigorously star-forming galaxies were discovered that were formed between 9 and 12 billion years ago (i.e. about 1,500 to 4,500 million years after the Big Bang). "To our surprise," says Olivier Le Fèvre, from the Laboratoire d'Astrophysique de Marseille (France) and co-leader of the VVDS project, "this is two to six times higher than had been found previously. These galaxies had been missed because previous surveys had selected objects in a much more restrictive manner than we did. And they did so to accommodate the much lower efficiency of the previous generation of instruments." While observations and models have consistently indicated that the Universe had not yet formed many stars in the first billion years of cosmic time, the discovery announced today by the scientists calls for a significant change in this picture.
The astronomers indeed find that stars formed two to three times faster than previously estimated. "These observations will demand a profound reassessment of our theories of the formation and evolution of galaxies in a changing Universe", says Gianpaolo Vettolani, the other co-leader of the VVDS project, working at INAF-IRA in Bologna (Italy). These results are reported in the September 22 issue of the journal Nature (Le Fèvre et al., "A large population of galaxies 9 to 12 billion years back in the life of the Universe").
Genetics algorithm optimization of DWT-DCT based image Watermarking
NASA Astrophysics Data System (ADS)
Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan
2017-01-01
Data hiding in an image is essential for establishing ownership of the image. The two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, a 2D-DWT transforms the selected layer, yielding four subbands, of which one is selected. A block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag ordering and range-based pixel selection. A delta parameter replacing the coefficients in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". Several parameters are optimized by a genetic algorithm (GA): the selected color space, the layer, the selected subband of the DWT decomposition, the block size, the embedding range, and delta. Simulation results show that the GA is able to determine the parameters that yield optimum imperceptibility and robustness for any watermarked-image condition, whether attacked or not. The DWT stage in DCT-based image watermarking optimized by GA has improved the performance of image watermarking. Under five attacks - JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01 - robustness in the proposed method reaches perfect watermark recovery with BER = 0, and the watermarked image quality measured by PSNR is about 5 dB higher than that of the previous method.
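The ±delta rule at the heart of the abstract's embedding scheme can be sketched as follows. This is a minimal illustration of the bit-encoding and extraction steps only: the function names are mine, the coefficient vector is a toy stand-in for a zigzag-ordered DCT block, and the color-space conversion, DWT, block DCT, and GA optimization stages are all omitted:

```python
import numpy as np

def embed_bits(coeffs, bits, positions, delta=8.0):
    """Replace selected AC coefficients with +/-delta to encode watermark bits.
    +delta encodes bit 1, -delta encodes bit 0 (the scheme described in the abstract)."""
    out = coeffs.copy()
    for bit, pos in zip(bits, positions):
        out[pos] = delta if bit else -delta
    return out

def extract_bits(coeffs, positions):
    """Recover bits from the sign of the marked coefficients."""
    return [1 if coeffs[pos] > 0 else 0 for pos in positions]

# Toy block of zigzag-ordered DCT coefficients (index 0 = DC term, left untouched).
block = np.array([310.0, 12.0, -7.5, 3.2, -1.1, 0.4, 0.0, -0.2])
bits = [1, 0, 1]
positions = [3, 4, 5]          # assumed mid-band "embedding range"
wm = embed_bits(block, bits, positions, delta=8.0)
assert extract_bits(wm, positions) == bits
```

A larger delta makes the sign survive stronger attacks (better robustness) at the cost of a larger change to the image (worse imperceptibility), which is exactly the trade-off the GA is used to tune.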
NASA Astrophysics Data System (ADS)
2003-07-01
Deeply Embedded Massive Stellar Clusters Discovered in Milky Way Powerhouse Summary Peering into a giant molecular cloud in the Milky Way galaxy - known as W49 - astronomers from the European Southern Observatory (ESO) have discovered a whole new population of very massive newborn stars. This research is being presented today at the International Astronomical Union's 25th General Assembly held in Sydney, Australia, by ESO scientist João Alves. With the help of infrared images obtained during a period of excellent observing conditions with the ESO 3.5-m New Technology Telescope (NTT) at the La Silla Observatory (Chile), the astronomers looked deep into this molecular cloud and discovered four massive stellar clusters, with hot and energetic stars as massive as 120 solar masses. The exceedingly strong radiation from the stars in the largest of these clusters is "powering" a 20 light-year diameter region of mostly ionized hydrogen gas (a "giant HII region"). W49 is one of the most energetic regions of star formation in the Milky Way. With the present discovery, the true sources of the enormous energy have now been revealed for the first time, finally bringing to an end some decades of astronomical speculations and hypotheses. PR Photo 21a/03: Colour Composite of W49A (NTT+SOFI). PR Photo 21b/03: Radio and Near-Infrared Composite of W49A. Giant molecular clouds Stars form predominantly inside Giant Molecular Clouds which populate our Galaxy, the Milky Way. One of the most prominent of these is W49, which has a mass of a million solar masses. It is located some 37,000 light-years away and is the most luminous star-forming region known in our home galaxy: its luminosity is several million times the luminosity of our Sun. A smaller region within this cloud is denoted W49A - this is one of the strongest radio-emitting areas known in the Galaxy. Massive stars are extreme in every way.
Compared to their smaller and lighter brethren, they form at an Olympic speed and have a frantic and relatively short life. Formation sites of massive stars are quite rare and, accordingly, most are many thousands of light-years away. For that reason alone, it is in general much more difficult to observe details of massive-star formation. Moreover, as massive stars are generally formed in the main plane of the Galaxy, in the disc where a lot of dust is present, the first stages of such stars are normally hidden behind very thick curtains. In the case of W49A, less than one millionth of the visible light emitted by a star in this region will find its way through the heavy intervening layers of galactic dust and reach the telescopes on Earth. And finally, because massive stars just formed are still very deeply embedded in their natal clouds, they are anyway not detectable at optical wavelengths. Observations of this early phase of the lives of heavy stars must therefore be done at longer wavelengths (where the dust is more transparent), but even so, such natal dusty clouds still absorb a large proportion of the light emitted by the young stars. Infrared observations of W49. Captions: ESO PR Photo 21a/03 presents a composite near-infrared colour image from NTT/SofI. It covers a sky area of 5 x 5 arcmin² and the red, green and blue colours correspond to the Ks- (wavelength 2.2 µm), H- (1.65 µm) and J-band (1.2 µm), respectively. North is up and East is to the left. The labels identify known radio sources. The main cluster is seen north-east of the region labelled "O3". The colour of a star in this image is mostly a measure of the amount of dust absorption towards this star.
Hence, all blue stars in this image are located in front of the star-forming region. PR Photo 21b/03 shows a three-colour composite of the central region of the star-forming region W49A , based on a radio emission map (wavelength 3.6 cm; here rendered as red) as well as two SofI images in the Ks- (green) and J-bands (blue). The red-only features in this image represent regions of ionized hydrogen so deeply embedded in the molecular cloud that they cannot be detected in the near-infrared, while blue sources are foreground stars. The radio continuum data were taken with the Very Large Array by Chris De Pree. Because of this observational obstacle, nobody had ever looked deep enough into the central most dense regions of the W49A molecular cloud - and nobody really knew what was in there. That is, until João Alves and his colleague, Nicole Homeier decided to obtain "deep" and penetrating observations of this mysterious area with the SofI near-infrared camera on the 3.5-m New Technology Telescope (NTT) at the ESO La Silla Observatory (Chile). A series of infrared images was secured during a spell of good weather and very good atmospheric conditions (seeing about 0.5 arcsec). They clearly show the presence of a cluster of stars at the centre of a region of ionized hydrogen gas (an "HII-region") measuring 20 light-years across. In addition, three other smaller clusters of stars were detected in the image. Altogether, the ESO astronomers were able to identify more than one hundred heavy-weight stars inside W49A , with masses greater than 15 to 20 times the mass of our Sun. Among these, about thirty are located within the 20 light-year central region and about ten in each of the three other clusters. 
The discovery of these hot and massive stars solves a long-standing problem concerning W49A: the exceptional brightness (in astronomical terminology: "luminosity") of the entire region requires the energetic output from about one hundred massive stars, and nobody had ever seen them. But here they are on the deep and sharp SofI images! Formation scenarios The presence of such a large number of very massive stars spread over the entire region suggests that star formation in the various regions of W49A must have happened rather simultaneously from different seeds and not, as some theories propose, by a "domino-type" chain effect where stellar winds of fast particles and the emitted radiation of newly formed massive stars trigger another burst of star formation in the immediate neighbourhood. The present research results also imply that star formation in W49A began earlier and extends over a larger area than previously thought. João Alves is sure that this news will be received with interest by his colleagues: "W49A has long been known to radio astronomers as one of the most powerful star-forming regions in the Galaxy, with 30 or so massive baby-stars of the O-type, very deeply embedded in their parental cloud. What we have found is in fact quite amazing: this stellar maternity ward is much bigger than we first thought and it has not stopped forming stars yet. We now have evidence for more than one hundred such stars in this region, way beyond the few dozen known until now". Nicole Homeier adds: "Above all, we uncovered four massive clusters in there, with stars as massive as 120 times the mass of our Sun - real 'beasts' that bombard their surroundings with incredibly intense stellar winds and strong ultraviolet light. This is not a nice place to live - and imagine, this is all inside our so-called 'quiet Galaxy'!"
More information The research described in this press release is presented in a research article in the professional research journal Astrophysical Journal ("Uncovering the Beast: Discovery of Embedded Massive Stellar Clusters in W49A" by João Alves and Nicole Homeier , Volume 589, pp. L45-L49). It is also one of the topics addressed by João Alves during his talk given at the General Assembly of the International Astronomical Union in Sydney on Tuesday, July 22, 2003.
NASA Astrophysics Data System (ADS)
2002-01-01
Fine Images of Saturn and Io with VLT NAOS-CONICA Summary With its new NAOS-CONICA Adaptive Optics facility, the ESO Very Large Telescope (VLT) at the Paranal Observatory has recently obtained impressive views of the giant planet Saturn and Io, the volcanic moon of Jupiter. They show the two objects with great clarity, unprecedented for a ground-based telescope. The photos were made during the ongoing commissioning of this major VLT instrument, while it is being optimized and prepared for regular observations that will start later this year. PR Photo 04a/02 : VLT NAOS-CONICA photo of the giant planet Saturn (composite H+K band image). PR Photo 04b/02 : The Jovian moon Io (Br-gamma image). PR Photo 04c/02 : The Jovian moon Io (composite Br-gamma + L' image). Commissioning of NAOS-CONICA progresses "First light" for the new NAOS-CONICA Adaptive Optics facility on the 8.2-m VLT YEPUN telescope at the Paranal Observatory was achieved in November 2001, cf. ESO PR 25/01. A second phase of the "commissioning" of the new facility began on January 22, 2002, now involving specialized observing modes and with the aim of trimming it to maximum performance before it is made available to the astronomers later this year. During this demanding and delicate work, more test images have been made of various astronomical objects [1]. Some of these show selected solar system bodies, for which the excellent image sharpness achievable with this new instrument is of special significance. In fact, the VLT photos of the giant planet Saturn and Io, the innermost of Jupiter's four large moons, are among the sharpest ever obtained from the ground . They even compare well with some photos obtained from space, as can be seen via the related weblinks indicated below. The raw NAOS-CONICA data from which these images shown in this Photo Release were produced are now available via the public VLT Science Archive Facility [2]. 
The NAOS adaptive optics corrector was built, under an ESO contract, by the Office National d'Etudes et de Recherches Aérospatiales (ONERA), Laboratoire d'Astrophysique de Grenoble (LAOG) and the DESPA and DASGAL laboratories of the Observatoire de Paris in France, in collaboration with ESO. The CONICA infra-red camera was built, under an ESO contract, by the Max-Planck-Institut für Astronomie (MPIA) (Heidelberg) and the Max-Planck Institut für Extraterrestrische Physik (MPE) (Garching) in Germany, in collaboration with ESO. Saturn - Lord of the rings. Caption: PR Photo 04a/02 shows the giant planet Saturn, as observed with the VLT NAOS-CONICA Adaptive Optics instrument on December 8, 2001; the distance was 1209 million km. It is a composite of exposures in two near-infrared wavebands (H and K) and displays well the intricate, banded structure of the planetary atmosphere and the rings. Note also the dark spot at the south pole at the bottom of the image. One of the moons, Tethys, is visible as a small point of light below the planet. It was used to guide the telescope and to perform the adaptive optics "refocussing" for this observation. More details in the text. Technical information about this photo is available below. This NAOS/CONICA image of Saturn (PR Photo 04a/02), the second-largest planet in the solar system, was obtained at a time when Saturn was close to summer solstice in the southern hemisphere. At this moment, the tilt of the rings was about as large as it can be, allowing the best possible view of the planet's South Pole. That area was on Saturn's night side in 1982 and could therefore not be photographed during the Voyager encounter. The dark spot close to the South Pole is a remarkable structure that measures approximately 300 km across.
It was only recently observed in visible light from the ground with a telescope at the Pic du Midi Observatory in the Pyrenees (France) - this is the first infrared image to show it. The bright spot close to the equator is the remnant of a giant storm in Saturn's extended atmosphere that has lasted more than 5 years. The present photo provides what is possibly the sharpest view of the ring system ever achieved from a ground-based observatory . Many structures are visible, the most obvious being the main ring sections, the inner C-region (here comparatively dark), the middle B-region (here relatively bright) and the outer A-region, and also the obvious dark "divisions", including the well-known, broad Cassini division between the A- and B-regions, as well as the Encke division close to the external edge of the A-region and the Colombo division in the C-region. Moreover, many narrow rings can be seen at this high image resolution , in particular within the C-region - they may be compared with those seen by the Voyager spacecraft during the flybys, cf. the weblinks below. This image demonstrates the capability of NAOS-CONICA to observe also extended objects with excellent spatial resolution. It is a composite of four short-exposure images taken through the near-infrared H (wavelength 1.6 µm) and K (2.2 µm) filters. This observation was particularly difficult because of the motion of Saturn during the exposure. To provide the best possible images, the Adaptive Optics system of NAOS was pointed towards the Saturnian moon Tethys , while the image of Saturn was kept at a fixed position on the CONICA detector by means of "differential tracking" (compensating for the different motions in the sky of Saturn and Tethys). This is also why the (faint) image of Tethys - visible south of Saturn (i.e., below the planet in PR Photo 04a/02 ) - appears slightly trailed. 
Io - volcanoes and sulphur. Caption: PR Photo 04b/02 shows Io, the volcanic moon of Jupiter, as imaged with the VLT NAOS-CONICA Adaptive Optics instrument on December 5, 2001, through a near-infrared, narrow optical filter (Brackett-gamma at wavelength 2.166 µm). Despite the small angular diameter of Io, about 1.2 arcsec, many features are visible at this excellent optical resolution. PR Photo 04c/02 is a composite of the same exposure with another obtained at a longer wavelength (L'-filter at 3.8 µm), with a latitude-longitude grid superposed and some of the main surface features identified. Technical information about these photos is available below. Io has a diameter of 3660 km and orbits Jupiter at a mean distance of 422,000 km - one revolution takes 42.5 hours. Like the Earth's moon, it always turns the same side towards the planet. As shown by the Voyager spacecraft in 1979, its surface is covered by active volcanoes and lava fields - it is in fact the most volcanic place known in the solar system. Due to this activity, Io's surface is continuously reshaped. The features now seen are all correspondingly young, with a mean age of the order of 1 million years only. The variations in appearance and colour are due to different volcanic deposits of sulphur compounds. The cause of all this activity is Jupiter's strong gravitational pull that leads to enormous stresses inside Io and related heating of the entire moon. PR Photo 04b/02 is a near-infrared NAOS-CONICA image of Io, obtained on December 5, 2001, through a narrow optical filter at wavelength 2.166 µm. The excellent image resolution makes it possible to identify many features on the surface. Some of these are volcanoes, others correspond to lava fields between them.
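The orbital figures quoted above (mean distance 422,000 km, period 42.5 hours) are enough to weigh Jupiter via Kepler's third law, M = 4π²a³/(GT²). This side calculation is mine, not part of the release:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
a = 4.22e8             # Io's mean orbital radius in metres (422,000 km)
T = 42.5 * 3600        # orbital period in seconds (42.5 hours)

# Kepler's third law solved for the central mass: M = 4*pi^2*a^3 / (G*T^2)
M_jupiter = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"{M_jupiter:.2e} kg")   # ~1.9e27 kg, the accepted mass of Jupiter
```

That the quoted orbit reproduces Jupiter's known mass is a quick consistency check on the numbers in the text.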
PR Photo 04c/02 is a composite of that image and another obtained at longer wavelength (3.8 µm). A latitude-longitude grid has been superposed, with the most prominent features identified by name, including some of the large volcanoes and sulphurous plains on this very active moon. Io has been observed with the NASA Galileo spacecraft since 1996 at higher resolution in the visible and infrared, especially during close encounters with the satellite (a link to Galileo maps of Io is available below). However, this NAOS image fills a gap in the surface coverage of the infrared images from Galileo. The capability of NAOS/CONICA to map Io in the infrared at the present high image resolution will allow astronomers to continue the survey of the volcanic activity and to monitor regularly the related surface processes. Related sites The following links point to a number of prominent photos of these two objects that were obtained elsewhere. Saturn Voyager images: http://vraptor.jpl.nasa.gov/voyager/vgrsat_img.html HST images: http://hubble.stsci.edu/news_.and._views/pr.cgi.2001+15 Pic du Midi images: http://www.bdl.fr/s2p/saturne.html IfA-CFHT: http://www.ifa.hawaii.edu/ao/images/solarsys/new/new.html Io NASA/Galileo site: http://www.jpl.nasa.gov/galileo/moons/io.html Volcanoes on Io: http://volcano.und.nodak.edu/vwdocs/planet_volcano/Io/Overview.html HST image of Io: http://hubble.stsci.edu/news_.and._views/pr.cgi.1997+21 Keck I image of Io: http://www.astro.caltech.edu/mirror/keck/realpublic/inst/ao/Io/IoSnapshot.jpg Galileo and Voyager maps of Io: http://www.lowell.edu/users/ijw/maps/ (also with names of surface features) Notes [1]: The following astronomers and engineers from ESO and the partner institutes have participated in the current commissioning observations of Saturn and Io with NAOS-CONICA: Wolfgang Brandner, Jean-Gabriel Cuby, Pierre Drossart, Thierry Fusco, Eric Gendron, Markus Hartung, Norbert Hubin, François Lacombe, Anne-Marie Lagrange, Rainer
Lenzen, David Mouillet, Claire Moutou, Gérard Rousset, Jason Spyromilio and Gérard Zins . [2]: New archive users may register via the ESO/ST-ECF Archive Registration Form. Technical information about the photos PR Photo 04a/02 is based on four exposures, obtained with VLT YEPUN and NAOS-CONICA on December 8, 2001 (UT). Two of these were made with an H-band filter (10 sec exposure each, wavelength 1.6 µm) and two with a K-band filter (12 sec each, 2.2 µm). The satellite Tethys (diameter 1070 km, orbiting Saturn at a distance of approx. 295,000 km) served as reference source for the Adaptive Optics corrections and the telescope was offset guided to compensate for the differential motion. The frames were reduced in the normal way with classical flats, dark and bias correction. No convolution was made before the two colours were combined to produce the image shown. At the time of the exposure, Saturn was 8.80 AU from the Earth. With a diameter of approx. 120,000 km, its disk subtended an angle of 20.6 arcsec. The nominal resolution of the NAOS-CONICA image, about 0.07 arcsec, thus corresponds to 410 km at Saturn. PR Photo 04b/02 is a reproduction based on a total exposure of 230 sec with VLT YEPUN and NAOS-CONICA on December 5, 2001, made through a Brackett-gamma filter centred at 2.166 µm. The resulting image resolution is 0.068 arcsec. At the moment of the exposure, the distance from the Earth to Io was about 641 million km (4.29 AU) and the image resolution therefore corresponds to approx. 210 km on the surface of the moon. PR Photo 04c/02 is based on a combination of the Brackett-gamma (here rendered as blue) with an L' frame (total exposure 4.2 sec; 3.800 µm; red), superposed with a coordinate grid and with some of the major surface features identified. The grid was produced with tools available at the website of the Institut de Mecanique Celeste et de Calcul des Ephemerides.
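The resolution figures in the technical notes follow from simple small-angle arithmetic; this short sketch of mine just re-derives them from the numbers quoted above:

```python
ARCSEC_PER_RAD = 206265.0

def arcsec_to_km(angle_arcsec, distance_km):
    """Small-angle approximation: physical size = distance * angle (in radians)."""
    return distance_km * angle_arcsec / ARCSEC_PER_RAD

# Saturn: a 120,000 km disk subtending 20.6 arcsec fixes the plate scale,
# so 0.07 arcsec of resolution corresponds to roughly 410 km on the planet.
km_per_arcsec_saturn = 120_000 / 20.6
print(round(km_per_arcsec_saturn * 0.07))   # ~408 km

# Io: 0.068 arcsec at a distance of 641 million km corresponds to ~210 km.
print(round(arcsec_to_km(0.068, 641e6)))    # ~211 km
```

Both results match the ~410 km and ~210 km values stated in the press release to within rounding.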
Explosions in Majestic Spiral Beauties
NASA Astrophysics Data System (ADS)
2004-12-01
Images of beautiful galaxies, and in particular of spiral brethren of our own Milky Way, leave no-one unmoved. It is indeed difficult to resist the charm of these impressive grand structures. Astronomers at the Paranal Observatory used the versatile VIMOS instrument on the Very Large Telescope to photograph two magnificent examples of such "island universes", both of which are seen in a southern constellation with an animal name. More significantly, both galaxies harboured a particular type of supernova, the explosion of a massive star during a late and fatal evolutionary stage. The first image (PR Photo 33a/04) is of the impressive spiral galaxy NGC 6118 [1], located near the celestial equator, in the constellation Serpens (The Snake). It is a comparatively faint object of 13th magnitude with a rather low surface brightness, making it rather hard to see in small telescopes. This shyness has prompted amateur astronomers to nickname NGC 6118 the "Blinking Galaxy", as it would appear to flick into existence when viewed through their telescopes in a certain orientation, and then suddenly disappear again as the eye position shifted. There is of course no such problem for the VLT's enormous light-collecting power and ability to produce sharp images, and this magnificent galaxy is here seen in unequalled detail. The colour photo is based on a series of exposures behind different optical filters, obtained with the VIMOS multi-mode instrument on the 8.2-m VLT Melipal telescope during several nights around August 21, 2004. About 80 million light-years away, NGC 6118 is a grand-design spiral seen at an angle, with a very small central bar and several rather tightly wound spiral arms (it is classified as of type "SA(s)cd" [2]) in which large numbers of bright bluish knots are visible. Most of them are active star-forming regions and in some, very luminous and young stars can be perceived.
Of particular interest is the comparatively bright stellar-like object situated directly North of the galaxy's centre, near the periphery (see PR Photo 33b/04): it is Supernova 2004dk, which was first reported on August 1, 2004. Observations a few days later showed this to be a supernova of Type Ib or Ic [3], caught a few days before maximum light. This particular kind of supernova is believed to result from the demise of a massive star that has somehow lost its entire hydrogen envelope, probably as a result of mass transfer in a binary system, before exploding. Also visible on the image is the trail left by a satellite, which passed by during one of the exposures taken in the B filter, hence its blue colour. This is an illustration that even in such a remote place as the Paranal Observatory in the Atacama desert, astronomers are not completely sheltered from light pollution. Caption: ESO PR Photo 33c/04 shows a composite colour-coded image of another magnificent spiral galaxy, NGC 7424, at a distance of 40 million light-years. It is based on images obtained with the multi-mode VIMOS instrument on the ESO Very Large Telescope (VLT) in three different wavelength bands (see Technical information below). The image covers 6.5 x 7.2 arcmin on the sky. North is up and East is to the right. The second galaxy imaged by the VLT (ESO PR Photo 33c/04) is another spiral, the beautiful multi-armed NGC 7424, which is seen almost directly face-on. Located at a distance of roughly 40 million light-years in the constellation Grus (the Crane), this galaxy was discovered by Sir John Herschel while observing at the Cape of Good Hope.
This other example of a "grand design" galaxy is classified as "SAB(rs)cd" [2], meaning that it is intermediate between normal spirals (SA) and strongly barred galaxies (SB) and that it has rather open arms with a small central region. It also shows many ionised regions as well as clusters of young and massive stars. Ten young massive star clusters can be identified whose sizes span the range from 1 to 200 light-years. The galaxy itself is roughly 100,000 light-years across, that is, quite similar in size to our own Milky Way galaxy. Because of its low surface brightness, this galaxy also demands dark skies and a clear night to be observed in this impressive detail. When viewed in a small telescope, it appears as a large elliptical haze with no trace of the many beautiful filamentary arms with a multitude of branches revealed in this striking VLT image. Note also the very bright and prominent bar in the middle. ESO PR Photo 33d/04 ESO PR Photo 33d/04 NGC 7424 and SN2001ig (FORS 2 and VIMOS + VLT) [Preview - JPEG: 400 x 596 pix - 44k] [Normal - JPEG: 800 x 1192 pix - 637k] Caption: ESO PR Photo 33d/04 shows two composite colour-coded images of a part of NGC 7424. The left image was made from an exposure taken with the FORS 2 instrument on VLT Yepun on June 16, 2002. In this, the supernova - although considerably fainter than when it was discovered six months earlier - is still well visible in the middle right of the image. The right image is part of PR Photo 33c/04, on the same scale. Obtained in October 2004, it shows that the supernova is no longer apparent. The image covers 3.8 x 3.2 arcmin. North is up and East is to the right. On the evening of 10 December 2001, Australian amateur astronomer Reverend Robert Evans, observing from his backyard in the Blue Mountains west of Sydney, discovered with his 30-cm telescope his 39th supernova, Supernova 2001ig in the outskirts of NGC 7424. 
Of magnitude 14.5 (that is, 3000 times fainter than the faintest star that can be seen with the unaided eye), this supernova brightened quickly by a factor of 8 to magnitude 12.3. A few months later, it had faded to an insignificant object below 17th magnitude. By comparison, the entire galaxy is of magnitude 11: at the time of its maximum, the supernova was thus only three times fainter than the whole galaxy. It must have been a splendid firework indeed! By digging into the vast Science Archive of the ESO Very Large Telescope, it was possible to find an image of NGC 7424 taken on June 16, 2002 by Massimo Turatto (Osservatorio di Padova-INAF, Italy) with the FORS 2 instrument on Yepun (UT4). Although the supernova was already much fainter than at its maximum 6 months earlier, it is still very well visible on this image (see PR Photo 33d/04). Spectra taken with ESO's 3.6-m telescope at La Silla over the months following the explosion showed the object gradually evolving into a Type Ib/c supernova; by October 2002, the transition was complete. It is now believed that this supernova arose from the explosion of a very massive star, a so-called Wolf-Rayet star, which together with a massive hot companion belonged to a very close binary system in which the two stars orbited each other once every 100 days or so (see the paper by Ryder et al.). Future detailed observations may reveal the presence of the companion star that survived this explosion but is now doomed to explode as another supernova in due time.
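The magnitude arithmetic quoted above (a rise from 14.5 to 12.3 being "a factor of 8", and magnitude 12.3 versus the galaxy's 11 being "three times fainter") follows directly from the logarithmic magnitude scale; a minimal sketch:

```python
def flux_ratio(m_faint, m_bright):
    """Brightness ratio implied by a magnitude difference:
    5 magnitudes correspond to a factor of exactly 100 in flux."""
    return 10 ** (0.4 * (m_faint - m_bright))

# SN 2001ig brightening from magnitude 14.5 to 12.3:
print(round(flux_ratio(14.5, 12.3)))        # 8, the quoted factor

# Supernova at maximum (12.3) versus the whole galaxy (11.0):
print(round(flux_ratio(12.3, 11.0), 1))     # 3.3, "only three times fainter"
```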
TOASTing Your Images With Montage
NASA Astrophysics Data System (ADS)
Berriman, G. Bruce; Good, John
2017-01-01
The Montage image mosaic engine is a scalable toolkit for creating science-grade mosaics of FITS files, according to the user's specifications of coordinates, projection, sampling, and image rotation. It is written in ANSI-C and runs on all common *nix-based platforms. The code is freely available and is released with a BSD 3-clause license. Version 5 is a major upgrade to Montage, and provides support for creating images that can be consumed by the World Wide Telescope (WWT). Montage treats the TOAST sky tessellation scheme, used by the WWT, as a spherical projection like those in the WCStools library. Thus images in any projection can be converted to the TOAST projection by Montage's reprojection services. These reprojections can be performed at scale on high-performance platforms and on desktops. WWT consumes PNG or JPEG files, organized according to WWT's tiling and naming scheme. Montage therefore provides a set of dedicated modules to create the required files from FITS images that contain the TOAST projection. There are two other major features of Version 5. It supports processing of HEALPix files to any projection in the WCStools library. And it can be built as a library that can be called from other languages, primarily Python. Project page: http://montage.ipac.caltech.edu. GitHub download page: https://github.com/Caltech-IPAC/Montage. ASCL record: ascl:1010.036. DOI: dx.doi.org/10.5281/zenodo.49418. Montage is funded by the National Science Foundation under Grant Number ACI-1440620.
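The TOAST tessellation that WWT consumes is a quadtree: level 0 is a single tile, and each deeper level splits every tile into a 2x2 block, so tile counts grow by powers of four. A small illustrative sketch of the pyramid geometry (the 256-pixel tile size is the usual WWT convention; the function name is ours, not a Montage module):

```python
def toast_level_stats(level, tile_px=256):
    """Tile count and full-map pixel width at a TOAST pyramid level.
    Level 0 is one tile; each deeper level splits every tile 2x2."""
    tiles_per_side = 2 ** level
    n_tiles = tiles_per_side ** 2        # = 4**level
    return n_tiles, tiles_per_side * tile_px

for lvl in range(4):
    n, px = toast_level_stats(lvl)
    print(f"level {lvl}: {n:3d} tiles, full map {px}x{px} px")
```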
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-01-01
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
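The two corrections discussed above, flat-field division for vignetting and a normalized band-ratio index that cancels overall scene illumination, can be sketched as follows (illustrative only; the array names and toy values are ours, not the authors' pipeline):

```python
import numpy as np

def correct_vignetting(image, flat):
    """Divide out a flat-field frame (normalized to its maximum) that
    models the radial brightness fall-off of the lens."""
    return image / (flat / flat.max())

def ndvi(nir, red):
    """Normalized Difference Vegetation Index; the ratio form is largely
    insensitive to overall scene illumination."""
    return (nir - red) / (nir + red + 1e-9)

# Toy example: a uniform scene seen through a vignetting lens
flat = np.array([[1.0, 0.5], [0.5, 0.25]])   # hypothetical sensitivity map
image = np.ones((2, 2)) * flat               # darkened corners
print(correct_vignetting(image, flat))       # recovers the uniform scene
```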
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIFF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide for streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
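The "controlled lossy" idea behind LERC can be illustrated with a simplified quantizer: choosing quantization steps of twice the allowed error bounds the reconstruction error per value. This sketch shows only the error-bounding principle, not the actual LERC block partitioning or bitstream:

```python
import numpy as np

def lerc_like_encode(block, max_error):
    """Quantize so no value is off by more than max_error: steps of
    2*max_error guarantee |decoded - original| <= max_error."""
    base = float(block.min())
    q = np.round((block - base) / (2 * max_error)).astype(np.int64)
    return base, q

def lerc_like_decode(base, q, max_error):
    return base + q * (2 * max_error)

data = np.array([10.0, 10.7, 11.9, 15.2])
base, q = lerc_like_encode(data, max_error=0.5)
restored = lerc_like_decode(base, q, 0.5)
assert np.all(np.abs(restored - data) <= 0.5)   # the error bound holds
print(q)   # small integers, cheap to entropy-code
```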
Evaluation of Algorithms for Compressing Hyperspectral Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), who has an extensive heritage in HSI spectral compression, and Mapping Science (MSI) for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently in the process of evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.
An adaptable navigation strategy for Virtual Microscopy from mobile platforms.
Corredor, Germán; Romero, Eduardo; Iregui, Marcela
2015-04-01
Real integration of Virtual Microscopy with the pathologist service workflow requires the design of adaptable strategies for any hospital service to interact with a set of Whole Slide Images. Nowadays, mobile devices have the potential to support an online pervasive network of specialists working together. However, such devices are still very limited. This article introduces a novel, highly adaptable strategy for streaming and visualizing WSI from mobile devices. The presented approach effectively exploits and extends the granularity of the JPEG2000 standard and integrates it with different strategies to achieve a lossless, loosely coupled, decoder- and platform-independent implementation, adaptable to any interaction model. The performance was evaluated by two expert pathologists interacting with a set of 20 virtual slides. The method efficiently uses the available device resources: memory usage did not exceed 7% of the device capacity, while decoding times were below 200 ms per Region of Interest, i.e., a window of 256×256 pixels. This model is easily adaptable to other medical imaging scenarios. Copyright © 2015 Elsevier Inc. All rights reserved.
High-resolution seismic-reflection data from offshore northern California — Bolinas to Sea Ranch
Sliter, Ray W.; Johnson, Samuel Y.; Chin, John L.; Allwardt, Parker; Beeson, Jeffrey; Triezenberg, Peter J.
2016-12-05
The U.S. Geological Survey collected high-resolution seismic-reflection data in September 2009, on survey S-8-09-NC, offshore of northern California between Bolinas and Sea Ranch. The survey area spans about 125 km of California’s coast and extends around Point Reyes. Data were collected aboard the U.S. Geological Survey R/V Parke Snavely. Cumulatively, ~1,150 km of seismic-reflection data were acquired using a SIG 2mille minisparker. Subbottom acoustic depth of penetration spanned tens to several hundred meters and varied by location and underlying sediments and rock types. This report includes maps and a navigation file of the surveyed transects, utilizing Google Earth™ software, as well as digital data files showing images of each transect in SEG-Y and JPEG formats. The images of bedrock, sediment deposits, and tectonic structure provide geologic information that is essential to hazard assessment, regional sediment management, and coastal and marine spatial planning at Federal, State and local levels. This information is also valuable for future research on the geomorphic, sedimentary, tectonic, and climatic record of central California.
Real-time access of large volume imagery through low-bandwidth links
NASA Astrophysics Data System (ADS)
Phillips, James; Grohs, Karl; Brower, Bernard; Kelly, Lawrence; Carlisle, Lewis; Pellechia, Matthew
2010-04-01
Providing current, time-sensitive imagery and geospatial information to deployed tactical military forces or first responders continues to be a challenge. This challenge is compounded through rapid increases in sensor collection volumes, both with larger arrays and higher temporal capture rates. Focusing on the needs of these military forces and first responders, ITT developed a system called AGILE (Advanced Geospatial Imagery Library Enterprise) Access as an innovative approach based on standard off-the-shelf techniques to solving this problem. The AGILE Access system is based on commercial software called Image Access Solutions (IAS) and incorporates standard JPEG 2000 processing. Our solution system is implemented in an accredited, deployable form, incorporating a suite of components, including an image database, a web-based search and discovery tool, and several software tools that act in concert to process, store, and disseminate imagery from airborne systems and commercial satellites. Currently, this solution is operational within the U.S. Government tactical infrastructure and supports disadvantaged imagery users in the field. This paper presents the features and benefits of this system to disadvantaged users as demonstrated in real-world operational environments.
Watching the Birth of a Galaxy Cluster?
NASA Astrophysics Data System (ADS)
1999-07-01
First Visiting Astronomers to VLT ANTU Observe the Early Universe When the first 8.2-m VLT Unit Telescope (ANTU) was "handed over" to the scientists on April 1, 1999, the first "visiting astronomers" at Paranal were George Miley and Huub Rottgering from the Leiden Observatory (The Netherlands) [1]. They obtained unique pictures of a distant exploding galaxy known as 1138-262. These images provide new information about how massive galaxies and clusters of galaxies may have formed in the early Universe. Formation of clusters of galaxies An intriguing question in modern astronomy is how the first galaxies and groupings or clusters of galaxies emerged from the primeval gas produced in the Big Bang. Some theories predict that giant galaxies, often found at the centres of rich galaxy clusters, are built up through a step-wise process. Clumps develop in this gas and stars condense out of those clumps to form small galaxies. Finally these small galaxies merge together to form larger units. An enigmatic class of objects important for investigating such scenarios are galaxies which emit intense radio emission from explosions that occur deep in their nuclei. The explosions are believed to be triggered when material from the merging swarm of smaller galaxies is fed into a rotating black hole located in the central regions. There is strong evidence that these distant radio galaxies are amongst the oldest and most massive galaxies in the early Universe and are often located at the heart of rich clusters of galaxies. They can therefore help pinpoint regions of the Universe in which large galaxies and clusters of galaxies are being formed. The radio galaxy 1138-262 The first visiting astronomers pointed ANTU towards a particularly important radio galaxy named 1138-262. It is located in the southern constellation Hydra (The Water Snake). This galaxy was discovered some years ago using ESO's 3.5-m New Technology Telescope (NTT) at La Silla. 
Because 1138-262 is at a distance of about 10,000 million light-years from the Earth (the redshift is 2.2), the VLT sees it as it was when the Universe was only about 20% of its present age. Previous observations of this galaxy by the same team of astronomers showed that its radio, X-ray and optical emission had many extreme characteristics that would be expected from a giant galaxy, forming at the centre of a rich cluster. However, because the galaxy is so distant, the cluster could not be seen directly. Radio data obtained by the Very Large Array (VLA) in the USA and X-ray data with the ROSAT satellite both indicated that the galaxy is surrounded by a hot gas similar to that observed at the centres of nearby rich clusters of galaxies. Most telling was a picture taken by the Hubble Space Telescope that revealed that the galaxy comprises a large number of clumps, and which bore a remarkable resemblance to computer models of the birth of giant galaxies in clusters. From these observations, it was concluded that 1138-262 is likely to be a massive galaxy in the final stage of assemblage through merging with many smaller galaxies in an infant rich cluster and the most distant known X-ray cluster. VLT obtains Lyman-alpha images ESO PR Photo 33a/99 ESO PR Photo 33a/99 [Preview - JPEG: 483 x 400 pix - 86k] [Normal - JPEG: 966 x 800 pix - 230k] [High-Res - JPEG: 2894 x 2396 pix - 1.1M] Caption to ESO PR Photo 33a/99: False-colour picture of the ionized hydrogen gas surrounding 1138-262 (Lyman-alpha). The size of this cloud is about 5 times larger than the optical extent of the Milky Way Galaxy. A contour plot, as observed with VLT ANTU + FORS1 in a narrow-band filter around the wavelength of the redshifted Lyman-alpha line, is superposed on a false-colour representation of the same image. The contour levels are a geometric progression in steps of √2. The image has not been flux calibrated, so the first contour level is arbitrary. 
The field measures 35 x 25 arcsec², corresponding to about 910,000 x 650,000 light-years (280 x 200 kpc). The linear scale is indicated at the lower left. North is up and East is left. The Leiden astronomers used the FORS1 instrument on ANTU to take long-exposure pictures of 1138-262 and a surrounding field of 36 square arcmin. Images were obtained through two optical filters, one which tunes in to light produced by hydrogen gas (the redshifted Lyman-alpha line) and the other which is dominated by light from stars (the B-band). The "difference" between the images shows that the hydrogen gas surrounding the galaxy and from which the galaxy is presumably forming is huge (Photo 33a/99). The measured size is about 20 arcsec or, at the distance of the cluster, somewhat more than 500,000 light-years (160 kpc), making it the largest such structure ever seen. It corresponds to about 5 times the size of the optical extent of the Milky Way Galaxy! ESO PR Photo 33b/99 ESO PR Photo 33b/99 [Preview - JPEG: 400 x 593 pix - 149k] [Normal - JPEG: 800 x 1185 pix - 335k] [High-Res - JPEG: 1982 x 2935 pix - 1.1M] Caption to ESO PR Photo 33b/99: Three small fields near radio galaxy 1138-262 as observed with VLT ANTU + FORS1 in a narrow-band filter at the redshifted wavelength of Lyman-alpha emission in that galaxy (left) and a broader filter in the surrounding spectral region (right), respectively. Three excellent candidates of Lyman-alpha emitters are seen at the centres of the fields. They are clearly visible in the narrow-band image (that mostly shows the gas), but are not detected in the broad-band image (that mostly shows the stars). Each field measures 24 x 24 arcsec², corresponding to about 620,000 x 620,000 light-years (190 x 190 kpc); North is up and East is left. Even more intriguing is the presence of a number of objects in the gas picture (to the left in PR Photo 33b/99), but absent from the stars' picture (right). 
These are galaxies whose hydrogen gas is emitting the bright Lyman-alpha spectral line within a distance of the order of about 3 million light-years (1 Mpc) from the radio galaxy, and probably in the surrounding cluster. The team has pinpointed a total of 26 objects in the surrounding field that may be companion galaxies with fainter hydrogen emission. The detection by the VLT of the huge gas halo and of the companion galaxies is further evidence that 1138-262 is a massive galaxy, forming in a group or cluster of galaxies. The next step The next step in the project will be to confirm the distances of the candidate companion galaxies and establish that they are indeed members of a cluster of galaxies surrounding 1138-262. This can be done using one of the spectrographs on the VLT. Note [1] The project on 1138-262 is being carried out by a large international consortium of scientists led by astronomers from the Leiden Observatory. Besides George Miley and Huub Rottgering, the team includes Jaron Kurk, Laura Pentericci, and Bram Venemans (Leiden), Alan Moorwood (ESO), Chris Carilli (US National Radio Astronomy Observatory - NRAO), Wil van Breugel (University of California, USA), Holland Ford and Tim Heckman (Johns Hopkins University, Baltimore, USA) and Pat McCarthy (Carnegie Institute, Pasadena, USA). Technical information about the VLT images of 1138-262: Narrow and broad-band imaging was carried out on April 12 and 13, 1999, with the ESO VLT ANTU (UT1), using the FORS1 multi-mode instrument in imaging mode. A narrow-band filter was used which has a central wavelength of 381.4 nm and a bandpass of 6.5 nm. For 1138-262 (redshift z = 2.2), the emission of Lyman-alpha at 121.6 nm is redshifted to 383.8 nm, which falls in this narrow band. The broad-band filter was a Bessel-B with central wavelength of 429.0 nm. The detector was a Tektronix CCD with 2048 x 2046 pixels and an image scale of 0.20 arcsec/pixel. 
Eight separate 30-min exposures were taken in the narrow band and six 5-min exposures in the broad band, shifted by about 20 arcsec with respect to each other to minimize problems due to flat-fielding and to facilitate cosmic ray removal. The average seeing was 1.0 arcsec. Image reduction was carried out by means of the IRAF reduction package. The individual images were bias-subtracted and flat-fielded using twilight exposures (narrow band) or an average of the unregistered science exposures (broad band). The images were then registered by shifting them in position by an amount determined from the location of several stars on the CCD. The registered images were co-added, and pixels affected by cosmic rays were cleaned. To improve the signal-to-noise ratio, the resulting images were smoothed with a Gaussian function having full width at half maximum (FWHM) = 1 arcsec (5 pixels). How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org/). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
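The filter arithmetic in the technical information can be checked directly: cosmological redshift stretches wavelengths by (1 + z), and the quoted observed Lyman-alpha wavelength of 383.8 nm corresponds to z ≈ 2.16 (rounded to 2.2 in the text), comfortably inside the 6.5-nm bandpass centred at 381.4 nm:

```python
LYA_REST_NM = 121.6   # rest wavelength of the Lyman-alpha line

def observed_wavelength(rest_nm, z):
    """Redshift stretches wavelengths: lambda_obs = lambda_rest * (1 + z)."""
    return rest_nm * (1 + z)

def redshift_from(rest_nm, observed_nm):
    return observed_nm / rest_nm - 1

# The quoted observed wavelength pins down the precise redshift:
print(round(redshift_from(LYA_REST_NM, 383.8), 2))   # 2.16 (quoted as z = 2.2)

# Filter check: the line sits inside the 381.4 +/- 3.25 nm narrow band
assert 381.4 - 6.5 / 2 <= observed_wavelength(LYA_REST_NM, 2.156) <= 381.4 + 6.5 / 2
```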
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in the characteristics of both their spectral and spatial domains. In this paper, a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band together with the immediate previous band when we apply HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are validated on three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
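The core premise, that a spectral band can be predicted from previous bands and only the residual coded, can be illustrated with a much simpler stand-in than the paper's Gaussian-mixture model: an ordinary least-squares fit of one band against its predecessor (toy data, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "cube": two strongly correlated 8x8 spectral bands
band0 = rng.random((8, 8))
band1 = 0.9 * band0 + 0.05 + 0.01 * rng.random((8, 8))

# Fit band1 ~ a*band0 + b; the fit plays the role of the predicted
# reference band, and only the residual would need to be coded
A = np.stack([band0.ravel(), np.ones(band0.size)], axis=1)
coef, *_ = np.linalg.lstsq(A, band1.ravel(), rcond=None)
predicted = (A @ coef).reshape(band1.shape)
residual = band1 - predicted

# The residual is far smaller than the raw band-to-band difference,
# which is what makes residual-based coding attractive
print(np.abs(residual).mean(), np.abs(band1 - band0).mean())
```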
Interactive Courseware Standards
1992-07-01
music industry standard provides data formats and transmission specifications for musical notation. Joint Photographic Experts Group (JPEG). This...has been used in the music industry for several years, especially for electronically programmable keyboards and 16 instruments. The video compression
Content Preserving Watermarking for Medical Images Using Shearlet Transform and SVD
NASA Astrophysics Data System (ADS)
Favorskaya, M. N.; Savchina, E. I.
2017-05-01
Medical Image Watermarking (MIW) is a special field of watermarking, shaped by the requirements of the Digital Imaging and Communications in Medicine (DICOM) standard since 1993. All 20 parts of the DICOM standard are revised periodically. The main idea of MIW is to embed various types of information into the host medical image, including the doctor's digital signature, a fragile watermark, the electronic patient record, and a main watermark in the form of a region of interest for the doctor. These four types of information are represented in different forms; some of them are encrypted according to the DICOM requirements. However, all types of information must be combined into a generalized binary stream for embedding. The generalized binary stream may have a huge volume. Therefore, not all watermarking methods can be applied successfully. Recently, the digital shearlet transform has been introduced as a rigorous mathematical framework for the geometric representation of multi-dimensional data. Some modifications of the shearlet transform, particularly the non-subsampled shearlet transform, can be associated with a multi-resolution analysis that provides a fully shift-invariant, multi-scale, and multi-directional expansion. During experiments, the quality of the extracted watermarks under JPEG compression and typical internet attacks was estimated using several metrics, including the peak signal-to-noise ratio, structural similarity index measure, and bit error rate.
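As a rough illustration of SVD-based embedding (one common building block of such schemes; the paper combines it with a shearlet-domain decomposition, which is omitted here), a watermark can be added to a host matrix's singular values and recovered by comparing against the originals. The toy diagonal host and parameter names are ours:

```python
import numpy as np

def embed_svd(host, watermark, alpha=0.05):
    """Perturb the host's singular values by a scaled watermark."""
    U, S, Vt = np.linalg.svd(host, full_matrices=False)
    marked = U @ np.diag(S + alpha * watermark) @ Vt
    return marked, S            # original S serves as the extraction key

def extract_svd(marked, S_orig, alpha=0.05):
    S_m = np.linalg.svd(marked, compute_uv=False)
    return (S_m - S_orig) / alpha

# Toy host with well-separated singular values, so the small
# perturbation cannot reorder them
host = np.diag(np.arange(8.0, 0.0, -1.0))
wm = np.linspace(0.4, 0.1, 8)               # hypothetical watermark payload
marked, key = embed_svd(host, wm)
recovered = extract_svd(marked, key)
print(np.allclose(recovered, wm, atol=1e-8))   # True
```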
Compression techniques in tele-radiology
NASA Astrophysics Data System (ADS)
Lu, Tianyu; Xiong, Zixiang; Yun, David Y.
1999-10-01
This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Due to the voluminous medical image data and image streams generated at interactive frame rates in the application, the importance of deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance via various kinds of communication networks. In particular, the compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are JPEG and H.263 lossy methods and Lempel-Ziv (LZ77) lossless methods. Both objective and subjective assessments of the effect of lossy compression methods on the volume data are conducted. Favorable results are obtained showing that a substantial compression ratio is achievable within distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound to achieve acceptable quality when applying lossy compression to anatomy volume data (e.g. CT). For computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have a significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
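The 30 dB threshold cited is in terms of the standard peak signal-to-noise ratio; a minimal sketch of its computation:

```python
import numpy as np

def psnr(original, compressed, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    diff = np.asarray(original, float) - np.asarray(compressed, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")            # identical images (lossless)
    return 10 * np.log10(max_val ** 2 / mse)

a = np.full((4, 4), 100.0)
b = a + 2.0                            # uniform error of 2 grey levels
print(round(float(psnr(a, b)), 1))     # MSE = 4 -> about 42.1 dB
```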
The Blob, the Very Rare Massive Star and the Two Populations
NASA Astrophysics Data System (ADS)
2005-04-01
The nebula N214 [1] is a large region of gas and dust located in a remote part of our neighbouring galaxy, the Large Magellanic Cloud. N214 is a quite remarkable site where massive stars are forming. In particular, its main component, N214C (also named NGC 2103 or DEM 293), is of special interest since it hosts a very rare massive star, known as Sk-71 51 [2] and belonging to a peculiar class with only a dozen known members in the whole sky. N214C thus provides an excellent opportunity for studying the formation site of such stars. Using ESO's 3.5-m New Technology telescope (NTT) located at La Silla (Chile) and the SuSI2 and EMMI instruments, astronomers from France and the USA [3] studied in great depth this unusual region by taking the highest resolution images so far as well as a series of spectra of the most prominent objects present. N214C is a complex of ionised hot gas, a so-called H II region [4], spreading over 170 by 125 light-years (see ESO PR Photo 12b/05). At the centre of the nebula lies Sk-71 51, the region's brightest and hottest star. At a distance of ~12 light-years north of Sk-71 51 runs a long arc of highly compressed gas created by the strong stellar wind of the star. There are a dozen less bright stars scattered across the nebula and mainly around Sk-71 51. Moreover, several fine, filamentary structures and fine pillars are visible. The green colour in the composite image, which covers the bulk of the N214C region, comes from doubly ionised oxygen atoms [5] and indicates that the nebula must be extremely hot over a very large extent. The Star Sk-71 51 decomposed ESO PR Photo 12c/05 ESO PR Photo 12c/05 The Cluster Around Sk-71 51 [Preview - JPEG: 400 x 620 pix - 189k] [Normal - JPEG: 800 x 1239 pix - 528k] Caption: ESO PR Photo 12c/05 shows a small field around the hot star Sk-71 51 as seen through the V filter. The left image shows a single frame after subtraction of the nebular background. 
The image quality - or seeing - is roughly 8.5 pixels, corresponding to 0".72. The right panel shows the same field after applying a sophisticated image-sharpening software ("deconvolution"). The resulting resolution of the sources is 3 pixels, or 0".25 on the sky. This shows that the brightest object is in fact a very tight cluster, composed of 6 stars in an area 4 arcseconds wide. The field size is 21".7 x 21".7. North is up and east to the left. The central and brightest object in ESO PR Photo 12b/05 is not a single star but a small, compact cluster of stars. In order to study this very tight cluster in great detail, the astronomers used sophisticated image-sharpening software to produce high-resolution images on which precise brightness and positional measurements could then be performed (see ESO PR Photo 12c/05). This so-called "deconvolution" technique makes it possible to visualize this complex system much better, leading to the conclusion that the tight core of the Sk-71 51 cluster, covering a ~ 4 arc seconds area, is made up of at least 6 components. From additional spectra taken with EMMI (ESO Multi-Mode Instrument), the brightest component is found to belong to the rare class of very massive stars of spectral type O2 V((f*)). The astronomers derive a mass of ~80 solar masses for this object but it might well be that this is a multiple system, in which case, each component would be less massive. Stellar populations ESO PR Photo 12d/05 ESO PR Photo 12d/05 Colour-Magnitude Diagram of 2341 Stars towards N214C [Preview - JPEG: 400 x 453 pix - 118k] [Normal - JPEG: 800 x 906 pix - 278k] Caption: ESO PR Photo 12d/05 presents a colour-magnitude, V versus B - V, diagram for the 2341 stars observed toward LMC N214C. Three curves are shown, representing the positions of stars having an age of 1 million years (red curve), 1,000 million years (dotted blue), and 10,000 million years (dashed-dotted green), computed for the LMC metallicity and distance. 
It is clear from this diagram that N214C is composed of two populations: a very young one, containing very massive stars, and an older one. The star numbered 17 is the main component of the Sk-71 51 cluster. From the unique images obtained and reproduced as ESO PR Photo 12b/05, the astronomers could study in great depth the properties of the 2341 stars lying towards the N214C region. This was done by placing them in a so-called colour-magnitude diagram, where the abscissa is the colour (representative of the temperature of the object) and the ordinate the magnitude (related to the intrinsic brightness). Plotting the temperature of stars against their intrinsic brightness reveals a typical distribution that reflects their different evolutionary stages. Two main stellar populations show up in this particular diagram (ESO PR Photo 12d/05): a main sequence, that is, stars that, like the Sun, are still centrally burning their hydrogen, and an evolved population. The main sequence is made up of stars with initial masses from roughly 2-4 up to about 80 solar masses. The stars that follow the red line on ESO PR Photo 12d/05 are main sequence stars still very young, with an estimated age of only about 1 million years. The evolved population is mainly composed of much older and lower mass stars, having an age of 1,000 million years. From their work, the astronomers classified several massive O and B stars, which are associated with the H II region and therefore contribute to its ionisation. A Blob of Ionised Gas Caption: ESO PR Photo 12e/05 zooms in on the nebular blob lying ~60" (50 light-years) north of the Sk-71 51 cluster. The image is based on individual exposures taken through narrow-band filters around H-alpha (red), [O III] (green) and H-beta (blue).
The field size is 104" x 101" on the sky, corresponding to roughly 85 by 82 light-years. North is up and east to the left. A remarkable feature of N214C is the presence of a globular blob of hot and ionised gas ~60 arcseconds (~50 light-years in projection) north of Sk-71 51. It appears as a sphere about four light-years across, split into two lobes by a dust lane which runs along an almost north-south direction (ESO PR Photo 12e/05). The blob seems to be placed on a ridge of ionised gas that follows the structure of the blob, implying a possible interaction. The H II blob coincides with a strong infrared source, 05423-7120, which was detected with the IRAS satellite. The observations indicate the presence of a massive heat source, 200,000 times more luminous than the Sun. This is most probably due to an O7 V star of about 40 solar masses embedded in an infrared cluster. Alternatively, it might well be that the heating arises from a very massive star of about 100 solar masses still in the process of being formed. "It is possible that the blob resulted from massive star formation following the collapse of a thin shell of neutral matter accumulated through the effect of strong irradiation and heating by the star Sk-71 51", says Mohammad Heydari-Malayeri from the Observatoire de Paris (France), a member of the team. "Such 'sequential star formation' has probably also occurred toward the southern ridge of N214C." Newcomer to the Family The compact H II region discovered in N214C may be a newcomer to the family of HEBs ("High Excitation Blobs") in the Magellanic Clouds, the first member of which was detected in LMC N159 at ESO. In contrast to the typical H II regions of the Magellanic Clouds, which are extended structures spanning more than 150 light-years and are powered by a large number of hot stars, HEBs are dense, small regions usually "only" 4 to 9 light-years wide.
Moreover, they often form adjacent to or apparently inside the typical giant H II regions, and rarely in isolation. "The formation mechanisms of these objects are not yet fully understood, but it seems certain that they represent the youngest massive stars of their OB associations", explains Frederic Meynadier, another member of the team from the Observatoire de Paris. "So far only a half-dozen of them have been detected and studied, using the ESO telescopes as well as the Hubble Space Telescope. But the stars responsible for the excitation of the tightest or youngest members of the family still remain to be detected." More information The research on N214C has been presented in a paper accepted for publication by the leading professional journal Astronomy and Astrophysics ("The LMC H II Region N214C and its peculiar nebular blob", by F. Meynadier, M. Heydari-Malayeri and Nolan R. Walborn). The full text is freely accessible as a PDF file from the A&A web site. Notes [1]: The letter "N" (for "Nebula") in the designation of these objects indicates that they were included in the "Catalogue of H-alpha emission stars and nebulae in the Magellanic Clouds" compiled and published in 1956 by American astronomer-astronaut Karl Henize (1926 - 1993). [2]: The name Sk-71 51 is the abbreviation of Sanduleak -71 51. The American astronomer Nicholas Sanduleak, while working at the Cerro Tololo Observatory, published in 1970 an important list of objects (stars and nebulae showing emission lines in their spectra) in the Magellanic Clouds. The "-71" in the star's name is the declination of the object, while the "51" is the entry number in the catalogue. [3]: The team of astronomers consists of Frederic Meynadier and Mohammad Heydari-Malayeri (LERMA, Paris Observatory, France), and Nolan R. Walborn (Space Telescope Science Institute, USA).
[4]: A gas is said to be ionised when its atoms have lost one or more electrons - in this case by the action of energetic ultraviolet radiation emitted by very hot and luminous stars close by. The heated gas shines mostly in the light of ionised hydrogen (H) atoms, leading to an emission nebula. Such nebulae are referred to as "H II regions". The well-known Orion Nebula is an outstanding example of this type of nebula, cf. ESO PR Photos 03a-c/01 and ESO PR Photo 20/04. [5]: The hotter the central object of an emission nebula, the hotter and more excited the surrounding nebula will be. The word "excitation" refers to the degree of ionisation of the nebular gas. The more energetic the impinging particles and radiation, the more electrons will be lost and the higher the degree of excitation. In N214C, the central cluster of stars is so hot that the oxygen atoms are twice ionised, i.e. they have lost two electrons.
Implementation of remote monitoring and managing switches
NASA Astrophysics Data System (ADS)
Leng, Junmin; Fu, Guo
2010-12-01
In order to strengthen the safety performance of the network and provide greater convenience and efficiency for operators and managers, a system for remote monitoring and managing of switches has been designed and implemented using advanced network technology and existing network resources. A fast-speed Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and can support a number of protocols. The Motion-JPEG image compression algorithm is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and managing system is designed and implemented according to the current infrastructure of the network and switches. The control and administrative software is built on the dynamic web-page development platform Java Server Pages (JSP). An SQL (Structured Query Language) Server database is used to store and access image information, network messages and user data. The reliability and security of the system are further strengthened by access control. The software in the system is cross-platform, so that multiple operating systems (UNIX, Linux and Windows) are supported. The application of the system can greatly reduce manpower cost, and problems can be found and solved quickly.
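The abstract does not give implementation details, but Motion-JPEG streams of the kind described are commonly delivered over HTTP as a multipart/x-mixed-replace response, one complete JPEG per part, so each frame decodes independently. A minimal illustrative sketch (the boundary string and helper name are hypothetical, not from the paper):

```python
# Sketch of wrapping camera frames for a Motion-JPEG HTTP stream.
# BOUNDARY and mjpeg_part are illustrative names, not the authors' code.

BOUNDARY = b"--mjpegframe"

def mjpeg_part(jpeg_bytes):
    """Wrap one compressed JPEG frame as a multipart stream part."""
    header = (BOUNDARY + b"\r\n"
              + b"Content-Type: image/jpeg\r\n"
              + b"Content-Length: " + str(len(jpeg_bytes)).encode()
              + b"\r\n\r\n")
    return header + jpeg_bytes + b"\r\n"

# A server loop would write mjpeg_part(frame) for each captured frame;
# clients simply decode JPEGs as they arrive.
```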
Cornelissen, Frans; Cik, Miroslav; Gustin, Emmanuel
2012-04-01
High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After review of existing applications, we realized the possibility of adding innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.
A COLLISION IN THE HEART OF A GALAXY
NASA Technical Reports Server (NTRS)
2002-01-01
The Hubble Space Telescope's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) has uncovered a collision between two spiral galaxies in the heart of the peculiar galaxy called Arp 220. The collision has provided the spark for a burst of star formation. The NICMOS image captures bright knots of stars forming in the heart of Arp 220. The bright, crescent moon-shaped object is a remnant core of one of the colliding galaxies. The core is a cluster of 1 billion stars. The core's half-moon shape suggests that its bottom half is obscured by a disk of dust about 300 light-years across. This disk is embedded in the core and may be swirling around a black hole. The core of the other colliding galaxy is the bright round object to the left of the crescent moon-shaped object. Both cores are about 1,200 light-years apart and are orbiting each other. Arp 220, located 250 million light-years away in the constellation Serpens, is the 220th object in Halton Arp's Atlas of Peculiar Galaxies. The image was taken with three filters. The colors have been adjusted so that, in this infrared image, blue corresponds to shorter wavelengths, red to longer wavelengths. The image was taken April 5, 1997. Credits: Rodger Thompson, Marcia Rieke, Glenn Schneider (University of Arizona) and Nick Scoville (California Institute of Technology), and NASA Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from ftp.stsci.edu in /pubinfo.
Effect of resin infiltration on white spot lesions after debonding orthodontic brackets.
Hammad, Shaza M; El Banna, Mai; El Zayat, Inas; Mohsen, Mohamed Abdel
2012-02-01
To evaluate the effect of application of a resin infiltration material on masking white spot lesions (WSLs) after bracket removal. Eighteen patients participated in this study and were divided into two groups of nine patients each by a visual score based on the extent of demineralization, according to the classification of the WSLs. Group 1: visible WSLs without surface disruption; Group 2: WSLs showing a roughened surface but not requiring restoration. Three successive photographs were taken for every patient: immediately after bracket removal, 1 week after oral hygiene measures, and after Icon material application. The JPEG images were imported into image analysis software (ImageJ version 1.33u for Windows XP, US National Institutes of Health), which converted the images into gray-scale histograms (0 to 255). Initial and final images were compared for the percentage of WSL area masked. For both groups, a statistically significant difference at P<0.05 was obtained as follows: for WSLs in Group 1, the mean gray-scale values for the initial and final photographs were 126.091 +/- 13.452 and 221.268 +/- 9.350 respectively, significant by Wilcoxon's signed rank test (P = 0.038, P<0.05). For WSLs in Group 2, the mean gray-scale values for the initial and final photographs were 95.585 +/- 20.973 and 155.612 +/- 31.203 respectively, significant by Wilcoxon's signed rank test (P = 0.029, P<0.05).
Privacy enabling technology for video surveillance
NASA Astrophysics Data System (ADS)
Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj
2006-05-01
In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. Specifically, the sign of selected transform coefficients is flipped during encoding. We focus on the case of Motion JPEG 2000. Simulation results show that the technique can successfully conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and a negligible increase in computational complexity. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or a 2G/3G mobile phone network. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
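The scrambling idea, flipping the sign of selected transform coefficients under a key, can be sketched as a toy on a flat coefficient list. This is an illustration of the principle, not the authors' Motion JPEG 2000 implementation; the 50% selection probability and list layout are assumptions:

```python
import random

def scramble_signs(coeffs, key, region):
    """Flip the sign of pseudorandomly selected transform coefficients
    inside a region of interest. Applying the same key again restores
    the original values, since a sign flip is its own inverse."""
    rng = random.Random(key)          # keyed pseudorandom sequence
    out = list(coeffs)
    for i in region:
        if rng.random() < 0.5:        # select roughly half the coefficients
            out[i] = -out[i]
    return out
```

Because only signs change, coefficient magnitudes (and thus compressibility) are untouched, which is why the impact on coding performance is small.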
Chemistry Comes Alive! Vol. 3: Abstract of Special Issue 23 on CD-ROM
NASA Astrophysics Data System (ADS)
Jacobsen, Jerrold J.; Moore, John W.
1999-09-01
Literature Cited
1. Jacobsen, J. J.; Moore, J. W. Chemistry Comes Alive! Vol. 1 [CD-ROM]; J. Chem. Educ. Software 1998, SP 18.
2. Jacobsen, J. J.; Moore, J. W. Chemistry Comes Alive! Vol. 2 [CD-ROM]; J. Chem. Educ. Software 1998, SP 21.
3. Moore, J. W.; Jacobsen, J. J.; Hunsberger, L. R.; Gammon, S. D.; Jetzer, K. H.; Zimmerman, J. ChemDemos Videodisc; J. Chem. Educ. Software 1994, SP 8.
4. Moore, J. W.; Jacobsen, J. J.; Jetzer, K. H.; Gilbert, G.; Mattes, F.; Phillips, D.; Lisensky, G.; Zweerink, G. ChemDemos II; J. Chem. Educ. Software 1996, SP 14.
5. Jacobsen, J. J.; Jetzer, K. H.; Patani, N.; Zimmerman, J. Titration Techniques Videodisc; J. Chem. Educ. Software 1995, SP 9.
Morgan, Karen L. M.
2017-04-03
The U.S. Geological Survey (USGS), as part of the National Assessment of Storm-Induced Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On June 9, 2011, the USGS conducted an oblique aerial photographic survey from Dauphin Island, Alabama, to Breton Island, Louisiana, aboard a Beechcraft BE90 King Air (aircraft) at an altitude of 500 feet (ft) (152 meters (m)) and approximately 1,200 ft (366 m) offshore. This mission was conducted to collect baseline data for assessing incremental changes in the beach and nearshore area and can be used to assess future coastal change. The photographs in this report are Joint Photographic Experts Group (JPEG) images. These photographs document the state of the barrier islands and other coastal features at the time of the survey.
Efficient transmission of compressed data for remote volume visualization.
Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S
2006-09-01
One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, insofar as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.
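The server-side reordering can be illustrated with a toy prioritization: packets are sorted so that coarse resolution levels and data near the client's VOI are delivered first. The dictionary fields below are hypothetical stand-ins for JPEG2000 packet indexing information, not the JPIP wire format:

```python
def prioritize_packets(packets, voi_center):
    """Reorder compressed packets so that low-resolution data and data
    near the client's volume of interest (VOI) are transmitted first."""
    def key(p):
        # Chebyshev distance of the packet's spatial block from the VOI centre
        dist = max(abs(a - b) for a, b in zip(p["position"], voi_center))
        return (p["resolution_level"], dist, p["quality_layer"])
    return sorted(packets, key=key)
```

Because the sort key is built per request, the same compressed file can be reordered on the fly for a different VOI without re-encoding, which is the point made in the abstract.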
Energy and Quality-Aware Multimedia Signal Processing
NASA Astrophysics Data System (ADS)
Emre, Yunus
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low-frequency subband coefficients and smaller values for high-frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute differences (SAD) scheme that is based on most-significant-bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected.
Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
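The most-significant-bit truncation idea behind the SAD scheme above can be sketched in software: because most absolute differences are small, dropping high-order bits (modelled here by masking each absolute difference to its low bits) usually leaves the result exact, while permitting narrower, lower-energy adders in hardware. This is an illustrative model, not the dissertation's actual design:

```python
def sad(a, b):
    """Exact sum of absolute differences between two pixel blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def truncated_sad(a, b, keep_bits=4):
    """SAD with the most-significant bits of each absolute difference
    dropped (masking keeps only the low keep_bits bits). Small ADs, the
    common case, are computed exactly; rare large ADs wrap, which seldom
    changes which candidate block wins the comparison."""
    mask = (1 << keep_bits) - 1
    return sum(abs(x - y) & mask for x, y in zip(a, b))
```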
Use of zerotree coding in a high-speed pyramid image multiresolution decomposition
NASA Astrophysics Data System (ADS)
Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo
1995-03-01
A zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to achieve very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations and, as a consequence, can be very easily implemented in VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
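The parent-child pruning that zerotree coding exploits can be sketched as a recursive test on a quadtree pyramid: a zero coefficient whose descendants are all zero can be signalled with a single symbol instead of transmitting the entire branch. A minimal sketch, assuming the pyramid is stored as a list of 2-D levels from coarsest to finest (the data layout is an assumption, not the HSP format):

```python
def is_zerotree_root(pyramid, level, i, j):
    """True if coefficient (i, j) at this pyramid level and all of its
    descendants in finer levels are zero, so the entire branch can be
    replaced by a single zerotree symbol in the bit stream."""
    if pyramid[level][i][j] != 0:
        return False
    if level + 1 == len(pyramid):     # finest level: no children
        return True
    # each coefficient has a 2x2 block of children at the next finer level
    return all(is_zerotree_root(pyramid, level + 1, ci, cj)
               for ci in (2 * i, 2 * i + 1)
               for cj in (2 * j, 2 * j + 1))
```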
NASA Astrophysics Data System (ADS)
Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won
2005-12-01
Embedded systems have been applied to many fields, including households and industrial sites. User interface technology with a simple on-screen display has been implemented more and more widely. User demands are increasing and the systems have more and more fields of application due to the high penetration rate of the Internet; therefore, the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and successive frames from the web camera are compared to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast speed. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. The operating system was ported to an embedded Linux kernel and a root file system was mounted. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program developed on the TCP/IP protocol.
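The displacement-vector measurement via block matching can be sketched as a full search minimizing the sum of absolute differences (SAD) over a small window. This is a generic illustration of the technique, not the authors' code; the frame layout, block size and search range are assumptions:

```python
def block_sad(ref, cur, bx, by, dx, dy, n):
    """SAD between the n x n block at (bx, by) in the current frame and
    the block displaced by (dx, dy) in the reference frame."""
    return sum(abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
               for y in range(n) for x in range(n))

def motion_vector(ref, cur, bx, by, n=8, search=2):
    """Full-search block matching: the displacement (dx, dy) that
    minimizes SAD; this drives the pan/tilt decision."""
    best, best_sad = (0, 0), block_sad(ref, cur, bx, by, 0, 0, n)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            s = block_sad(ref, cur, bx, by, dx, dy, n)
            if s < best_sad:
                best, best_sad = (dx, dy), s
    return best
```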
NASA Astrophysics Data System (ADS)
Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.
2017-07-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
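The quoted optics figures are mutually consistent: the IFOV is the detector pixel pitch divided by the focal length (the KAI-2020's pixels are 7.4 µm on a side, a datasheet value not stated in the abstract), and the full field of view is the IFOV times the pixel count. A quick arithmetic check:

```python
import math

PIXEL_PITCH_M = 7.4e-6   # Kodak KAI-2020 pixel size (datasheet value)

def ifov_mrad(focal_length_m):
    """Instantaneous field of view of one pixel, in milliradians."""
    return PIXEL_PITCH_M / focal_length_m * 1e3

def fov_deg(ifov_mrad_value, n_pixels):
    """Small-angle full field of view across n_pixels, in degrees."""
    return math.degrees(ifov_mrad_value * 1e-3 * n_pixels)

# M-34:  7.4 um / 34 mm  ~ 0.218 mrad -> ~20.5 x 15.0 deg over 1648 x 1200
# M-100: 7.4 um / 100 mm = 0.074 mrad -> ~7.0 x 5.1 deg
```

The results reproduce the stated 0.22 and 0.074 mrad IFOVs and the approximately 20° x 15° and 6.8° x 5.1° fields of view to within rounding.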
Map_plot and bgg_plot: software for integration of geoscience datasets
NASA Astrophysics Data System (ADS)
Gaillot, Philippe; Punongbayan, Jane T.; Rea, Brice
2004-02-01
Since 1985, the Ocean Drilling Program (ODP) has been supporting multidisciplinary research in exploring the structure and history of Earth beneath the oceans. After more than 200 Legs, complementary datasets covering different geological environments, periods and space scales have been obtained and distributed world-wide using the ODP-Janus and Lamont Doherty Earth Observatory-Borehole Research Group (LDEO-BRG) database servers. In Earth Sciences, more than in any other science, the ensemble of these data is characterized by heterogeneous formats and graphical representation modes. In order to fully and quickly assess this information, a set of Unix/Linux and Generic Mapping Tool-based C programs has been designed to convert and integrate datasets acquired during the present ODP and the future Integrated ODP (IODP) Legs. Using ODP Leg 199 datasets, we show examples of the capabilities of the proposed programs. The program map_plot is used to easily display datasets onto 2-D maps. The program bgg_plot (borehole geology and geophysics plot) displays data with respect to depth and/or time. The latter program includes depth shifting, filtering and plotting of core summary information, continuous and discrete-sample core measurements (e.g. physical properties, geochemistry, etc.), in situ continuous logs, magneto- and bio-stratigraphies, specific sedimentological analyses (lithology, grain size, texture, porosity, etc.), as well as core and borehole wall images. Outputs from both programs are initially produced in PostScript format that can be easily converted to Portable Document Format (PDF) or standard image formats (GIF, JPEG, etc.) using widely distributed conversion programs. Based on command line operations and customization of parameter files, these programs can be included in other shell- or database-scripts, automating plotting procedures of data requests. 
As open source software, these programs can be customized and interfaced to fulfill any specific plotting need of geoscientists using ODP-like datasets.
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists.
In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.
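The idea of deriving a DCT quantization matrix from viewing conditions can be illustrated with a toy model: convert each 8x8 DCT basis frequency to cycles per degree using the display resolution and viewing distance, then enlarge the quantization step where contrast sensitivity falls. The exponential sensitivity falloff below is a crude illustrative stand-in, not the Watson/IBM formula described in the abstract:

```python
import math

def dct_frequency_cpd(u, v, dpi, viewing_distance_in):
    """Spatial frequency (cycles/degree) of 8x8 DCT basis (u, v) on a
    display of dpi dots per inch viewed from viewing_distance_in inches."""
    pixels_per_degree = dpi * viewing_distance_in * math.pi / 180.0
    cycles_per_pixel = math.hypot(u, v) / 16.0  # basis u spans u/16 cycles/pixel
    return cycles_per_pixel * pixels_per_degree

def quant_step(u, v, dpi=96, dist_in=24, base=4.0):
    """Toy quantization step: grows where the modelled eye is less
    sensitive. The exponential falloff is a crude stand-in for a real
    contrast sensitivity function."""
    if u == 0 and v == 0:
        return base                      # DC term handled separately in practice
    f = dct_frequency_cpd(u, v, dpi, dist_in)
    sensitivity = math.exp(-0.1 * f)     # sensitivity drops at high frequency
    return base / max(sensitivity, 1e-3)

matrix = [[round(quant_step(u, v)) for u in range(8)] for v in range(8)]
```

Note the behaviour the abstract describes: moving the viewer farther away raises the frequency in cycles/degree of every basis function, so the model coarsens the quantization of high-frequency terms.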
Infrared Images of an Infant Solar System
NASA Astrophysics Data System (ADS)
2002-05-01
ESO Telescopes Detect a Strange-Looking Object Summary Using the ESO 3.5-m New Technology Telescope and the Very Large Telescope (VLT), a team of astronomers [1] have discovered a dusty and opaque disk surrounding a young solar-type star in the outskirts of a dark cloud in the Milky Way. It was found by chance during an unrelated research programme and provides a striking portrait of what our Solar System must have looked like when it was in its early infancy. Because of its striking appearance, the astronomers have nicknamed it the "Flying Saucer". The new object appears to be a perfect example of a very young star with a disk in which planets are forming or will soon form, and located far away from the usual perils of an active star-forming environment. Most other young stars, especially those that are born in dense regions, run a serious risk of having their natal dusty disks destroyed by the blazing radiation of their more massive and hotter siblings in these clusters. The star at the centre of the "Flying Saucer" seems destined to live a long and quiet life at the centre of a planetary system, very much like our own Sun. This contributes to making it a most interesting object for further studies with the VLT and other telescopes. The mass of the observed disk of gas and dust is at least twice that of the planet Jupiter and its radius measures about 45 billion km, or 5 times the size of the orbit of Neptune. PR Photo 12a/02: The "Flying Saucer" object photographed with NTT/SOFI. PR Photo 12b/02: VLT/ISAAC image of this object. PR Photo 12c/02: Enlargement of VLT/ISAAC image. Circumstellar Disks and Planets Planets form in dust disks around young stars. This is a complex process of which not all stages are yet fully understood but it begins when small dust particles collide and stick to each other.
For this reason, observations of such dust disks, in particular those that appear as extended structures (are "resolved"), are very important for our understanding of the formation of solar-type stars and planetary systems from the interstellar medium. However, in most cases the large difference in brightness between the young star and its surrounding material makes it impossible to directly image the circumstellar disk. But when the disk is seen nearly edge-on, the light from the central star will be blocked out by the dust grains in the disk. Other grains below and above the disk midplane scatter the stellar light, producing a typical pattern of a dark lane between two reflection nebulae. The first young stellar object (YSO) found to display this typical pattern, HH 30 IRS in the Taurus dark cloud at a distance of about 500 light-years (140 pc), was imaged by the Hubble Space Telescope (HST) in 1996. Edge-on disks have since also been observed with ground-based telescopes in the near-infrared region of the spectrum, sometimes by means of adaptive optics techniques or speckle imaging, or under very good sky image quality; cf. ESO PR Photo 03d/01 with a VLT image of such an object in the Orion Nebula. A surprise discovery ESO PR Photo 12a/02 [Preview - JPEG: 400 x 459 pix - 55k] [Normal - JPEG: 800 x 918 pix - 352k] Caption: PR Photo 12a/02 shows a three-colour reproduction of the discovery image of the strange-looking object (nicknamed the "Flying Saucer" by the astronomers), obtained with the SOFI multi-mode instrument at the ESO 3.5-m New Technology Telescope (NTT) at the La Silla Observatory. Compared to the unresolved stars in the field, the image of this object appears extended. Two characteristic reflection nebulae are barely visible, together with a marginally resolved dark dust lane in front of the star and oriented East-West. Technical information about the photo is available below.
Last year, a group of astronomers [1] carried out follow-up observations of new X-ray sources found by the ESA XMM-Newton and NASA Chandra X-ray satellites. They were looking at the periphery of the so-called Rho Ophiuchi dark cloud, one of the nearest star-forming regions at a distance of about 500 light-years (140 pc), obtaining images in near-infrared light with the SOFI multi-mode instrument on the 3.5-m New Technology Telescope (NTT) at the ESO La Silla Observatory (Chile). On one of the NTT photos, obtained on April 7, 2001, they discovered by chance a strange object which on closer inspection turned out to be a resolved edge-on circumstellar disk, so far unnoticed and displaying infrared scattered light around a young star. On this photo (PR Photo 12a/02) two characteristic reflection nebulae can barely be seen, flanking a marginally resolved dark dust lane in the East-West direction in front of the star. VLT confirmation ESO PR Photo 12b/02 [Preview - JPEG: 437 x 430 pix - 64k] [Normal - JPEG: 873 x 800 pix - 564k] ESO PR Photo 12c/02 [Preview - JPEG: 400 x 468 pix - 69k] [Normal - JPEG: 800 x 935 pix - 432k] Captions: PR Photo 12b/02 shows the new object, as imaged with the ISAAC multi-mode instrument on the 8.2-m VLT ANTU telescope at Paranal during the follow-up observations. The circumstellar disk is clearly visible in the left part of the field as a shadow in front of the nebula. Many background galaxies are visible in this deep image, and one edge-on galaxy is visible close to the image centre. A close-up of the object is shown in PR Photo 12c/02. Note the reddish aspect of the upper nebula; this phenomenon is not yet fully understood. Technical information about the photos is available below. To confirm this discovery and in order to learn more about the object and the disk, the astronomers obtained additional observations (during "Director's Discretionary Time") with the 8.2-m VLT ANTU telescope.
The observations were carried out in "service mode" by ESO staff, using the near-infrared multi-mode Infrared Spectrometer And Array Camera (ISAAC) - the "father" of the SOFI instrument ("Son OF Isaac"). A series of fine images was obtained on August 15, 2001, under very good observing conditions (with "seeing" of 0.4 arcsec). Now the two reflection nebulae are clearly seen ( PR Photos 12b-c/02 ), and the dark dust lane is well resolved. The leader of the group, Nicolas Grosso , recalls the first impression when seeing the true shape of the object: "That is when we looked at each other and, with one voice, immediately decided to nickname it the `Flying Saucer'!". The nature of the new object Seven young stars in the Rho Ophiuchi star-forming region are known to display similar reflection nebulae surrounding a dark lane (suggesting the presence of a dusty disk), but these objects are all still deeply embedded in the dense cores of this dark cloud. They are mostly protostars with ages of about 100,000 years, surrounded by a remnant infalling envelope. On the other hand, astronomers think that the newly found object has an age of about 1 million years and is in a more evolved stage than those in the neighboring Rho Ophiuchi star-forming region. The new disk is located at the periphery of the dark cloud and is much less obscured than the younger objects still embedded in the dense dark cloud nursery, thus allowing a much clearer view of the dust disk. The resolved circumstellar dust disk in the "Flying Saucer" has a radius of about 300 Astronomical Units (45 billion km), or 5 times the size of the orbit of Neptune (assuming the same distance as the Rho Ophiuchi star-forming cloud, 500 light-years). From model calculations, the astronomers find that it is inclined only about 4° to the line of sight and therefore seen very nearly from the side. A lower limit to the total mass of the disk is about twice the mass of planet Jupiter, or 600-700 times the mass of the Earth. 
A study of the recorded (reflected) light from the optical to the near-infrared indicates that the central young solar-type star has a temperature of about 3000 K and 0.4 times the luminosity of our Sun. A detailed analysis of both reflection nebulae shows an unusual excess of infrared light from the upper nebula, visible in both the NTT and VLT images, which cannot be explained by a simple axisymmetrical model. Future complementary high-resolution observations by the VLT adaptive optics camera NAOS-CONICA will help the astronomers to understand the origin of this puzzling phenomenon, and its possible link to the planet-forming mechanism. Said Nicolas Grosso: "The `Flying Saucer' object presents us with a striking portrait of our Solar System in its early infancy. With this object, Nature has provided us with a perfect laboratory for the study of both dust and gas in young circumstellar disks, the raw material of planets." The next steps As this disk is located at a dark cloud periphery and not embedded in it, follow-up studies at millimetre wavelengths with existing antenna arrays will give a clear view without the complication of unrelated background emission from dark cloud material. These future observations will provide an easy mapping of the gas and dust material around this young solar-type star, and allow a study of the chemical processes at work in this protoplanetary disk. Moreover, current antenna arrays should be able to detect the Keplerian rotation of this disk, providing a direct measurement of the mass of the central star. Computer simulations predict that baby planets produce measurable structural changes in circumstellar disks; however, such signs of planet formation are beyond the sensitivity and spatial resolution of current antenna arrays. The detection of these features is the goal of ALMA, and there is no doubt that this "planet nursery" object will be a prime target for this future array of antennas.
More information The results described in this Press Release have been submitted to the European research journal Astronomy & Astrophysics ("The `Flying Saucer': a new edge-on circumstellar dust disk at the periphery of the rho Ophiuchi dark cloud" by N. Grosso and co-authors). Notes [1]: The team consists of Nicolas Grosso (Max-Planck-Institut für extraterrestrische Physik, Garching, Germany), João Alves (ESO, Garching, Germany), Kenneth Wood (School of Physics & Astronomy, University of St Andrews, Scotland, UK), Ralph Neuhäuser (Max-Planck-Institut für extraterrestrische Physik, Garching, Germany), Thierry Montmerle (Service d'Astrophysique, CEA Saclay, Gif-sur-Yvette, France) and Jon E. Bjorkman (Ritter Observatory, Department of Physics & Astronomy, University of Toledo, Ohio, USA).
Immunochromatographic diagnostic test analysis using Google Glass.
Feng, Steve; Caire, Romain; Cortazar, Bingen; Turan, Mehmet; Wong, Andrew; Ozcan, Aydogan
2014-03-25
We demonstrate a Google Glass-based rapid diagnostic test (RDT) reader platform capable of qualitative and quantitative measurements of various lateral flow immunochromatographic assays and similar biomedical diagnostics tests. Using a custom-written Glass application and without any external hardware attachments, one or more RDTs labeled with Quick Response (QR) code identifiers are simultaneously imaged using the built-in camera of the Google Glass through its hands-free, voice-controlled interface, and the images are digitally transmitted to a server for processing. The acquired JPEG images are automatically processed to locate all the RDTs and, for each RDT, to produce a quantitative diagnostic result, which is returned to the Google Glass (i.e., the user) and also stored on a central server along with the RDT image, QR code, and other related information (e.g., demographic data). The same server also provides a dynamic spatiotemporal map and real-time statistics for uploaded RDT results accessible through Internet browsers. We tested this Google Glass-based diagnostic platform using qualitative (i.e., yes/no) human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) tests. For the quantitative RDTs, we measured activated tests at various concentrations ranging from 0 to 200 ng/mL for free and total PSA. This wearable RDT reader platform running on Google Glass combines a hands-free sensing and image capture interface with powerful servers running our custom image processing codes, and it can be quite useful for real-time spatiotemporal tracking of various diseases and personal medical conditions, providing a valuable tool for epidemiology and mobile health.
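The quantification step described above — locating a test strip and turning line intensities into a number — can be sketched as follows. This is an illustrative assumption, not the authors' actual implementation: the function name, the 1-D intensity-profile input, and the normalisation by the control line are all hypothetical choices.

```python
def quantify_rdt(profile, control_span, test_span):
    """Estimate a test-line signal from a 1-D intensity profile sampled
    along an RDT strip (darker pixels = more captured analyte).

    profile      -- list of grayscale values (0-255) along the strip
    control_span -- (start, end) indices of the control line
    test_span    -- (start, end) indices of the test line
    """
    background = sum(profile) / len(profile)  # crude background level

    def band_signal(span):
        start, end = span
        # signal = how far the darkest pixel of the band dips below background
        return max(0.0, background - min(profile[start:end]))

    control = band_signal(control_span)
    # normalising by the control line reduces lighting/exposure variation
    return band_signal(test_span) / control if control else 0.0


# Synthetic strip: uniform background with two darker bands.
strip = [200] * 50
strip[10:15] = [100] * 5   # strong control line
strip[30:35] = [150] * 5   # weaker test line
ratio = quantify_rdt(strip, (10, 15), (30, 35))
```

The control-line normalisation is one plausible way to make readings comparable across lighting conditions; the published platform may well use a different calibration.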
"First Light" for HARPS at La Silla
NASA Astrophysics Data System (ADS)
2003-03-01
"First Light" for HARPS at La Silla Advanced Planet-Hunting Spectrograph Passes First Tests With Flying Colours Summary The initial commissioning period of the new HARPS spectrograph (High Accuracy Radial Velocity Planet Searcher) of the 3.6-m telescope at the ESO La Silla Observatory was successfully accomplished in the period February 11-27, 2003. This new instrument is optimized to detect planets in orbit around other stars ("exoplanets") by means of accurate (radial) velocity measurements with an unequalled precision of 1 meter per second. This high sensitivity makes it possible to detect variations in the motion of a star at this level, caused by the gravitational pull of one or more orbiting planets, even relatively small ones. "First Light" occurred on February 11, 2003, during the first night of tests. The instrument worked flawlessly and was fine-tuned during subsequent nights, achieving the predicted performance already during this first test run. The measurement of accurate stellar radial velocities is a very efficient way to search for planets around other stars. More than one hundred extrasolar planets have so far been detected, providing an increasingly clear picture of a great diversity of exoplanetary systems. However, current technical limitations have so far prevented the discovery around solar-type stars of exoplanets that are much less massive than Saturn, the second-largest planet in the solar system. HARPS will break through this barrier and will carry this fundamental exploration towards the detection of exoplanets with masses like Uranus and Neptune. Moreover, in the case of low-mass stars - like Proxima Centauri, cf. ESO PR 05/03 - HARPS will have the unique capability to detect big "telluric" planets with only a few times the mass of the Earth. The HARPS instrument will be offered to the research community in the ESO member countries from October 2003.
PR Photo 08a/03: The large optical grating of the HARPS spectrograph. PR Photo 08b/03: The HARPS spectrograph. PR Photo 08c/03: HARPS spectrum of the star HD100623 ("raw"). PR Photo 08d/03: Extracted spectral tracing of the star HD100623. PR Photo 08e/03: Measured stability of HARPS. The HARPS Spectrograph ESO PR Photo 08a/03 [Preview - JPEG: 449 x 400 pix - 58k] [Normal - JPEG: 897 x 800 pix - 616k] [Full-Res - JPEG: 1374 x 1226 pix - 1.3M] ESO PR Photo 08b/03 [Preview - JPEG: 500 x 400 pix - 83k] [Normal - JPEG: 999 x 800 pix - 727k] [Full-Res - JPEG: 1600 x 1281 pix - 1.3M] Captions: PR Photo 08a/03 and PR Photo 08b/03 show the HARPS spectrograph during laboratory tests. The vacuum tank is open so that some of the high-precision components inside can be seen. On PR Photo 08a/03, the large optical grating by which the incoming stellar light is dispersed is visible on the top of the bench; it measures 200 x 800 mm. HARPS is a unique fiber-fed "echelle" spectrograph able to record at once the visible range of a stellar spectrum (wavelengths from 380-690 nm) with very high spectral resolving power (better than R = 100,000). Any light losses inside the instrument caused by reflections of the starlight in the various optical components (mirrors and gratings) have been minimised, and HARPS therefore works very efficiently. First observations ESO PR Photo 08c/03 [Preview - JPEG: 400 x 490 pix - 52k] [Normal - JPEG: 800 x 980 pix - 362k] [Full-Res - JPEG: 1976 x 1195 pix - 354k] ESO PR Photo 08d/03 [Preview - JPEG: 485 x 400 pix - 53k] [Normal - JPEG: 969 x 800 pix - 160k] Captions: PR Photo 08c/03 displays a HARPS untreated ("raw") exposure of the star HD100623, of the comparatively cool stellar spectral type K0V. The frame shows the complete image as recorded with the 4000 x 4000 pixel CCD detector in the focal plane of the spectrograph.
The horizontal white lines correspond to the stellar spectrum, divided into 70 adjacent spectral bands which together cover the entire visible wavelength range from 380 to 690 nm. Some of the stellar absorption lines are seen as dark horizontal features; they are the spectral signatures of various chemical elements in the star's upper layers ("atmosphere"). Bright emission lines from the heavy element thorium are visible between the bands - they are produced by a lamp in the spectrograph and are used to calibrate the wavelength scale. This allows any instrumental drift to be measured, thereby guaranteeing the exceedingly high precision that distinguishes HARPS. PR Photo 08d/03 displays a small part of the spectrum of the star HD100623 following on-line data extraction (in astronomical terminology: "reduction") of the raw frame shown in PR Photo 08c/03. Several deep absorption lines are clearly visible. During the first commissioning period in February 2003, the high efficiency of HARPS was clearly demonstrated by observations of a G6V-type star of magnitude 8. This star is similar to, but slightly less massive than, our Sun and about 5 times fainter than the faintest stars visible with the unaided eye. During an exposure lasting only one minute, a signal-to-noise ratio (S/N) of 45 per pixel was achieved - enough to determine the star's radial velocity with an uncertainty of only ~1 m/s! For comparison, the velocity of a briskly walking person is about 2 m/s. A main performance goal of the HARPS instrument has therefore been reached, already at this early stage. This result also demonstrates an impressive gain in efficiency of no less than about 75 times compared to that achievable with its predecessor CORALIE. That instrument has been operating very successfully at the 1.2-m Swiss Leonard Euler telescope at La Silla and has discovered several exoplanets during the past years; see for instance ESO Press Releases PR 18/98, PR 13/00 and PR 07/01.
In practice, this means that this new planet searcher at La Silla can now investigate many more stars in a given observing time, and consequently with a much increased probability of success. Extraordinary stability ESO PR Photo 08e/03 [Preview - JPEG: 478 x 400 pix - 38k] [Normal - JPEG: 955 x 800 pix - 111k] Caption: PR Photo 08e/03 is a powerful demonstration of the extraordinary stability of the HARPS spectrograph. It plots the instrumentally induced velocity change, as measured during one night (9 consecutive hours) in the commissioning period. The drift of the instrument is determined by computing the exact position of the thorium emission lines. As can be seen, the drift is of the order of 1 m/s over 9 hours and is measured with an accuracy of 20 cm/s. The goal of measuring velocities of stars with a precision comparable to the walking speed of a pedestrian has required extraordinary efforts in the design and construction of this instrument. Indeed, HARPS is the most stable spectrograph ever built for astronomical applications. A crucial measure in this respect is the location of the HARPS spectrograph in a climatized room in the telescope building. The starlight captured by the 3.6-m telescope is guided to the instrument through a very efficient optical fibre from the telescope's Cassegrain focus. Moreover, the spectrograph is placed inside a vacuum tank to reduce to a minimum any movement of the sensitive optical elements caused by changes in pressure and temperature. The temperature of the critical components of HARPS itself is kept very stable, with less than 0.005 degree variation, and the spectrum therefore drifts by less than 2 m/s per night. This is a very small value - 1 m/s corresponds to a displacement of the stellar spectrum on the CCD detector by about 1/1000 the size of one CCD pixel, which is equivalent to 15 nm or only about 150 silicon atoms!
This drift is continuously measured, with an accuracy of 20 cm/s, by means of a thorium spectrum which is simultaneously recorded on the detector. PR Photo 08e/03 illustrates two fundamental points: HARPS performs with an overall stability never before reached by any other astronomical spectrograph, and it is possible to measure any nightly drift with an accuracy never achieved before [1]. During this first commissioning period in February 2003, all instrument functions were tested, as well as the complete data-flow system, both hardware and software. Already during the second test night, the data-reduction pipeline was used to obtain the extracted and wavelength-calibrated spectra in a completely automatic way. The first spectra obtained with HARPS will now allow the construction of templates needed to compute the radial velocities of different types of stars with the best efficiency. The second commissioning period in June will then be used to achieve the optimal performance of this new, very powerful instrument. Astronomers in the ESO community will have the opportunity to observe with HARPS from October 1, 2003. Other research opportunities opening This superb radial velocity machine will also play an important role in the study of stellar interiors by asteroseismology. Oscillation modes were recently discovered in the nearby solar-type star Alpha Centauri A from precise radial velocity measurements carried out with CORALIE (see ESO PR 15/01). HARPS is able to carry out similar measurements on fainter stars, thus reaching a much wider range of masses, spectral characteristics and ages. Michel Mayor, Director of the Geneva Observatory and co-discoverer of the first known exoplanet, is confident: "With HARPS operating so well already during the first test nights, there is every reason to believe that we shall soon see some breakthroughs in this field also".
The HARPS Consortium HARPS has been designed and built by an international consortium of research institutes, led by the Observatoire de Genève (Switzerland) and including the Observatoire de Haute-Provence (France), the Physikalisches Institut der Universität Bern (Switzerland), the Service d'Aeronomie (CNRS, France), as well as ESO La Silla and ESO Garching. The HARPS consortium has been granted 100 observing nights per year during a 5-year period at the ESO 3.6-m telescope to perform what promises to be the most ambitious systematic search for exoplanets so far implemented worldwide. The project team is directed by Michel Mayor (Principal Investigator), Didier Queloz (Mission Scientist), Francesco Pepe (Consortium Project Manager) and Gero Rupprecht (ESO representative).
Energy efficiency of task allocation for embedded JPEG systems.
Fan, Yang-Hsin; Wu, Jan-Ou; Wang, San-Fu
2014-01-01
Embedded systems are used everywhere to perform a few particular functions repeatedly. Well-known products include consumer electronics, smart-home applications, telematics devices, and so forth. Recently, embedded-system development methodology has also been applied to the design of cloud embedded systems, making the applications of embedded systems more diverse. However, the more an embedded system works, the more energy it consumes. This study presents hyperrectangle technology (HT) for embedded systems to obtain energy savings. HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components, and it is of great benefit in quickly exploring the energy consumption of various embedded systems. The effects are demonstrated by assessing JPEG benchmarks. Experimental results demonstrate that HT achieves average energy savings of 29.84%, 2.07%, and 68.80% compared to GA, GHO, and Lin, respectively.
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology, and makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression, with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
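As a rough illustration of the correlation ingredient mentioned in this abstract, a coder can screen for frames that are nearly identical to their predecessor and are therefore cheap to encode. This sketch shows only the Pearson-correlation screening step; the morphology stage, the use of the PSF, and all names here are assumptions, not the paper's actual algorithm.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length pixel vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def redundant_frames(frames, threshold=0.99):
    """Indices of frames whose pixels correlate almost perfectly with the
    previous frame -- candidates for cheap inter-frame coding."""
    return [i for i in range(1, len(frames))
            if pearson(frames[i - 1], frames[i]) >= threshold]

frames = [
    [10, 20, 30, 40],   # frame 0
    [10, 20, 30, 40],   # identical repeat -> flagged as redundant
    [40, 30, 20, 10],   # strong change -> kept
]
```

In practice each "frame" would be a flattened pixel array, and the threshold would be tuned so that flagged frames carry no scientific content the downstream analysis needs.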
NICMOS PEERS INTO HEART OF DYING STAR
NASA Technical Reports Server (NTRS)
2002-01-01
The Egg Nebula, also known as CRL 2688, is shown on the left as it appears in visible light with the Hubble Space Telescope's Wide Field and Planetary Camera 2 (WFPC2) and on the right as it appears in infrared light with Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Since infrared light is invisible to humans, the NICMOS image has been assigned colors to distinguish different wavelengths: blue corresponds to starlight reflected by dust particles, and red corresponds to heat radiation emitted by hot molecular hydrogen. Objects like the Egg Nebula are helping astronomers understand how stars like our Sun expel carbon and nitrogen -- elements crucial for life -- into space. Studies on the Egg Nebula show that these dying stars eject matter at high speeds along a preferred axis and may even have multiple jet-like outflows. The signature of the collision between this fast-moving material and the slower outflowing shells is the glow of hydrogen molecules captured in the NICMOS image. The distance between the tips of the jets is approximately 200 times the diameter of our solar system (out to Pluto's orbit). Credits: Rodger Thompson, Marcia Rieke, Glenn Schneider, Dean Hines (University of Arizona); Raghvendra Sahai (Jet Propulsion Laboratory); NICMOS Instrument Definition Team; and NASA. Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from ftp.stsci.edu in /pubinfo.
Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.
Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin
2005-03-01
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ≈ c0·2^(-c1·R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
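The contrast between the exponentially decaying R-D behavior claimed here and the slower polynomial decay typical of classic wavelet coders can be seen numerically. The constants below are purely illustrative, not taken from the paper:

```python
def tree_distortion(R, c0=1.0, c1=0.5):
    """Exponentially decaying R-D behavior: D(R) = c0 * 2**(-c1 * R)."""
    return c0 * 2 ** (-c1 * R)

def wavelet_distortion(R, c=1.0, gamma=2.0):
    """Polynomially decaying R-D behavior, D(R) = c * R**(-gamma),
    typical of classic wavelet coders on piecewise polynomial signals."""
    return c * R ** (-gamma)

# With these illustrative constants the curves cross: at low rates the
# polynomial law can be smaller, but beyond the crossover the exponential
# law wins by ever-growing orders of magnitude as R increases.
low_rate = (tree_distortion(10), wavelet_distortion(10))
high_rate = (tree_distortion(40), wavelet_distortion(40))
```

This is exactly why an exponential D(R) is the asymptotically "correct" behavior for piecewise polynomials: any fixed polynomial decay is eventually overtaken.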
Determining the Completeness of the Nimbus Meteorological Data Archive
NASA Technical Reports Server (NTRS)
Johnson, James; Moses, John; Kempler, Steven; Zamkoff, Emily; Al-Jazrawi, Atheer; Gerasimov, Irina; Trivedi, Bhagirath
2011-01-01
NASA launched the Nimbus series of meteorological satellites in the 1960s and 70s. These satellites carried instruments for making observations of the Earth in the visible, infrared, ultraviolet, and microwave wavelengths. The original data archive consisted of a combination of digital data written to 7-track computer tapes and on various film media. Many of these data sets are now being migrated from the old media to the GES DISC modern online archive. The process involves recovering the digital data files from tape as well as scanning images of the data from film strips. Some of the challenges of archiving the Nimbus data include the lack of any metadata from these old data sets. Metadata standards and self-describing data files did not exist at that time, and files were written on now obsolete hardware systems and outdated file formats. This requires creating metadata by reading the contents of the old data files. Some digital data files were corrupted over time, or were possibly improperly copied at the time of creation. Thus there are data gaps in the collections. The film strips were stored in boxes and are now being scanned as JPEG-2000 images. The only information describing these images is what was written on them when they were originally created, and sometimes this information is incomplete or missing. We have the ability to cross-reference the scanned images against the digital data files to determine which of these best represents the data set from the various missions, or to see how complete the data sets are. In this presentation we compared data files and scanned images from the Nimbus-2 High-Resolution Infrared Radiometer (HRIR) for September 1966 to determine whether the data and images are properly archived with correct metadata.
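The cross-referencing described above reduces, in essence, to set operations over whatever identifiers the two media share. The sketch below uses hypothetical orbit numbers as the comparison key; the abstract does not specify the actual keys or any of these function names.

```python
def archive_completeness(digital_orbits, film_orbits):
    """Cross-reference orbits recovered from the digital tape files against
    orbits scanned from film, to see which medium best represents each part
    of the record and where the data gaps are."""
    digital, film = set(digital_orbits), set(film_orbits)
    return {
        "both": sorted(digital & film),          # independently confirmed
        "digital_only": sorted(digital - film),  # film lost or not yet scanned
        "film_only": sorted(film - digital),     # tape corrupted or missing
        "coverage": len(digital | film),         # total distinct orbits held
    }

# Example: orbit 101 survives only on tape, 103 only on film.
report = archive_completeness([101, 102, 104], [102, 103, 104])
```

A report like this makes it straightforward to decide, per orbit, whether the digital file or the scanned JPEG-2000 image should be treated as the archival copy.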
An interactive toolbox for atlas-based segmentation and coding of volumetric images
NASA Astrophysics Data System (ADS)
Menegaz, G.; Luti, S.; Duay, V.; Thiran, J.-Ph.
2007-03-01
Medical imaging poses the great challenge of having compression algorithms that are lossless for diagnostic and legal reasons and yet provide high compression rates for reduced storage and transmission time. The images usually consist of a region of interest representing the part of the body under investigation surrounded by a "background", which is often noisy and not of diagnostic interest. In this paper, we propose a ROI-based 3D coding system integrating both the segmentation and the compression tools. The ROI is extracted by an atlas-based 3D segmentation method combining active contours with information-theoretic principles, and the resulting segmentation map is exploited for ROI-based coding. The system is equipped with a GUI allowing the medical doctors to supervise the segmentation process and, if needed, reshape the detected contours at any point. The process is initiated by the user through the selection of either one pre-defined reference image or one image of the volume to be used as the 2D "atlas". The object contour is successively propagated from one frame to the next, where it is used as the initial border estimation. In this way, the entire volume is segmented based on a unique 2D atlas. The resulting 3D segmentation map is exploited for adaptive coding of the different image regions. Two coding systems were considered: the JPEG3D standard and 3D-SPIHT. The evaluation of the performance with respect to both segmentation and coding proved the high potential of the proposed system in providing an integrated, low-cost and computationally effective solution for CAD and PACS systems.
DICOM image integration into an electronic medical record using thin viewing clients
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Langer, Steven G.; Taira, Ricky K.
1998-07-01
Purpose -- To integrate radiological DICOM images into our currently existing web-browsable Electronic Medical Record (MINDscape). Over the last five years the University of Washington has created a clinical data repository (MIND), a distributed relational database combining information from multiple departmental databases. A text-based view of this data, called the Mini Medical Record (MMR), has been available for three years. MINDscape, unlike the text-based MMR, provides a platform-independent, web-browser view of the MIND dataset that can easily be linked to other information resources on the network. We have now added the integration of radiological images into MINDscape through a DICOM webserver. Methods/New Work -- We have integrated a commercial webserver that acts as a DICOM Storage Class Provider to our computed radiography (CR), computed tomography (CT), digital fluoroscopy (DF), magnetic resonance (MR) and ultrasound (US) scanning devices. These images can be accessed through CGI queries or by linking the image server database using ODBC or SQL gateways. This allows the use of dynamic HTML links from MINDscape to the images on the DICOM webserver, so that the radiology reports already resident in the MIND repository can be married with the associated images through the unique examination accession number generated by our Radiology Information System (RIS). The web-browser plug-in used provides a wavelet decompression engine (up to 16 bits per pixel) and performs the following image-manipulation functions: window/level, flip, invert, sort, rotate, zoom, cine-loop and save as JPEG. Results -- Radiological DICOM image sets (CR, CT, MR and US) are displayed with associated exam reports for referring physicians and clinicians anywhere within the widespread academic medical center on PCs, Macs, X-terminals and Unix computers. This system is also being used for home teleradiology applications.
Conclusion -- Radiological DICOM images can quickly be made available medical-center-wide to physicians using low-cost, ubiquitous thin-client browsing technology and wavelet compression.
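The dynamic-link mechanism described above keys each report to its images through the RIS accession number. A hedged sketch of how such a CGI link might be constructed follows; the host name and query parameter names are invented for illustration, not the actual MINDscape interface.

```python
# Hypothetical sketch: build a CGI query against a DICOM webserver from a
# RIS accession number, as MINDscape's dynamic HTML links do. The endpoint
# and parameter names here are assumptions, not the real system's API.
from urllib.parse import urlencode

def image_link(accession_number, modality=None):
    base = "https://dicomweb.example.edu/cgi-bin/query"  # assumed endpoint
    params = {"accession": accession_number}
    if modality:
        params["modality"] = modality  # e.g. CR, CT, MR, US
    return f"{base}?{urlencode(params)}"

print(image_link("A1234567", modality="CT"))
```

Because the accession number is generated by the RIS and stored with the report, a template like this lets report pages link to images with no per-exam configuration.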
Low-altitude aerial color digital photographic survey of the San Andreas Fault
Lynch, David K.; Hudnut, Kenneth W.; Dearborn, David S.P.
2010-01-01
Ever since 1858, when Gaspard-Félix Tournachon (pen name Félix Nadar) took the first aerial photograph (Professional Aerial Photographers Association 2009), the scientific value and popular appeal of such pictures have been widely recognized. Indeed, Nadar patented the idea of using aerial photographs in mapmaking and surveying. Since then, aerial imagery has flourished, eventually making the leap to space and to wavelengths outside the visible range. Yet until recently, the availability of such surveys has been limited to technical organizations with significant resources. Geolocation required extensive time and equipment, and distribution was costly and slow. While these situations still plague older surveys, modern digital photography and lidar systems acquire well-calibrated and easily shared imagery, although expensive, platform-specific software is sometimes still needed to manage and analyze the data. With current consumer-level electronics (cameras and computers) and broadband internet access, acquisition and distribution of large imaging data sets are now possible for virtually anyone. In this paper we demonstrate a simple, low-cost means of obtaining useful aerial imagery by reporting two new, high-resolution, low-cost, color digital photographic surveys of selected portions of the San Andreas fault in California. All pictures are in standard JPEG format. The first set of imagery covers a 92-km-long section of the fault in Kern and San Luis Obispo counties and includes the entire Carrizo Plain. The second covers the region from Lake of the Woods to Cajon Pass in Kern, Los Angeles, and San Bernardino counties (151 km) and includes Lone Pine Canyon soon after the ground was largely denuded by the Sheep Fire of October 2009.
The first survey produced a total of 1,454 oblique digital photographs (4,288 x 2,848 pixels, averaging about 6 MB each) and the second produced 3,762 nadir images from an elevation of approximately 150 m above ground level (AGL) on the southeast leg and 300 m AGL on the northwest leg. Spatial resolution (pixel size or ground sample distance) is a few centimeters. Time and geographic coordinates of the aircraft were automatically written into the exchangeable image file format (EXIF) data within each JPEG photograph. A few hours after acquisition and validation, the photographs were uploaded to a publicly accessible Web page. The goal was to obtain quick-turnaround, low-cost, high-resolution, overlapping, and contiguous imagery for use in planning field operations, and to provide imagery for a wide variety of land use and educational studies. This work was carried out in support of ongoing geological research on the San Andreas fault, but the technique is widely applicable beyond geology.
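The quoted spatial resolution ("a few centimeters" from 150 m AGL) follows from simple similar-triangle geometry. The sketch below checks the figure; the focal length and pixel pitch are assumed values for a typical DSLR of that era, not parameters given in the paper.

```python
# Back-of-envelope ground-sample-distance (GSD) check for the survey
# figures quoted above. Camera parameters are illustrative assumptions.
def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """GSD (m/pixel) = altitude * pixel_pitch / focal_length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# 150 m AGL, 50 mm lens, 5.5 um pixels -> a few centimetres per pixel,
# consistent with the resolution reported for the nadir imagery.
gsd = ground_sample_distance(150, 50, 5.5)
print(round(gsd * 100, 1), "cm/pixel")
```

Doubling the altitude to the 300 m flown on the northwest leg doubles the GSD, which is why that leg's imagery is correspondingly coarser.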
NASA Astrophysics Data System (ADS)
2005-10-01
Near-infrared images of the active galaxy NGC 1097, obtained with the NACO adaptive optics instrument on ESO's Very Large Telescope, disclose with unprecedented detail a complex central network of filamentary structure spiralling down to the centre of the galaxy. These observations provide astronomers with new insights on how super-massive black holes lurking inside galaxies get fed. "This is possibly the first time that a detailed view of the channelling process of matter, from the main part of the galaxy down to the very end in the nucleus is released," says Almudena Prieto (Max-Planck Institute, Heidelberg, Germany), lead author of the paper describing these results. Located at a distance of about 45 million light-years in the southern constellation Fornax (the Furnace), NGC 1097 is a relatively bright, barred spiral galaxy seen face-on. At magnitude 9.5, and thus just 25 times fainter than the faintest object that can be seen with the unaided eye, it appears in small telescopes as a bright, circular disc. NGC 1097 is a very moderate example of an Active Galactic Nucleus (AGN), whose emission is thought to arise from matter (gas and stars) falling into oblivion in a central black hole. However, NGC 1097 possesses only a comparatively faint nucleus, and the black hole in its centre must be on a very strict "diet": only a small amount of gas and stars is apparently being swallowed by the black hole at any given moment. Astronomers have long been trying to understand how the matter is "gulped" down towards the black hole. Directly watching the feeding process requires very high spatial resolution at the centres of galaxies. This can be achieved by means of interferometry, as was done with the VLTI MIDI instrument on the central parts of another AGN, NGC 1068 (see ESO PR 17/03), or with adaptive optics [1]. Thus, astronomers [2] obtained images of NGC 1097 with the adaptive optics NACO instrument attached to Yepun, the fourth Unit Telescope of ESO's VLT.
These new images probe with unprecedented detail the presence and extent of material in the very proximity of the nucleus. The resolution achieved with the images is about 0.15 arcsecond, corresponding to about 30 light-years across. For comparison, this is only 8 times the distance between the Sun and its nearest star, Proxima Centauri. ESO PR Photo 33b/05: Filamentary Structures in NGC 1097 [Preview - JPEG: 400 x 570 pix - 275k] [Normal - JPEG: 800 x 1140 pix - 900k] [Full Res - JPEG: 1422 x 2026 pix - 2.6M] Caption: ESO PR Photo 33b/05: The left image shows the same central region as imaged in PR Photo 33a/05 but this time as seen in the J-Ks colour. It clearly shows the nucleus, the central spiral arms extending up to 1,300 light-years from the centre, and the star-forming ring. The right image shows the same but after a masking process has been applied to suppress the central stellar light of the galaxy. The central spiral arms are now seen as dark channels, some extending up to the star-forming ring. North is up and East is to the left. As can be seen in last year's image (see ESO PR Photo 35d/04), NGC 1097 has a very strong bar and a prominent star-forming ring inside it. Interior to the ring, a secondary bar crosses the nucleus almost perpendicular to the primary bar. The newly released NACO near-infrared images show in addition more than 300 star-forming regions, a factor of four more than previously known from Hubble Space Telescope images. These "HII regions" can be seen as white spots in ESO PR Photo 33a/05. At the centre of the ring, a moderately active nucleus is located. Details from the nucleus and its immediate surroundings are however outshone by the overwhelming stellar light of the galaxy, seen as the bright diffuse emission all over the image. The astronomers therefore applied a masking technique that allowed them to suppress the stellar light (see ESO PR Photo 33b/05).
This unveils a bright nucleus at the centre, but mostly a complex central network of filamentary structures spiralling down to the centre. "Our analysis of the VLT/NACO images of NGC 1097 shows that these filaments end up at the very centre of the galaxy", says co-author Juha Reunanen from ESO. "This network closely resembles those seen in computer models", adds co-worker Witold Maciejewski from the University of Oxford, UK. "The nuclear filaments revealed in the NACO images are the tracers of cold dust and gas being channelled towards the centre to eventually ignite the AGN." The astronomers also note that the curling of the spiral pattern in the innermost 300 light-years seems indeed to confirm the presence of a super-massive black hole in the centre of NGC 1097. Such a black hole in the centre of a galaxy causes the nuclear spiral to wind up as it approaches the centre, while in its absence the spiral would be unwinding as it moves closer to the centre. An image of NGC 1097 and its small companion, NGC 1097A, was taken in December 2004, in the presence of Chilean President Lagos, with the VIMOS instrument on ESO's Very Large Telescope (VLT). It is available as ESO PR Photo 35d/04. More information This ESO Press Photo is based on research published in the October issue of the Astronomical Journal, vol. 130, p. 1472 ("Feeding the Monster: The Nucleus of NGC 1097 at Subarcsecond Scales in the Infrared with the Very Large Telescope", by M. Almudena Prieto, Witold Maciejewski, and Juha Reunanen).
Chandra Observatory Uncovers Hot Stars In The Making
NASA Astrophysics Data System (ADS)
2000-11-01
Cambridge, Mass.--In resolving the hot core of one of the Earth's closest and most massive star-forming regions, the Chandra X-ray Observatory showed that almost all the young stars' temperatures are more extreme than expected. Orion Trapezium JPEG, TIFF, PS The Orion Trapezium as observed on October 31st UT 05:47:21 1999. The colors represent energy, where blue and white indicate very high energies and therefore extreme temperatures. The size of the X-ray source in the image also reflects its brightness, i.e. brighter sources appear larger in size. This is an artifact caused by the limiting blur of the telescope optics. The projected diameter of the field of view is about 80 light days. Credit: NASA/MIT Orion Trapezium JPEG, TIFF, PS The Orion Trapezium as observed on November 24th UT 05:37:54 1999. The colors represent energy, where blue and white indicate very high energies and therefore extreme temperatures. The size of the X-ray source in the image also reflects its brightness, i.e. brighter sources appear larger in size. This is an artifact caused by the limiting blur of the telescope optics. The projected diameter of the field of view is about 80 light days. Credit: NASA/MIT The Orion Trapezium Cluster, only a few hundred thousand years old, offers a prime view into a stellar nursery. Its X-ray sources detected by Chandra include several externally illuminated protoplanetary disks ("proplyds") and several very massive stars, which burn so fast that they will die before the low-mass stars even fully mature. One of the major highlights of the Chandra observations is the identification of proplyds as X-ray point sources in the near vicinity of the most massive star in the Trapezium. Previous observations did not have the ability to separate the contributions of the different objects.
"We've seen high temperatures in stars before, but what clearly surprised us was that nearly all the stars we see appear at rather extreme temperatures in X-rays, independent of their type," said Norbert S. Schulz, MIT research scientist at the Chandra X-ray Center, who leads the Orion Project. "And by extreme, we mean temperatures which are in some cases well above 60 million degrees." The hottest massive star known so far has been around 25 million degrees. The great Orion Nebula harbors the Orion Nebula Cluster (ONC), a loose association of around 2,000 mostly very young stars of a wide range of mass confined within a radius of less than 10 light years. The Orion Trapezium Cluster is a younger subgroup of stars at the core of the ONC confined within a radius of about 1.5 light years. Its median age is around 300,000 years. The constant bright light of the Trapezium and its surrounding stars at the heart of the Orion nebula (M42) is visible to the naked eye on clear nights. In X-rays, these young stars are constantly active and changing in brightness, sometimes within half a day, sometimes over weeks. "Never before Chandra have we seen images of stellar activity with such brilliance," said Joel Kastner, professor at the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology. "Here the combination of very high angular resolution, with high quality spectra that Chandra offers, clearly pays off." The observation was performed using the High Energy Transmission Grating Spectrometer (HETGS) and the X-ray spectra were recorded with the spectroscopic array of the Advanced CCD Imaging Spectrometer (ACIS). The ACIS detector is a sophisticated version of the CCD detectors commonly used in video cameras or digital cameras. The Orion stars are so bright in X-rays that they easily saturate the CCDs. Here the team used the gratings as a blocking filter.
Orion Trapezium - X-ray & Optical JPEG, TIFF, PS X-ray contours of the Chandra observation overlaid onto the optical Hubble image (courtesy of J. Bally, CASA Colorado). The field of view is 30"x30". Besides the bright main Trapezium stars, which were found to be extremely hot massive stars, several externally illuminated objects are also X-ray emitters. Some of them have temperatures of up to 100 million degrees. The ones that do not show X-ray contours are probably too faint to be detected in these particular Chandra observations. Credit: J. Bally, CASA Colorado It is generally assumed that low-mass stars like our Sun, when they are young, are more than 1,000 times more luminous in X-rays than the present-day Sun. The X-ray emission here is thought to arise from magnetic activity in connection with stellar rotation. Consequently, high temperatures would be observed in very violent and giant flares. Here temperatures as high as 60 million degrees have been observed in very few cases. The absence of many strong flares in the light curves, as well as temperatures in the Chandra ACIS spectra which exceed those of giant flares, could mean that these sources are either young protostars (i.e., stars in the making) or a special class of more evolved, hot young stars. Schulz concedes that although astronomers have gathered many clues in recent years about the X-ray behavior of very young stellar objects, "we are far from being able to uniquely classify evolutionary stages of their X-ray emission." The five main young and massive Trapezium stars are responsible for the illumination of the entire Orion Nebula. These stars are born with masses 15 to 30 times larger than the mass of our Sun. X-rays in such stars are thought to be produced by shocks that occur when high-velocity stellar winds ram into slower dense material. The Chandra spectra show a temperature component of about 5 million to 10 million degrees, which is consistent with this model.
However, four of these five stars also show additional components between 30 million and 60 million degrees. "The fact that some of these massive stars show such a hot component and some not, and that a hot component seems to be more common than previously assumed, is an important new aspect in the spectral behavior of these stars," said David Huenemoerder, research physicist at the MIT Center for Space Research. Standard shock models cannot explain such high temperatures, which may be caused by magnetically confined plasmas of the kind generally attributed only to stars like the Sun. Such an effect would support the suspicion that some aspects of the X-ray emission of massive stars may not be different from our Sun, which also has a hot corona. More study is needed to confirm this conclusion. The latest in NASA's series of Great Observatories, Chandra is the "X-ray Hubble," launched in July 1999 into a deep-space orbit around the Earth. Chandra carries a large X-ray telescope to focus X-rays from objects in the sky. An X-ray telescope cannot work on the ground because the X-rays are absorbed by the Earth's atmosphere. The HETGS was built by the Massachusetts Institute of Technology with Bruno Rossi Professor Claude Canizares as Principal Investigator. The ACIS X-ray camera was conceived and developed for NASA by Penn State and the Massachusetts Institute of Technology under the leadership of Gordon Garmire, Evan Pugh Professor of Astronomy and Astrophysics at Penn State. The Orion observation was part of Prof. Canizares' guaranteed observing time during the first round of Chandra observations. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program. TRW Inc., Redondo Beach, California, is the prime contractor for the spacecraft. The Smithsonian's Chandra X-ray Center controls science and flight operations from Cambridge, Massachusetts.
Orion Trapezium Handout Constellation Orion To follow Chandra's progress, visit the Chandra site at: http://chandra.harvard.edu AND http://chandra.nasa.gov Various images for this release and a postscript version of a preprint of the accepted science paper (The Astrophysical Journal) can be downloaded from http://space.mit.edu/~nss/orion/orion.html
How C2 Goes Wrong (Briefing Chart)
2014-06-01
Genomics & Genetics | National Agricultural Library
VLT Smashes the Record of the Farthest Known Galaxy
NASA Astrophysics Data System (ADS)
2004-03-01
Redshift 10 Galaxy discovered at the Edge of the Dark Ages [1] Summary Using the ISAAC near-infrared instrument on ESO's Very Large Telescope, and the magnification effect of a gravitational lens, a team of French and Swiss astronomers [2] has found several faint galaxies believed to be the most remote known. Further spectroscopic studies of one of these candidates have provided a strong case for what is now the new record holder - and by far - of the most distant galaxy known in the Universe. Named Abell 1835 IR1916, the newly discovered galaxy has a redshift of 10 [3] and is located about 13,230 million light-years away. It is therefore seen at a time when the Universe was merely 470 million years young, that is, barely 3 percent of its current age. This primeval galaxy appears to be ten thousand times less massive than our Galaxy, the Milky Way. It might well be among the first class of objects which put an end to the Dark Ages of the Universe. This remarkable discovery illustrates the potential of large ground-based telescopes in the near-infrared domain for the exploration of the very early Universe. PR Photo 05a/04: Abell 1835 IR1916 - the Farthest Galaxy - Seen in the Near-Infrared PR Photo 05b/04: Two-dimensional Spectra of Abell 1835 IR1916 Digging into the past Like palaeontologists who dig deeper and deeper to find the oldest remains, astronomers try to look further and further to scrutinise the very young Universe. The ultimate quest? Finding the first stars and galaxies that formed just after the Big Bang. More precisely, astronomers are trying to explore the last "unknown territories", the boundary between the "Dark Ages" and the "Cosmic Renaissance". Rather shortly after the Big Bang, which is now believed to have taken place some 13,700 million years ago, the Universe plunged into darkness.
The relic radiation from the primordial fireball had been stretched by the cosmic expansion towards longer wavelengths and neither stars nor quasars had yet been formed which could illuminate the vast space. The Universe was a cold and opaque place. This sombre era is therefore quite reasonably dubbed the "Dark Ages". A few hundred million years later, the first generation of stars and, later still, the first galaxies and quasars, produced intense ultraviolet radiation, gradually lifting the fog over the Universe. This was the end of the Dark Ages and, with a term again taken over from human history, is sometimes referred to as the "Cosmic Renaissance". Astronomers are trying to pin down when - and how - exactly the Dark Ages finished. This requires looking for the remotest objects, a challenge that only the largest telescopes, combined with a very careful observing strategy, can take up. Using a Gravitational Telescope With the advent of 8-10 meter class telescopes spectacular progress has been achieved during the last decade. Indeed it has since become possible to observe with some detail several thousand galaxies and quasars out to distances of nearly 12 billion light-years (i.e. up to a redshift of 3 [3]). In other words astronomers are now able to study individual galaxies, their formation, evolution, and other properties over typically 85 % of the past history of the Universe. Further in the past, however, observations of galaxies and quasars become scarce. Currently, only a handful of very faint galaxies are seen approximately 1,200 to 750 million years after the Big Bang (redshift 5-7). Beyond that, the faintness of these sources and the fact their light is shifted from the optical to the near infrared has so far severely limited the studies. 
An important breakthrough in this quest for the earliest formed galaxy has now been achieved by a team of French and Swiss astronomers [2] using ESO's Very Large Telescope (VLT) equipped with the near-infrared sensitive instrument ISAAC. To accomplish this, they had to combine the light amplification effect of a cluster of galaxies - a Gravitational Telescope - with the light gathering power of the VLT and the excellent sky conditions prevailing at Paranal. Searching for distant galaxies The hunt for such faint, elusive objects demands a particular approach. First of all, very deep images of a cluster of galaxies named Abell 1835 were taken using the ISAAC near-infrared instrument on the VLT. Such relatively nearby massive clusters are able to bend and amplify the light of background sources - a phenomenon called Gravitational Lensing and predicted by Einstein's theory of General Relativity. This natural amplification allows the astronomers to peer at galaxies which would otherwise be too faint to be seen. In the case of the newly discovered galaxy, the light is amplified approximately 25 to 100 times! Combined with the power of the VLT it has thereby been possible to image and even to take a spectrum of this galaxy. Indeed, the natural amplification effectively increases the aperture of the VLT from 8.2-m to 40-80 m. The deep near-IR images taken at different wavelengths have allowed the astronomers to characterise the properties of a few thousand galaxies in the image and to select a handful of them as potentially very distant galaxies. Using previously obtained images taken at the Canada-France-Hawaii Telescope (CFHT) on Mauna Kea and images from the Hubble Space Telescope, it has then been verified that these galaxies are indeed not seen in the optical. In this way, six candidate high redshift galaxies were recognised whose light may have been emitted when the Universe was less than 700 million years old. 
To confirm and obtain a more precise determination of the distance of one of these galaxies, the astronomers obtained Director's Discretionary Time to use ISAAC on the VLT again, this time in its spectroscopic mode. After several months of careful analysis of the data, the astronomers are convinced that they have detected a weak but clear spectral feature in the near-infrared domain. The astronomers have made a strong case that this feature is most certainly the Lyman-alpha emission line typical of these objects. This line, which occurs in the laboratory at a wavelength of 0.1216 μm, that is, in the ultraviolet, has been stretched to the near infrared at 1.34 μm, making Abell 1835 IR1916 the first galaxy known to have a redshift as large as 10. The most distant galaxy known to date ESO PR Photo 05a/04: ISAAC images of Abell 1835 [Preview - JPEG: 405 x 400 pix - 240k] [Normal - JPEG: 810 x 800 pix - 760k] ESO PR Photo 05b/04: Two-dimensional spectra of Abell 1835 IR1916 [Preview - JPEG: 555 x 400 pix - 208k] [Normal - JPEG: 1110 x 800 pix - 570k] Captions: ESO PR Photo 05a/04 shows an ISAAC image in the near-infrared of the core of the lensing cluster Abell 1835 (upper) with the location of the galaxy Abell 1835 IR1916 (white circle). The thumbnail images at the bottom show the images of the remote galaxy in the visible R-band (HST-WPC image) and in the J-, H-, and K-bands. The fact that the galaxy is not detected in the visible image but present in the others - and more so in the H-band - is an indication that this galaxy has a redshift around 10. ESO PR Photo 05b/04 is a reproduction from two-dimensional spectra around the emission line at 1.33745 μm showing the detected emission line of Abell 1835 IR1916 (circle above). If identified as Ly-alpha (0.1216 μm), this leads to a redshift z=10.
The line has been observed in two independent spectra corresponding to two different settings of the spectrograph: the right panels show the spectra in the short wavelength setting (centred on 1.315 μm), the long wavelength setting (centred on 1.365 μm), and in the composite, respectively. The line is seen in the dark circles. This is the strongest case for a redshift in excess of the current spectroscopically confirmed record at z=6.6 and the first case of a double-digit redshift. Scaling the age of the Universe to a person's lifetime (80 years, say), the previous confirmed record showed a four-year toddler. With the present observations, we have a picture of the child when he was two and a half years old. From the images of this galaxy obtained in the various wavebands, the astronomers deduce that it is undergoing a period of intense star formation. But the amount of stars formed is estimated to be "only" 10 million times the mass of the sun, approximately ten thousand times smaller than the mass of our Galaxy, the Milky Way. In other words, what the astronomers see is the first building block of the present-day large galaxies. This finding agrees well with our current understanding of the process of galaxy formation corresponding to a successive build-up of the large galaxies seen today through numerous mergers of "building blocks", smaller and younger galaxies formed in the past. It is these building blocks which may have provided the first light sources that lifted the fog over the Universe and put an end to the Dark Ages. For Roser Pelló, from the Observatoire Midi-Pyrénées (France) and co-leader of the team, "these observations show that under excellent sky conditions like those at ESO's Paranal Observatory, and using strong gravitational lensing, direct observations of distant galaxies close to the Dark Ages are feasible with the best ground-based telescopes." 
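The redshift quoted above follows directly from the two wavelengths given for the line: the Lyman-alpha rest wavelength of 0.1216 μm and the observed line position near 1.34 μm.

```latex
1 + z \;=\; \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}}
      \;=\; \frac{1.34\,\mu\mathrm{m}}{0.1216\,\mu\mathrm{m}}
      \;\approx\; 11,
\qquad z \approx 10 .
```

The same arithmetic applied to the previous record, z = 6.6, places Lyman-alpha at about 0.92 μm, which is why pushing beyond that redshift required a near-infrared instrument such as ISAAC.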
The other co-leader of the team, Daniel Schaerer from the Geneva Observatory and University (Switzerland), is excited: "This discovery opens the way to future explorations of the first stars and galaxies in the early Universe."
Java Image I/O for VICAR, PDS, and ISIS
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Levoe, Steven R.
2011-01-01
This library, written in Java, supports input and output of images and metadata (labels) in the VICAR, PDS image, and ISIS-2 and ISIS-3 file formats. Three levels of access exist. The first level comprises the low-level, direct access to the file. This allows an application to read and write specific image tiles, lines, or pixels and to manipulate the label data directly. This layer is analogous to the C-language "VICAR Run-Time Library" (RTL), which is the image I/O library for the (C/C++/Fortran) VICAR image processing system from JPL MIPL (Multimission Image Processing Lab). This low-level library can also be used to read and write labeled, uncompressed images stored in formats similar to VICAR, such as ISIS-2 and -3, and a subset of PDS (image format). The second level of access involves two codecs based on Java Advanced Imaging (JAI) to provide access to VICAR and PDS images in a file-format-independent manner. JAI is supplied by Sun Microsystems as an extension to desktop Java, and has a number of codecs for formats such as GIF, TIFF, JPEG, etc. Although Sun has deprecated the codec mechanism (replaced by IIO), it is still used in many places. The VICAR and PDS codecs allow any program written using the JAI codec spec to use VICAR or PDS images automatically, with no specific knowledge of the VICAR or PDS formats. Support for metadata (labels) is included, but is format-dependent. The PDS codec, when processing PDS images with an embedded VICAR label ("dual-labeled images," such as those used for MER), presents the VICAR label in a new way that is compatible with the VICAR codec. The third level of access involves VICAR, PDS, and ISIS Image I/O plugins. The Java core includes an "Image I/O" (IIO) package that is similar in concept to the JAI codec, but is newer and more capable. Applications written to the IIO specification can use any image format for which a plug-in exists, with no specific knowledge of the format itself.
A software to digital image processing to be used in the voxel phantom development.
Vieira, J W; Lima, F R A
2009-11-15
Anthropomorphic models used in computational dosimetry, also called phantoms, are based on digital images recorded from scans of real people by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Voxel phantom construction requires computational processing for transformations of image formats, stacking of two-dimensional (2-D) images into three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration and segmentation, among others. A researcher in computational dosimetry will rarely find all of these capabilities in a single piece of software, and this gap almost always slows the pace of research or forces the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above, in order to obtain an image that can be used in a computational exposure model, motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in dissertations and theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses Portuguese in its implementation and interfaces. This paper presents the second version of the DIP, whose main changes are a more formal organization of menus and menu items, and a new menu for digital image segmentation. Currently, the DIP contains the menus Fundamentos (Fundamentals), Visualizações (Visualizations), Domínio Espacial (Spatial Domain), Domínio de Frequências (Frequency Domain), Segmentações (Segmentations) and Estudos (Studies). Each menu contains items and sub-items with functionalities that usually take an image as input and produce an image or an attribute as output. The DIP reads, edits, and writes binary files containing the 3-D matrix corresponding to a stack of axial images of a given geometry, which can be a human body or another volume of interest. It can also read other types of computational images and perform format conversions.
When the task involves only an output image, it is saved as a JPEG file using the Windows default; when it involves an image stack, the output binary file is called SGI (Simulações Gráficas Interativas, Interactive Graphic Simulations), an acronym already used in other publications of the GDN/CNPq.
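As a rough illustration of the kind of stack handling DIP performs, the sketch below round-trips a tiny 3-D matrix through a binary file. The file layout (a 12-byte header of three little-endian uint32 dimensions, followed by raw uint8 voxels, slice by slice) is an assumption for illustration, not the actual SGI file specification:

```python
import struct

def write_stack(path, stack):
    """stack: list of 2-D slices (lists of rows of ints 0-255).
    Hypothetical layout: header (nx, ny, nz) then raw uint8 voxels."""
    nz = len(stack)
    ny = len(stack[0])
    nx = len(stack[0][0])
    with open(path, "wb") as f:
        f.write(struct.pack("<III", nx, ny, nz))
        for sl in stack:
            for row in sl:
                f.write(bytes(row))

def read_stack(path):
    """Read the header, then rebuild the nested 3-D list slice by slice."""
    with open(path, "rb") as f:
        nx, ny, nz = struct.unpack("<III", f.read(12))
        data = f.read(nx * ny * nz)
    return [[list(data[(z * ny + y) * nx:(z * ny + y) * nx + nx])
             for y in range(ny)] for z in range(nz)]

# Round-trip a tiny 2x2x2 stack.
stack = [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]
write_stack("phantom.sgi", stack)
assert read_stack("phantom.sgi") == stack
```

A real voxel phantom would simply use larger dimensions and segmented tissue identifiers as the voxel values.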
Isolated Star-Forming Cloud Discovered in Intracluster Space
NASA Astrophysics Data System (ADS)
2003-01-01
Subaru and VLT Join Forces in New Study of Virgo Galaxy Cluster [1] Summary At a distance of some 50 million light-years, the Virgo Cluster is the nearest galaxy cluster. It is located in the zodiacal constellation of the same name (The Virgin) and is a large and dense assembly of hundreds of galaxies. The "intracluster" space between the Virgo galaxies is permeated by hot X-ray emitting gas and, as has become clear recently, by a sparse "intracluster population of stars". So far, stars have been observed to form in the luminous parts of galaxies. The most massive young stars are often visible indirectly by the strong emission from surrounding cocoons of hot gas, which is heated by the intense radiation from the embedded stars. These "HII regions" (pronounced "Eitch-Two" and so named because of their content of ionized hydrogen) may be very bright and they often trace the beautiful spiral arms seen in disk galaxies like our own Milky Way. New observations by the Japanese 8-m Subaru telescope and the ESO Very Large Telescope (VLT) have now shown that massive stars can also form in isolation, far from the luminous parts of galaxies. During a most productive co-operation between astronomers working at these two world-class telescopes, a compact HII region has been discovered at the very boundary between the outer halo of a Virgo cluster galaxy and Virgo intracluster space. This cloud is illuminated and heated by a few hot and massive young stars. The estimated total mass of the stars in the cloud is only a few hundred times that of the Sun. Such an object is rare at the present epoch. However, there may have been more in the past, at which time they were perhaps responsible for the formation of a fraction of the intracluster stellar population in clusters of galaxies. Massive stars in such isolated HII regions will explode as supernovae at the end of their short lives, and enrich the intracluster medium with heavy elements. 
Observations of two other Virgo cluster galaxies, Messier 86 and Messier 84, indicate the presence of other isolated HII regions, thus suggesting that isolated star formation may occur more generally in galaxies. If so, this process may provide a natural explanation to the current riddle of why some young stars are found high up in the halo of our own Milky Way galaxy, far from the star-forming clouds in the main plane. The Virgo Cluster ESO PR Photo 04a/03 and ESO PR Photo 04b/03. Captions: PR Photo 04a/03 displays a sky field near some of the brighter galaxies in the Virgo Cluster. It was obtained in April 2000 with the Wide Field Imager (WFI) at the La Silla Observatory (exposure 6 x 5 min; red R-band; seeing 1.3 arcsec). The large elliptical galaxy at the centre is Messier 84; the elongated image of NGC 4388 (an active spiral galaxy, seen from the side) is in the lower left corner. The field measures 16.9 x 15.7 arcmin2. PR Photo 04b/03 shows a larger region of the Virgo cluster, with the galaxies Messier 86 (at the upper edge of the field, to the left of the centre), as well as Messier 84 (upper right) and NGC 4388 (just below the centre) that are also seen in PR Photo 04a/03. It is reproduced from a long-exposure Subaru Suprime-Cam image, obtained in the red light of ionized hydrogen (the H-alpha spectral line at wavelength 656.2 nm). In order to show the faintest possible hydrogen-emitting objects embedded in the outskirts of bright galaxies, their smooth envelopes have been "subtracted" during the image processing. The field measures 34 x 27 arcmin2. Part of this sky field is shown in colour in PR Photo 04c/03.
The galaxies in the Universe are rarely isolated - they prefer company. Many are found within dense structures, referred to as galaxy clusters, cf. e.g., ESO PR Photo 16a/99. The galaxy cluster nearest to us is seen in the direction of the zodiacal constellation Virgo (The Virgin), at a distance of approximately 50 million light-years. PR Photo 04a/03 (from the Wide Field Imager camera at the ESO La Silla Observatory) shows a small sky region near the centre of this cluster with some of the brighter cluster galaxies. PR Photo 04b/03 displays an image of a larger field (partially overlapping Photo 04a/03) in the light of ionized hydrogen - it was obtained by the Japanese 8.2-m Subaru telescope on Mauna Kea (Hawaii, USA). The field includes some of the large galaxies in this cluster, e.g., Messier 86, Messier 84 and NGC 4388.
In order to show the faintest possible hydrogen emitting objects embedded in the outskirts of bright galaxies, their smooth envelopes have been "subtracted" during the image processing. This is why they look quite different in the two photos. Clusters of galaxies are believed to have formed because of the strong gravitational pull from dark and luminous matter. The Virgo cluster is considered to be a relatively young cluster, because studies of the distribution of its member galaxies and X-ray investigations of hot cluster gas have revealed small "subclusters of galaxies" around the major galaxies Messier 87, Messier 86 and Messier 49. These subclusters are yet to merge to form a dense and smooth galaxy cluster. The Virgo cluster is apparently cigar-shaped, with its longest dimension of about 10 million light-years near the line-of-sight direction - we see it "from the end". Stars in intracluster space Galaxy clusters are dominated by dark matter. The largest fraction of the luminous (i.e. "visible") cluster mass is made up of the hot gas that permeates all of the cluster. Recent observations of "intracluster" stars have confirmed that, in addition to the individual galaxies, the Virgo cluster also contains a so-called "diffuse stellar component", which is located in the space between the cluster galaxies. The first hint of this dates back to 1951 when Swiss astronomer Fritz Zwicky (1898-1974), working at the 5-m telescope at Mount Palomar in California (USA), claimed the discovery of diffuse light coming from the space between the galaxies in another large cluster of galaxies, the Coma cluster. The brightness of this intracluster light is 100 times fainter than the average night-sky brightness on the ground (mostly caused by the glow of atoms in the upper terrestrial atmosphere) and its measurement is difficult even with present technology. We now know that this intracluster glow comes from individual stars in that region. 
Planetary nebulae More recently, astronomers have undertaken a new and different approach to detect the elusive intracluster stars. They now search for Sun-like stars in their final dying phase during which they eject their outer layers into surrounding space. At the same time they unveil their small and hot stellar core which appears as a "white dwarf star". Such objects are known as "planetary nebulae" because some of those nearby, e.g. the "Dumbbell Nebula" (cf. ESO PR Photo 38a/98) resemble the disks of the outer solar system planets when viewed in small telescopes. The ejected envelope is illuminated and heated by the very hot star at its centre. This nebula emits strongly in characteristic emission lines of oxygen (green; at wavelengths 495.9 and 500.7 nm) and hydrogen (red; the H-alpha line at 656.2 nm). Planetary nebulae may be distinguished from other emission nebulae by the fact that their main green oxygen line at 500.7 nm is normally about 3 to 5 times brighter than the red H-alpha line. Search for intracluster planetary nebulae An international team of astronomers [2] is now carrying out a very challenging research programme, aimed at finding intracluster planetary nebulae. For this, they observe the regions between cluster galaxies with specially designed, narrow-band optical filters tuned to the wavelength of the green oxygen lines. The main goal is to study the overall properties of the diffuse stellar component in the nearby Virgo cluster. How much diffuse light comes from the intracluster space, how is it distributed within the cluster, and what is its origin? Because the stars in this region are apparently predominantly old, the most likely explanation of their presence in this region is that they formed inside individual galaxies, which were subsequently stripped of many of their stars during close encounters with other galaxies during the initial stages of cluster formation. 
These "lost" stars were then dispersed into intracluster space where we now find them. The Subaru observations ESO PR Photo 04c/03 and ESO PR Photo 04d/03. Captions: PR Photo 04c/03 shows the general location of the newly discovered compact HII region with respect to a previously published Subaru Suprime-Cam image of NGC 4388. The image combines H-alpha narrow-band (hydrogen), O[III] narrow-band (oxygen), and broad-band optical V-band data. The extended pink filamentary structure in this image is due to gas ionized by the radiation from the nucleus of the galaxy. The vertical lines are caused by detector saturation of bright objects. The field of view is 11.6 x 5.0 arcmin2. The outlined region indicates the sky field shown in PR Photo 04d/03, which is an H-alpha image of a 4 x 3 arcmin2 region in the Virgo intracluster region. This is part of the area selected for spectroscopic follow-up observations with the FORS2 multimode instrument at the 8.2-m VLT YEPUN telescope. The image shows the confirmed compact HII region (in blue circle to the left) and the confirmed intracluster planetary nebula (in yellow and red circle at the top). The two other objects (in red circles) are additional planetary nebulae candidates, which will soon be observed spectroscopically. North is up, and East is left. The newly discovered HII region (blue circle) is well visible in PR Photo 04c/03 and faintly in the high-resolution versions of PR Photo 04a/03 and PR Photo 04b/03.
Japanese and European astronomers used the Suprime-Cam wide-field mosaic camera at the 8-m Subaru telescope (Mauna Kea, Hawaii, USA) to search for intracluster planetary nebulae in one of the densest regions of the Virgo cluster, cf. PR Photo 04b/03. They needed a telescope of this large size in order to select such objects and securely discriminate them from the thousands of foreground stars in the Milky Way and background galaxies. In particular, by observing in two narrow-band filters sensitive to oxygen and hydrogen, respectively, the planetary nebulae visible in this field could be "separated" from distant (high-redshift) background galaxies, which do not have strong emission in both the green and red bands. It is very time-consuming to observe the weak H-alpha emission and this can only be done with a big telescope.
Some 40 intracluster planetary nebulae candidates were found in this field which had the expected oxygen/H-alpha line intensity ratios of 3 - 5, such as those depicted in PR Photo 04d/03. Unexpectedly, however, the data also showed a small number of star-like emission objects with oxygen/H-alpha line ratios of about 1. This is more typical of a cloud of ionized gas around young, massive stars - like the so-called HII regions in our own galaxy, the Milky Way. However, it would be very unusual to find such star formation regions in the intracluster region, so follow-up spectroscopic observations were clearly needed for confirmation. The VLT measurements ESO PR Photo 04e/03. Captions: PR Photo 04e/03 displays the emission spectrum (in the visible/near-IR spectral region) of the compact HII region in the Virgo intracluster field, as obtained with the FORS2 multi-mode instrument of the 8.2-m VLT YEPUN telescope on Paranal. Emission lines from oxygen ([OIII]) and hydrogen (H-alpha, H-beta, H-gamma) atoms as well as ionized sulphur ([SII], [SIII]) are identified. The only way to make sure that these unusual objects are actually powered by young stars is by a detailed spectroscopic study, analyzing the emitted light over a wide range of wavelengths. One of the objects was observed in this way in April 2002 with the FORS2 multi-mode instrument at the 8.2-m VLT YEPUN telescope at the ESO Paranal Observatory (Chile). This was a most challenging observation, even for this very powerful facility, requiring several hours of exposure time. The brightness of the faint object (the flux of the oxygen [OIII 500.7]-line) was comparable to that of a 60-Watt light bulb at a distance of about 6.6 million km, i.e., about 17 times farther than the Moon.
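The photometric selection criterion described above (an [OIII]/H-alpha line ratio of about 3 to 5 marks a planetary nebula candidate, while a ratio near 1 points to an HII region) can be sketched as a toy discriminant; the exact thresholds are illustrative, taken loosely from the text:

```python
def classify(oiii_flux, halpha_flux):
    """Toy discriminant based on the [OIII]5007/H-alpha line flux ratio.
    Thresholds are illustrative, not the survey's actual cuts."""
    r = oiii_flux / halpha_flux
    if 3.0 <= r <= 5.0:
        return "planetary nebula candidate"
    if r < 2.0:
        return "HII region candidate"
    return "ambiguous"

print(classify(4.0, 1.0))
print(classify(1.1, 1.0))
```

In the actual survey, of course, candidates selected this way still required spectroscopic confirmation.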
The recorded (long-slit) spectrum (PR Photo 04e/03) is indeed that of an HII region, with characteristic emission lines from hydrogen, oxygen and sulphur, and with underlying blue "continuum" emission from hot, young stars. This is the first concrete evidence that some of the ionized hydrogen gas in the intracluster medium near NGC 4388 is heated by massive stars, rather than radiation from the nucleus of the galaxy. Comparing the spectrum with simple starburst models showed that this HII region is "powered" by one or two hot and massive (O-type) stars. The best-fitting starburst model implies an estimated total mass of young stars of some 400 solar masses with an age of about 3 million years. The object is obviously very compact - it is indeed unresolved in all the images. The inferred radius of the HII region is about 11 light-years. Young stars form far from galaxies This compact star-forming region is located about 3.4 arcmin north and 0.9 arcmin west of the galaxy NGC 4388, corresponding to a distance of some 82,000 light-years (projected) from the main star-forming regions in this galaxy. The small cloud is moving away from us with an observed velocity of 2670 km/sec. This is considerably faster than the mean velocity of the Virgo cluster (about 1200 km/sec) but similar to that of NGC 4388 (2520 km/sec), indicating that it is probably falling through the Virgo cluster core together with NGC 4388, but it cannot have moved far during the comparatively short lifetime of its massive stars. It is not known whether it once was or still is bound to NGC 4388, or whether it only belonged to the surroundings that fell into the Virgo cluster with this galaxy. In any case, the existence of this HII region is a clear demonstration that stars can form in the "diffuse" outskirts of galaxies, if not in intracluster space. Because of internal dynamical processes, the stars in this object cannot remain forever in a dense cluster. 
Within a few hundred million years they will disperse and mix with the diffuse stellar population nearby. This isolated star formation is therefore likely to contribute to the intracluster stellar population, either directly, or after having moved away from the halo of NGC 4388. This mode of isolated star formation does not contribute much to the total intracluster light emission - at the current rate it can explain only a small fraction of the diffuse light now observed in this region. However, it may have been more significant in the past, when protogalaxies and proto-galaxy groups, rich in neutral gas and with gas clouds at large distances from their centers, fell into the forming Virgo cluster for the first time. Prospects The existence of isolated compact HII regions like this one is important as a very different site of star formation than those normally seen in galaxies. The massive stars born in such isolated clouds will explode as supernovae and enrich the Virgo intracluster medium with metals. Other possible - but not yet spectroscopically verified - compact HII regions in the halos of both Messier 86 and Messier 84 have been detected during this work. This finding thus also calls into question the current use of emission-line planetary nebulae luminosities as a distance indicator; to obtain the best possible accuracy, it will henceforth be necessary to weed out possible HII regions in the samples. If compact HII regions exist generally in galaxies, they may possibly be the birthplaces of some of the young stars now observed in the halo of our Milky Way galaxy, high above the main plane. Observational programmes with both the Subaru and VLT telescopes are now planned to discover more of these interesting objects and to explore their properties.
Protection and governance of MPEG-21 music player MAF contents using MPEG-21 IPMP tools
NASA Astrophysics Data System (ADS)
Hendry; Kim, Munchurl
2006-02-01
MPEG (Moving Picture Experts Group) is currently standardizing the Multimedia Application Format (MAF), which aims to provide simple but practical multimedia applications to the industry. One of the interesting ongoing work items of the MAF activity is the so-called Music Player MAF, which combines MPEG-1/2 Layer III (MP3) audio, JPEG images, and metadata into a standard format. In this paper, we propose a protection and governance mechanism for the Music Player MAF by incorporating another MPEG technology, MPEG-21 IPMP (Intellectual Property Management and Protection). We present a use case for the distribution and consumption of Music Player content, the associated requirements, and how protection and governance can be implemented in conjunction with the current Music Player MAF architecture and file system. With the use of MPEG-21 IPMP, the protection and governance of Music Player MAF content fulfils the requirements of flexibility, extensibility, and granularity of protection.
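MAF files are based on the ISO base media file format, whose basic unit is a length-prefixed box (4-byte big-endian size, 4-byte type, payload). The sketch below writes and re-parses a toy box sequence; the box types and payloads are illustrative placeholders, not the normative Music Player MAF layout:

```python
import struct

def box(box_type, payload):
    # ISO base media file format box: 4-byte big-endian size, 4-byte type, payload.
    assert len(box_type) == 4
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# Toy container (illustrative only; not the normative MAF structure).
mp3_payload = b"\xff\xfb" + b"\x00" * 30   # fake MP3 frame bytes
jpeg_payload = b"\xff\xd8\xff\xd9"         # minimal JPEG SOI/EOI markers
meta_payload = b"<metadata/>"              # placeholder metadata

maf = (box(b"ftyp", b"mp21\x00\x00\x00\x00")
       + box(b"mdat", mp3_payload + jpeg_payload)
       + box(b"meta", meta_payload))

def parse_boxes(data):
    """Walk the top-level boxes, returning (type, size) pairs."""
    out, i = [], 0
    while i < len(data):
        size, = struct.unpack(">I", data[i:i + 4])
        out.append((data[i + 4:i + 8].decode(), size))
        i += size
    return out

print(parse_boxes(maf))
```

An IPMP-protected file would additionally carry boxes describing the protection tools and governance rules alongside the encrypted media payload.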
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Lisonek, Petr; Soukal, David
2005-03-01
In this paper, we show that the communication channel known as writing in memory with defective cells is a relevant information-theoretical model for a specific case of passive warden steganography when the sender embeds a secret message into a subset C of the cover object X without sharing the selection channel C with the recipient. The set C could be arbitrary, determined by the sender from the cover object using a deterministic, pseudo-random, or a truly random process. We call this steganography "writing on wet paper" and realize it using low-density random linear codes with the encoding step based on the LT process. The importance of writing on wet paper for covert communication is discussed within the context of adaptive steganography and perturbed quantization steganography. Heuristic arguments supported by tests using blind steganalysis indicate that the wet paper steganography provides improved steganographic security for embedding in JPEG images and is less vulnerable to attacks when compared to existing methods with shared selection channels.
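A minimal sketch of the wet paper idea, using plain Gaussian elimination over GF(2) rather than the paper's low-density random linear codes with LT-process encoding: the sender may change only "dry" cells of the cover, yet the recipient extracts the message knowing only a shared parity-check matrix, not the dry set. All values below are illustrative:

```python
# Shared q x n parity-check matrix H (sender and recipient both know H).
H = [
    [1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
]
q, n = len(H), len(H[0])
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # cover bits
dry = [0, 2, 3, 5, 7, 8, 10, 11]           # cells the sender may change
m = [1, 0, 1, 1]                           # secret message bits

# Sender: solve H_dry * v = m XOR H*x over GF(2), where H_dry is H
# restricted to the dry columns, via Gaussian elimination.
rhs = [(sum(H[r][i] * x[i] for i in range(n)) + m[r]) % 2 for r in range(q)]
A = [[H[r][j] for j in dry] + [rhs[r]] for r in range(q)]
pivots, row = [], 0
for col in range(len(dry)):
    piv = next((r for r in range(row, q) if A[r][col]), None)
    if piv is None:
        continue
    A[row], A[piv] = A[piv], A[row]
    for r in range(q):
        if r != row and A[r][col]:
            A[r] = [a ^ b for a, b in zip(A[r], A[row])]
    pivots.append(col)
    row += 1
assert row == q, "need at least q linearly independent dry columns"

v = [0] * len(dry)
for r, col in enumerate(pivots):
    v[col] = A[r][-1]

# Embed: flip only dry cells; "wet" cells are left untouched.
y = x[:]
for k, j in enumerate(dry):
    y[j] ^= v[k]

# Recipient: needs only H and y, not the dry set, to read the message.
extracted = [sum(H[r][i] * y[i] for i in range(n)) % 2 for r in range(q)]
assert extracted == m
```

The paper replaces this dense solve with sparse codes so that embedding stays tractable for realistic cover sizes.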
Morgan, Karen L. M.; DeWitt, Nancy T.
2017-04-03
The U.S. Geological Survey (USGS), as part of the National Assessment of Storm-Induced Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand changes in the vulnerability of the Nation's coasts to extreme storms. On August 31, 2005, the USGS conducted an oblique aerial photographic survey from Panama City, Florida, to Lakeshore, Mississippi, and the Chandeleur Islands, Louisiana, aboard a Piper Navajo Chieftain aircraft flying at an altitude of 500 feet and approximately 1,000 feet offshore. This mission was flown to collect post-Hurricane Katrina data, which can be used to assess incremental changes in the beach and nearshore area and future coastal change. The photographs in this report are Joint Photographic Experts Group (JPEG) images. They document the state of the barrier islands and other coastal features at the time of the survey.
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the performance of Intra coding significantly surpasses that of previous still-image coding standards, such as JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. This paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm selects a reduced set of prediction modes according to their probabilities, which are estimated using a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the complexity depends on the coded sequence.
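A hedged sketch of the mode-pruning idea (not the paper's exact algorithm): given per-mode probabilities estimated for a block, e.g., by belief propagation over neighbouring blocks, rate-distortion-test only the most probable modes whose cumulative probability reaches a target coverage, instead of exhaustively testing all candidates:

```python
def prune_modes(mode_probs, coverage=0.9):
    """mode_probs: dict mode -> estimated probability.
    Returns the smallest probability-ranked prefix of modes whose
    cumulative probability reaches the coverage target."""
    ranked = sorted(mode_probs, key=mode_probs.get, reverse=True)
    kept, total = [], 0.0
    for mode in ranked:
        kept.append(mode)
        total += mode_probs[mode]
        if total >= coverage:
            break
    return kept

# Toy probabilities for the nine H.264 4x4 intra prediction modes
# (illustrative values, not estimates from a real coder).
probs = {"vertical": 0.30, "horizontal": 0.25, "DC": 0.20,
         "diag_down_left": 0.08, "diag_down_right": 0.06,
         "vert_right": 0.04, "horiz_down": 0.03,
         "vert_left": 0.02, "horiz_up": 0.02}
print(prune_modes(probs))  # the RD search now covers a few modes, not nine
```

Lowering the coverage target trades compression efficiency for speed, which is how such a scheme can expose a complexity-control knob.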
First Visiting Astronomers at VLT KUEYEN
NASA Astrophysics Data System (ADS)
2000-04-01
A Deep Look into the Universal Hall of Mirrors Starting in the evening of April 1, 2000, Ghislain Golse and Francisco Castander from the Observatoire Midi-Pyrénées (Toulouse, France) [1] were the first "visiting astronomers" at Paranal to carry out science observations with the second 8.2-m VLT Unit Telescope, KUEYEN . Using the FORS2 multi-mode instrument as a spectrograph, they measured the distances to a number of very remote galaxies, located far out in space behind two clusters of galaxies. Such observations may help to determine the values of cosmological parameters that define the geometry and fate of the Universe. After two nights of observations, the astronomers came away from Paranal with a rich harvest of data and a good feeling. "We are delighted that the telescope performed so well. It is really impressive how far out one can reach with the VLT, compared to the `smaller' 4-meter telescopes with which we previously observed. It opens a new window towards the distant, early Universe. Now we are eager to start reducing and analysing these data!" , Francisco Castander said. Measuring the Geometry of the Universe with Multiple Images in Cluster Lenses The present programme is typical of the fundamental cosmological studies that are now being undertaken with the ESO Very Large Telescope (VLT). Clusters of galaxies are very massive objects. Their gravitational fields intensify ("magnify") and distort the images of galaxies behind them. The magnification factor for the faint background galaxy population seen within a few arcminutes of the centre of a massive cluster at intermediate distance (redshift z ~ 0.2 - 0.4, i.e., corresponding to a look-back time of approx. 2 - 4 billion years) is typically larger than 2, and occasionally much larger. The clusters thus function as gravitational lenses . They may be regarded as "natural telescopes" that help us to see fainter objects further out into space than would otherwise be possible with our own telescopes. 
In a few cases, the images of the objects behind the clusters are split into several components. Knowing the distance to the objects for which we see multiple images, and the distribution of matter in the cluster that produces the lensing effect, allows us to determine the geometry of the universe in the corresponding direction, independently of its rate of expansion. For a given cluster lens, a minimum of three such multiple-imaged objects with measured distances and positions is in principle sufficient to determine the geometry of the universe in that direction, as expressed by the values of two of the main cosmological parameters, the density (Omega) and the cosmological constant (Lambda). Detailed observations of these cosmic mirages thus have a direct implication for our understanding of the universe in which we live. A study of the clusters of galaxies Abell 1689 and MS 1008 The first visiting astronomers to KUEYEN used FORS2 to measure the distances to some of the background objects that are being multiply lensed by the cluster of galaxies Abell 1689. This cluster was first discovered by American astronomer George Abell some thirty years ago when he studied photographic plates obtained at the Palomar Observatory. Since then, this cluster has been further observed, and deep images taken by the Hubble Space Telescope (HST) have revealed at least five multiply lensed objects in this direction. However, because of the faintness of these images, it has so far not been possible to measure the distances to those objects. This has only become possible now, with the advent of new and powerful astronomical instruments like the FORS2 spectrograph at KUEYEN. At the beginning of the night - before Abell 1689 was high enough in the sky to be observable - the astronomers also observed another cluster lens, MS 1008.
This cluster was discovered with the Einstein X-ray satellite and has been studied in great detail by means of images in different colours by the VLT ANTU telescope during the Science Verification phase. Spectra of distant lensed objects ESO PR Photo 10a/00 Caption: Multi-colour image of the field in the galaxy cluster MS 1008, with a 24.5-mag lensed quasar (arrow) observed at redshift z = 4.0 during the present study. This image was obtained by the VLT/ANTU telescope during its Science Verification phase. The photo is based on a composite of four images with exposure times and seeing conditions of 82 min and 0.72 arcsec (B band), 90 min and 0.65 arcsec (V band), 90 min and 0.64 arcsec (R band) and 67 min and 0.55 arcsec (I band), respectively. The field is 1.8 x 1.6 arcmin2; North is up and East is left. ESO PR Photo 10b/00 Caption: The spectrum obtained with FORS2 at KUEYEN of a quasar at redshift z = 4.0, lensed by the massive cluster of galaxies MS 1008. The redshifted Lyman-alpha line from hydrogen (rest wavelength 1216 Å in the far-ultraviolet part of the spectrum) is clearly seen in emission at 6025 Å as a high peak in the red spectral region. Another emission line, from four times ionized nitrogen (rest wavelength 1240 Å), is seen in the right wing of the Lyman-alpha line. The spectrum was obtained after two hours of exposure through a 1.0 arcsec slit in good atmospheric conditions (seeing: 0.6 arcsec).
With the comparatively large field of view of FORS2 at VLT KUEYEN, the Toulouse team obtained spectra of very faint objects, not only in the cluster core region where the multiple-lensed background galaxies are found, but also in the outer regions of the cluster where the images of objects are not split into several images, but only magnified. One of the faint objects (Photo 10a/00) turned out to be a very distant quasar with a redshift of about z = 4.0, as determined by the Lyman-alpha line well visible in the red region of its spectrum (Photo 10b/00). The quasar is therefore located at a large distance that corresponds to a time when the universe was quite young, about 10% of its current age. The measured redshift was only slightly higher than what was predicted by the observers (z = 3.6) on the basis of earlier multi-colour photometric measurements from VLT/ANTU [2]. The magnitude of this quasar is 24.5, i.e., 25 million times fainter than the faintest star that can be seen with the naked eye at a dark site. As the observers remark, this quasar, at the measured magnitude and redshift, is an intrinsically fainter member of its class. A good start Another dozen objects also showed spectral features that will allow the Toulouse team to determine their distances, once their data have been properly analysed. The detection of these spectral features in such distant and faint objects is a powerful demonstration of the extraordinary sensitivity of the KUEYEN/FORS2 combination. It is also a fine result from the very first observing night with this new facility and a good illustration of the effective use of space- and ground-based telescopes within the same research project.
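The quoted brightness factor can be checked with the magnitude scale, on which a difference delta_m corresponds to a flux ratio of 10**(0.4 * delta_m). A naked-eye limit of magnitude 6 at a dark site is assumed here (a common convention; the release does not state it explicitly):

```python
# Flux ratio between the magnitude-24.5 quasar and a magnitude-6 star,
# the conventional naked-eye limit at a dark site (assumption).
naked_eye_limit = 6.0
quasar_mag = 24.5
ratio = 10 ** (0.4 * (quasar_mag - naked_eye_limit))
print(round(ratio / 1e6))  # roughly 25, matching "25 million times fainter"
```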
The Toulouse team, with other colleagues, including Ian Smail (Durham University, UK) and Harald Ebeling (Institute for Astrophysics, Hawaii, USA), have again applied for observing time to continue this programme at the VLT , in order to measure the distances of multiple-lensed objects behind other massive clusters of galaxies observed with HST . With more observations of this type available, it will become possible to determine more accurately Omega and Lambda. Notes [1] The present project on the determination of cosmological parameters defining the geometry of the universe by means of multiple images that are gravitationally lensed by massive clusters of galaxies is carried out by a group of astronomers from the Observatoire Midi-Pyrenees (Toulouse, France), including Francisco Castander , Ghislain Golse , Jean-Paul Kneib and Genevieve Soucail . [2] The photometric redshift method to determine cosmological distances is based on measurement of colours. Depending on the redshift and hence, the distance, distinct features in the spectra of galaxies produce changes in the observed colours. More information about the photometric redshift code HyperZ is available at http://webast.ast.obs-mip.fr/hyperz.
First Images from VLT Science Verification Programme
NASA Astrophysics Data System (ADS)
1998-09-01
Two Weeks of Intensive Observations Successfully Concluded After a period of technical commissioning tests, the first 8.2-m telescope of the ESO VLT (UT1) has successfully performed an extensive series of "real science" observations , yielding nearly 100 hours of precious data. They concern all possible types of astronomical objects, from distant galaxies and quasars to pulsars, star clusters and solar system objects. This intensive Science Verification (SV) Programme took place as planned from August 17 to September 1, 1998, and was conducted by the ESO SV Team at the VLT Observatory on Paranal (Chile) and at the ESO Headquarters in Garching (Germany). The new giant telescope lived fully up to the high expectations and worked with spectacular efficiency and performance through the entire period. All data will be released by September 30 via the VLT archive and the web (with some access restrictions - see below). The Science Verification period Just before the beginning of the SV period, the 8.2-m primary mirror in its cell was temporarily removed in order to install the "M3 tower" with the tertiary mirror [1]. The reassembly began on August 15 and included re-installation at the Cassegrain focus of the VLT Test Camera that was also used for the "First Light" images in May 1998. After careful optical alignment and various system tests, the UT1 was handed over to the SV Team on August 17 at midnight local time. The first SV observations began immediately thereafter and the SV Team was active 24 hours a day throughout the two-week period. Video-conferences between Garching and Paranal took place every day at about noon Garching time (6 o'clock in the morning on Paranal). Then, while the Paranal observers were sleeping, data from the previous night were inspected and reduced in Garching, with feedback on what was best to do during the following night being emailed to Paranal several hours in advance of the beginning of the observations. 
The campaign ended in the morning of September 1 when the telescope was returned to the Commissioning Team that has since continued its work. The FORS instrument is now being installed and the first images from this facility are expected shortly. Observational circumstances During the two-week SV period, a total of 154 hours were available for astronomical observations. Of these, 95 hours (62%) were used to collect scientific data, including calibrations, e.g. flat-fielding and photometric standard star observations. 15 hours (10%) were spent to solve minor technical problems, while another 44 hours (29%) were lost due to adverse meteorological conditions (clouds or wind exceeding 15 m/sec). The amount of telescope technical downtime is very small at this moment of the UT1 commissioning. This fact provides an impressive indication of high technical reliability that has been achieved and which will be further consolidated during the next months. The meteorological conditions that were encountered at Paranal during this period were unfortunately below average, when compared to data from the same calendar period in earlier years. There was an excess of bad seeing and fewer good seeing periods than normal; see, however, ESO PR Photo 35c/98 with 0.26 arcsec image quality. Nevertheless, the measured image quality on the acquired frames was often better than the seeing measured outside the enclosure by the Paranal seeing monitor. Part of this very positive effect is due to "active field stabilization" , now performed during all observations by rapid motion (10 - 70 times per second) of the 1.1-m secondary mirror of beryllium (M2) and compensating for the "twinkling" of stars. Science Verification data soon to be released A great amount of valuable data was collected during the SV programme. 
The available programme time was distributed as follows: Hubble Deep Field - South [HDF-S; NICMOS and STIS Fields] (37.1 hrs); Lensed QSOs (3.2 hrs); High-z Clusters (6.2 hrs); Host Galaxies of Gamma-Ray Bursters (2.1 hrs); Edge-on Galaxies (7.4 hrs); Globular cluster cores (6.7 hrs); QSO Hosts (4.4 hrs); TNOs (3.4 hrs); Pulsars (1.3 hrs); Calibrations (22.7 hrs). All of the SV data are now in the process of being prepared for public release by September 30, 1998 to the ESO and Chilean astronomical communities. It will be possible to retrieve the data from the VLT archive, and a set of CDs will be distributed to all astronomical research institutes within the ESO member states and Chile. Moreover, data obtained on the HDF-S will become publicly available worldwide, and retrievable from the VLT archive. Updated information on this data release can be found on the ESO web site at http://www.eso.org/vltsv/. It is expected that the first scientific results based on the SV data will become available in the course of October and November 1998. First images from the Science Verification programme This Press Release is accompanied by three photos that reproduce some of the images obtained during the SV period. ESO PR Photo 35a/98 ESO PR Photo 35a/98 [Preview - JPEG: 671 x 800 pix - 752k] [High-Res - JPEG: 2518 x 3000 pix - 5.8Mb] This colour composite was constructed from the U+B, R and I Test Camera Images of the Hubble Deep Field South (HDF-S) NICMOS field. These images are displayed as blue, green and red, respectively. The first photo is a colour composite of the HDF-S NICMOS sky field that combines exposures obtained in different wavebands: ultraviolet (U) + blue (B), red (R) and near-infrared (I). For all of them, the image quality is better than 0.9 arcsec. Most of the objects seen in the field are distant galaxies. 
The image is scaled to show the faintest features, while the star below the large spiral galaxy is rendered approximately white. The spiral galaxy is displayed in such a way that the internal structure is visible. A provisional analysis has shown that the limiting magnitudes predicted for the HDF-S observations (27.0 - 28.5, depending on the band) were in fact reached. Technical information : For Photo 35a/98, 16 U-frames (~370 nm; total exposure time 17800 seconds; mean seeing 0.71 arcsec) and 15 B-frames (~430 nm; 10200 seconds; 0.71 arcsec) were added and combined with 8 R-frames (~600 nm; 7200 seconds; 0.49 arcsec) and 12 I-frames (~800 nm; 10150 seconds; 0.59 arcsec) to make this colour composite. Individual frames were flat-fielded and cleaned for cosmics before combination. The field shown measures 1.0 x 1.0 arcmin. North is up; East is to the left. ESO PR Photo 35b/98 ESO PR Photo 35b/98 [Preview - JPEG: 679 x 800 pix - 760k] [High-Res - JPEG: 2518 x 3000 pix - 5.7Mb] The colour composite of the HDF-S NICMOS field was constructed by combining VLT Test Camera images in the U+B and R bands with a HST NICMOS near-IR H-band exposure. These images are displayed as blue, green and red, respectively. The NICMOS image was smoothed to match the angular resolution of the R-band VLT image. The boundary of the NICMOS image is also shown. The next photo is similar to the first one, but uses a near-IR frame obtained with the Hubble Space Telescope NICMOS instrument instead of the VLT I-frame. The HST image has nearly the same total exposure time as the VLT images. Their combination is meaningful since the VLT and NICMOS images reach similar depths and show more or less the same faint objects.
This is the result of several effects compensating each other: while more distant galaxies are redder and therefore better visible at the infrared waveband of the NICMOS image and this image has a better angular resolution than those from the VLT, the collecting area of the UT1 mirror is over 11 times larger than that of the HST. It is interesting to note that all objects in the NICMOS image are also visible in the VLT images, with the exception of the very red object just left of the face-on spiral. The bright red object near the bottom has not before been detected in optical images (to the limit of R ~ 26 mag), but is clearly present in all the VLT Test Camera coadded images, with the exception of the U-band image. Both of these very red objects are possibly extremely distant, elliptical galaxies [2]. The additional information that can be obtained from the combination of the VLT and the infrared NICMOS images has an immediate bearing on the future work with the VLT. When the infrared, multi-mode ISAAC instrument enters into operation in early 1999, it will be able to obtain spectra of such objects and, in general, to deliver very deep infrared images. Thus, the combination of visual (from FORS) and infrared (from ISAAC) images and spectra promises to become an extremely powerful tool that will allow the detection of very red and therefore exceedingly distant galaxies. Moreover, it is obvious that this sky field is not very crowded - much longer exposure times will thus be possible without encountering serious problems of overlapping objects at the "confusion limit". 
Technical information : For Photo 35b/98, 16 U-frames (~370 nm; total exposure time 17800 seconds; mean seeing 0.71 arcsec) and 15 B-frames (~430 nm; 10200 seconds; 0.71 arcsec) were added and combined with 8 R-frames (~600 nm; 7200 seconds; 0.49 arcsec) as well as an H-band HST/NICMOS frame from the ST-ECF public archive (~1600 nm; 7040 seconds; 0.2 arcsec) to make this colour composite. Individual frames were flat-fielded and cleaned for cosmics before combination. The field shown measures 1.0 x 1.0 arcmin. North is up; East is to the left. ESO PR Photo 35c/98 ESO PR Photo 35c/98 [Preview - JPEG: 654 x 800 pix - 280k] [High-Res - JPEG: 2489 x 3000 pix - 2.6Mb] Coaddition of two R-band images of the edge-on galaxy ESO342-G017 , obtained with 0.26 arcsec image quality. The galaxy ESO342-G017 was observed on August 19, 1998 during a spell of excellent observing conditions. Two exposures, each lasting 120 seconds, were taken through a red filter to produce this photo. The quality of the original images is excellent, with seeing (FWHM) of only 0.26 arcsec measured on the stars in the frame. ESO342-G017 is an Sc-type spiral galaxy seen edge-on, and the Test Camera was rotated so that the disk of the galaxy appears horizontal in the figure. Thanks to the image quality, the photo shows much detail in the rather flat disk, including a very thin, obscuring dust band and some brighter knots, most probably star-forming regions. This galaxy is located well outside the Milky Way band in the southern constellation of Sagittarius. Its distance is about 400 million light-years (recession velocity about 7,700 km/sec). A number of more distant galaxies are seen in the background on this short exposure. Technical information : Photo 35c/98 is reproduced from a composite of two 120-second exposures in the red R-band (~600 nm) of the edge-on galaxy ESO342-G017, both with 0.26 arcsec image quality.
The frames were flat-fielded and cleaned for cosmics before combination. The field shown measures 1.5 x 1.5 arcmin. North is inclined 38° clockwise from the top; East is to the left. Notes: [1] The flat, elliptically shaped tertiary mirror M3 is mounted on top of the M3 Tower that is fixed in the center of the M1 Cell. The tower can rotate about its axis and deflects the light coming from the M2 mirror to the astronomical instruments on either Nasmyth platform. A mechanism at the top of the M3 Tower is used to move the M3 mirror away from the optical path when the instrument at the Cassegrain focus is used, e.g. the Test Camera during the SV observations. [2] This effect is due to the fact that the more distant a galaxy is, the larger is the velocity with which it recedes from us (Hubble's law). The larger the velocity, the further its emitted light will be shifted redwards in the observed spectrum (the Doppler effect) and the redder its image will appear to us. By comparing the brightness of a distant galaxy in different wavebands (measuring its colour), it is therefore in practice possible to estimate its redshift and thus its distance (the " photometric redshift" method). How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
VLT Data Flow System Begins Operation
NASA Astrophysics Data System (ADS)
1999-06-01
Building a Terabyte Archive at the ESO Headquarters The ESO Very Large Telescope (VLT) is the sum of many sophisticated parts. The site at Cerro Paranal in the dry Atacama desert in Northern Chile is one of the best locations for astronomical observations from the surface of the Earth. Each of the four 8.2-m telescopes is a technological marvel with self-adjusting optics placed in a gigantic mechanical structure of the utmost precision, continuously controlled by advanced soft- and hardware. A multitude of extremely complex instruments with sensitive detectors capture the faint light from distant objects in the Universe and record the digital data fast and efficiently as images and spectra, with a minimum of induced noise. And now the next crucial link in this chain is in place. A few nights ago, following an extended test period, the VLT Data Flow System began providing the astronomers with a steady stream of high-quality, calibrated image and spectral data, ready to be interpreted. The VLT project has entered into a new phase with a larger degree of automation. Indeed, the first 8.2-m Unit Telescope, ANTU, with the FORS1 and ISAAC instruments, has now become a true astronomy machine . A smooth flow of data through the entire system ESO PR Photo 25a/99 ESO PR Photo 25a/99 [Preview - JPEG: 400 x 292 pix - 104k] [Normal - JPEG: 800 x 584 pix - 264k] [High-Res - JPEG: 3000 x 2189 pix - 1.5M] Caption to ESO PR Photo 25a/99 : Simplified flow diagramme for the VLT Data Flow System . It is a closed-loop software system which incorporates various subsystems that track the flow of data all the way from the submission of proposals to storage of the acquired data in the VLT Science Archive Facility. The DFS main components are: Program Handling, Observation Handling, Telescope Control System, Science Archive, Pipeline and Quality Control. Arrows indicate lines of feedback. 
Already from the start of this project more than ten years ago, the ESO Very Large Telescope was conceived as a complex digital facility to explore the Universe. In order for astronomers to be able to use this marvellous research tool in the most efficient manner possible, the VLT computer software and hardware systems must guarantee a smooth flow of scientific information through the entire system. This process starts when the astronomers submit well-considered proposals for observing time and it ends with large volumes of valuable astronomical data being distributed to the international astronomical community. For this, ESO has produced an integrated collection of software and hardware, known as the VLT Data Flow System (DFS) , that manages and facilitates the flow of scientific information within the VLT Observatory. Early information about this new concept was published as ESO Press Release 12/96 and extensive tests were first carried out at ESOs 3.5-m New Technology Telescope (NTT) at La Silla, cf. ESO Press Release 03/97 [1]. The VLT DFS is a complete (end-to-end) system that guarantees the highest data quality by optimization of the observing process and repeated checks that identify and eliminate any problems. It also introduces automatic calibration of the data, i.e. the removal of external effects introduced by the atmospheric conditions at the time of the observations, as well as the momentary state of the telescope and the instruments. From Proposals to Observations In order to obtain observing time with ESO telescopes, also with the VLT, astronomers must submit a detailed observing proposal to the ESO Observing Programmes Committee (OPC) . It meets twice a year and ranks the proposals according to scientific merit. More than 1000 proposals are submitted each year, mostly by astronomers from the ESO members states and Chile; the competition is fierce and only a fraction of the total demand for observing time can be fulfilled. 
During the submission of observing proposals, DFS software tools available over the World Wide Web enable the astronomers to simulate their proposed observations and provide accurate estimates of the amount of telescope time they will need to complete their particular scientific programme. Once the proposals have been reviewed by the OPC and telescope time is awarded by the ESO management according to the recommendation by this Committee, the successful astronomers begin to assemble detailed descriptions of their intended observations (e.g. position in the sky, time and duration of the observation, the instrument mode, etc.) in the form of computer files called Observation Blocks (OBs) . The software to make OBs is distributed by ESO and used by the astronomers at their home institutions to design their observing programs well before the observations are scheduled at the telescope. The OBs can then be directly executed by the VLT and result in an increased efficiency in the collection of raw data (images, spectra) from the science instruments on the VLT. The activation (execution) of OBs can be done by the astronomer at the telescope on a particular set of dates ( visitor mode operation) or it can be done by ESO science operations astronomers at times which are optimally suited for the particular scientific programme ( service mode operation). An enormous VLT Data Archive ESO PR Photo 25b/99 ESO PR Photo 25b/99 [Preview - JPEG: 400 x 465 pix - 160k] [Normal - JPEG: 800 x 929 pix - 568k] [High-Res - JPEG: 3000 x 3483 pix - 5.5M] Caption to ESO PR Photo 25b/99 : The first of several DVD storage robots at the VLT Data Archive at the ESO headquarters; it holds 1100 DVDs (with a total capacity of about 16 Terabytes) that may be rapidly accessed by the archive software system, ensuring fast availability of the requested data.
The raw data generated at the telescope are stored by an archive system that sends these data regularly back to ESO headquarters in Garching (Germany) in the form of CD and DVD ROM disks. While the well-known Compact Disks (CD ROMs) store about 600 Megabytes (600,000,000 bytes) each, the new Digital Versatile Disks (DVD ROMs) - of the same physical size - can store up to 3.9 Gigabytes (3,900,000,000 bytes) each, or over 6 times more. The VLT will eventually produce more than 20 Gigabytes (20,000,000,000 bytes) of astronomical data every night, corresponding to about 10 million pages of text [2]. Some of these data also pass through "software pipelines" that automatically remove the instrumental effects on the data and deliver data products to the astronomer that can more readily be turned into scientific results. Ultimately these data are stored in a permanent Science Archive Facility at ESO headquarters which is jointly operated by ESO and the Space Telescope European Coordinating Facility (ST-ECF). From here, data are distributed to astronomers on CD ROMs and over the World Wide Web. The archive facility is being developed to enable astronomers to "mine" the large volumes of data that will be collected from the VLT in the coming years. Within the first five years of operations the VLT is expected to produce around 100 Terabytes (100,000,000,000,000 bytes) of data. It is difficult to visualize this enormous amount of information. However, it corresponds to the content of 50 million books of 1000 pages each; they would occupy some 2,500 kilometres of bookshelves! The VLT Data Flow System enters into operation ESO PR Photo 25c/99 ESO PR Photo 25c/99 [Preview - JPEG: 400 x 444 pix - 164k] [Normal - JPEG: 800 x 887 pix - 552k] [High-Res - JPEG: 3000 x 3327 pix - 6.4M] Caption to ESO PR Photo 25c/99 : Astronomers from the ESO Data Flow Operations Group at work with the VLT Archive. Science operations with the first VLT 8.2-m telescope ( ANTU ) began on April 1, 1999.
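The release's round numbers are internally consistent and easy to verify using its own definition of 2,000 characters per "normal printed page":

```python
# Sanity-check the archive arithmetic quoted in the press release.
CHARS_PER_PAGE = 2_000            # the release's definition of a printed page
PAGES_PER_BOOK = 1_000

nightly_bytes = 20e9              # ~20 GB of data per night
pages_per_night = nightly_bytes / CHARS_PER_PAGE
print(f"{pages_per_night:.0e} pages")     # ~1e7, i.e. 10 million pages

five_year_bytes = 100e12          # ~100 TB in the first five years
books = five_year_bytes / (CHARS_PER_PAGE * PAGES_PER_BOOK)
print(f"{books / 1e6:.0f} million books")  # 50 million books of 1000 pages

# 50 million books on 2,500 km of shelves works out to ~5 cm of shelf per book
shelf_km = 2_500
print(f"{shelf_km * 1e5 / books:.0f} cm of shelf per book")
```

The implied 5 cm of shelf space per 1000-page book is plausible, so the "2,500 kilometres of bookshelves" figure checks out.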
Following the first call for proposals to use the VLT in October 1998, the OPC met in December and the observing schedule was finalized in early 1999. The related Observation Blocks were prepared by the astronomers in February and March. Service-mode observations began in April and by late May the first scientific programs conducted by ESO science operations were completed. Raw data, instrument calibration information and the products of pipeline processing from these programs have now been assembled and packed onto CD ROMs by ESO science operations staff. On June 15 the first CD ROMs were delivered to astronomers in the ESO community. This event marks the closing of the data flow loop at the VLT for the first time and the successful culmination of more than 5 years of hard work by ESO engineers and scientists to implement a system for efficient and effective scientific data flow. This was achieved by a cross-organization science operations team involving staff in Chile and Europe. With the VLT Data Flow System, a wider research community will have access to the enormous wealth of data from the VLT. It will help astronomers to keep pace with the new technologies and extensive capabilities of the VLT and so obtain world-first scientific results and new insights into the universe. Notes [1] A more technical description of the VLT Data Flow System is available in Chapter 10 of the VLT Whitebook. [2] By definition, one "normal printed page" contains 2,000 characters. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
Observation sequences and onboard data processing of Planet-C
NASA Astrophysics Data System (ADS)
Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.
Planet-C, or VCO (Venus Climate Orbiter), will carry 5 cameras in the UV-IR region to investigate the atmospheric dynamics of Venus: IR1 (1-micrometer IR camera), IR2 (2-micrometer IR camera), UVI (UV Imager), LIR (long-IR camera) and LAC (Lightning and Airglow Camera). During the 30-hr orbit, designed to quasi-synchronize with the super-rotation of the Venus atmosphere, 3 groups of scientific observations will be carried out: (i) image acquisition with 4 cameras (IR1, IR2, UVI, LIR; 20 min in 2 hrs); (ii) LAC operation, only when VCO is within the Venus shadow; and (iii) radio occultation. These observation sequences will define the scientific outputs of the VCO program, but the sequences must be reconciled with command, telemetry downlink, thermal and power conditions. To maximize the science data downlink, the data must be well compressed, and the compression efficiency and image quality have significant scientific importance in the VCO program. Images from the 4 cameras (IR1, IR2 and UVI: 1K x 1K; LIR: 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K was selected because it (a) has no block noise, (b) is efficient, (c) supports both reversible and irreversible compression, (d) is patent/royalty free, and (e) is already implemented as academic and commercial software, ICs and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 ~ 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the 4 cameras and handles onboard data processing and compression, is in the concept design stage. It is concluded that the J2K data compression logic circuits using space
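Taking the abstract's compression efficiencies at face value, the per-frame downlink volume can be sketched as follows. The 12-bit pixel depth is an assumption for illustration only; the abstract does not state the detectors' bit depth:

```python
# Rough per-frame data volumes for the VCO cameras under the quoted
# JPEG2000 efficiencies (~0.3 reversible, 0.1-0.01 irreversible).
BITS_PER_PIXEL = 12   # assumed detector bit depth (not given in the abstract)

def compressed_kbytes(width: int, height: int, ratio: float,
                      bits: int = BITS_PER_PIXEL) -> float:
    """Compressed size in kilobytes of one frame at a given compression efficiency."""
    raw_bits = width * height * bits
    return raw_bits * ratio / 8 / 1024

# IR1/IR2/UVI frames are 1K x 1K; LIR frames are 240 x 240.
print(f"1Kx1K reversible (0.3):    {compressed_kbytes(1024, 1024, 0.3):.0f} kB")
print(f"1Kx1K irreversible (0.05): {compressed_kbytes(1024, 1024, 0.05):.0f} kB")
print(f"LIR reversible (0.3):      {compressed_kbytes(240, 240, 0.3):.1f} kB")
```

Under these assumptions a reversibly compressed 1K x 1K frame is roughly half a megabyte, which makes clear why the irreversible mode matters for the telemetry budget.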
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the differences in correlation between different spectral bands, and it still works well when the band number is not a power of 2. Using a non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform to eliminate spectral redundancy, a CDF(2,2) DWT to eliminate spatial redundancy, and SPIHT+CABAC for compression coding, the experiments show that a satisfactory lossless compression result can be achieved. Using the hyperspectral image Canal from the American JPL laboratory as the data set for the lossless compression test, when the band number is not a power of 2, the lossless compression result of this algorithm is much better than the results obtained by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree and Near Minimum Spanning Tree; on average the compression ratio of this algorithm exceeds the above algorithms by 41%, 37%, 35%, 29%, 16%, 10% and 8% respectively. When the band number is a power of 2, for 128 frames of the image Canal, groupings of 8, 16 and 32 bands per group were tested; considering factors like compression storage complexity, the type of wave band and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm also has advantages in operation speed and ease of hardware implementation.
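The CDF(2,2) transform the abstract relies on is the integer 5/3 lifting wavelet, which is exactly invertible on integers and therefore suitable for lossless coding. A minimal one-dimensional sketch (even-length signals; symmetric edge handling is chosen here for simplicity, whereas the paper's "non-boundary extension" variant treats the edges differently):

```python
def cdf22_forward(x):
    """One level of the integer CDF(2,2) (5/3) lifting transform.

    Returns (approximation, detail) coefficient lists for an even-length
    integer signal. Exactly invertible, hence lossless.
    """
    even, odd = x[::2], x[1::2]
    n = len(odd)
    # Predict step: detail = odd - floor((left even + right even) / 2)
    d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    # Update step: approx = even + floor((left detail + right detail + 2) / 4)
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def cdf22_inverse(s, d):
    """Invert the lifting steps in reverse order (undo update, then predict)."""
    n = len(d)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    x = [0] * (2 * n)
    x[::2], x[1::2] = even, odd
    return x

signal = [12, 14, 15, 90, 91, 88, 60, 61]
s, d = cdf22_forward(signal)
assert cdf22_inverse(s, d) == signal   # perfect (lossless) reconstruction
```

Because each lifting step only adds or subtracts a rounded function of the other coefficient set, the inverse can subtract it back exactly; this is why lifting-based wavelets are the standard choice for lossless image coding.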
ImageX: new and improved image explorer for astronomical images and beyond
NASA Astrophysics Data System (ADS)
Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.
2016-08-01
The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client side Model/View code (instead of depending on backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images thereby significantly decreasing the Time To First Byte latency by a few orders of magnitude. 
We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan images, and its featureset to include basic functions like image overlay and colormaps. Users needing more advanced visualization and analysis capabilities could use a desktop tool like DS9+IRAF on another IU Trident project called StarDock, without having to download Gigabytes of FITS image data.
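The tile-pyramid idea behind viewers such as OpenSeadragon (which ImageX uses to render its tiles) can be sketched without any imaging library: at each pyramid level the image dimensions are halved, and each level is cut into fixed-size tiles. The bookkeeping below is a simplified Deep-Zoom-style illustration, not ImageX's actual tile layout:

```python
import math

def pyramid_levels(width: int, height: int) -> int:
    """Number of pyramid levels needed to shrink the image to 1x1 by halving."""
    return math.ceil(math.log2(max(width, height))) + 1

def tiles_at_level(width, height, level, max_level, tile_size=256):
    """Tile-grid dimensions (cols, rows) at a given pyramid level.

    Level max_level is full resolution; each step down halves both axes.
    """
    scale = 2 ** (max_level - level)
    w = math.ceil(width / scale)
    h = math.ceil(height / scale)
    return math.ceil(w / tile_size), math.ceil(h / tile_size)

# Example: a 4096 x 4096 frame with 256-pixel tiles.
max_level = pyramid_levels(4096, 4096) - 1
print(max_level)                                      # 12
print(tiles_at_level(4096, 4096, max_level, max_level))       # (16, 16)
print(tiles_at_level(4096, 4096, max_level - 4, max_level))   # (1, 1)
```

The viewer only ever fetches the few tiles covering the current viewport at the current zoom level, which is what keeps the Time To First Byte small even for very large images.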
NASA Astrophysics Data System (ADS)
Anders, Niels; Keesstra, Saskia; Masselink, Rens
2014-05-01
Unmanned Aerial Systems (UAS) are becoming popular tools in the geosciences due to improving technology and processing/analysis techniques. They can potentially fill the gap between spaceborne or manned aircraft remote sensing and terrestrial remote sensing, both in terms of spatial and temporal resolution. In this study we analyze a multi-temporal data set that was acquired with a fixed-wing UAS in an agricultural catchment (2 sq. km) in Navarra, Spain. The goal of this study is to register soil erosion activity after one year of agricultural activity. The aircraft was equipped with a Panasonic GX1 16MP pocket camera with a 20 mm lens to capture normal JPEG RGB images. The data set consisted of two sets of imagery acquired at the end of February in 2013 and 2014 after harvesting. The raw images were processed using Agisoft Photoscan Pro, which includes the structure-from-motion (SfM) and multi-view stereopsis (MVS) algorithms, producing digital surface models and orthophotos for both data sets. A discussion is presented that focuses on the suitability of multi-temporal UAS data and SfM/MVS processing for quantifying soil loss, mapping the distribution of eroded materials and analyzing re-occurrences of rill patterns after plowing.
The effect of flight altitude to data quality of fixed-wing UAV imagery: case study in Murcia, Spain
NASA Astrophysics Data System (ADS)
Anders, Niels; Keesstra, Saskia; Cammeraat, Erik
2014-05-01
Unmanned Aerial Systems (UAS) are becoming popular tools in the geosciences due to improving technology and processing techniques. They can potentially fill the gap between spaceborne or manned aircraft remote sensing and terrestrial remote sensing, both in terms of spatial and temporal resolution. In this study we tested a fixed-wing UAS for the application of digital landscape analysis. The focus was to analyze the effect of flight altitude on the accuracy and detail of the produced digital elevation models, derived terrain properties and orthophotos. The aircraft was equipped with a Panasonic GX1 16MP pocket camera with a 20 mm lens to capture normal JPEG RGB images. Images were processed using Agisoft Photoscan Pro, which includes the structure-from-motion and multi-view stereopsis algorithms. The test area consisted of small abandoned agricultural fields in semi-arid Murcia in southeastern Spain. The area was severely damaged after a destructive rainfall event, including damaged check dams, rills, deep gully incisions and piping. Results suggest that careful decisions on flight altitude are essential to find a balance between area coverage, ground sampling distance, UAS ground speed, camera processing speed and the accurate registration of specific soil erosion features of interest.
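The trade-off the authors describe is driven largely by ground sampling distance (GSD), which scales linearly with altitude. A quick sketch with the stated 20 mm lens; the ~4.3 µm pixel pitch is an assumption for a 16 MP Micro Four Thirds sensor such as the GX1's and is not given in the abstract:

```python
def ground_sampling_distance(altitude_m: float,
                             focal_length_mm: float = 20.0,
                             pixel_pitch_um: float = 4.3) -> float:
    """Ground footprint of one pixel (in cm) for a nadir-pointing camera.

    GSD = altitude * pixel_pitch / focal_length (simple pinhole model).
    """
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100

for altitude in (50, 100, 200):
    gsd = ground_sampling_distance(altitude)
    print(f"{altitude:>4} m altitude -> GSD {gsd:.1f} cm/pixel")
```

Doubling the altitude doubles both the GSD and the swath width, so flying higher covers the 2 sq. km area faster but may blur out centimetre-scale rills; this is the balance the results refer to.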
A Supermassive Black Hole in a Nearby Galaxy
NASA Astrophysics Data System (ADS)
2001-03-01
ISAAC Inspects the Center of Centaurus A Summary The nearby galaxy Centaurus A harbours a supermassive black hole at its centre . Using the ISAAC instrument at the ESO Very Large Telescope (VLT) , an international team of astronomers [1] has peered right through the spectacular dust lane of the peculiar galaxy Centaurus A , located approximately 11 million light-years away. They were able to probe the thin disk of gas that surrounds the very center of this galaxy. The new measurements show that the compact nucleus in the middle weighs more than 200 million solar masses ! This is too much just to be due to normal stars. The astronomers thus conclude that a supermassive black hole is lurking at the centre of Centaurus A . PR Photo 08a/01 : Visual image of the centre of Centaurus A . PR Photo 08b/01 : ISAAC spectrum of the centre of Centaurus A . PR Photo 08c/01 : The corresponding rotation curve from which the mass of the black hole was deduced. A well studied galaxy with a hidden center ESO PR Photo 08a/01 ESO PR Photo 08a/01 [Preview - JPEG: 352 x 400 pix - 160k] [Normal - JPEG: 704 x 800 pix - 376k] Caption : PR Photo 08a/01 shows a small area in the direction of the heavily obscured centre of the peculiar radio galaxy Centaurus A , as seen in visual light. It measures about 80 x 80 arcsec², or 4400 x 4400 light-years² at the distance of this galaxy, and has been reproduced from exposures made with the FORS2 multi-mode instrument at the 8.2-m VLT KUEYEN telescope at Paranal. The full field may be seen in PR Photo 05b/00. Technical information about this photo is available below. The galaxy Centaurus A (NGC 5128) is one of the most studied objects in the southern sky. The unique appearance of this galaxy was already noticed in 1847 by the famous British astronomer John Herschel, who catalogued the southern skies and made a comprehensive list of "nebulae". A fine photo of Centaurus A from the VLT was published last year as PR Photo 05b/00.
Herschel could not know, however, that this beautiful and spectacular appearance is due to an opaque dust lane that covers the central part of the galaxy. This dust is likely the remnant of a cosmic merger between a giant elliptical galaxy and a smaller spiral galaxy full of dust. Centaurus A is even more spectacular when observed with radio telescopes. It is in fact one of the brightest radio sources in the sky (its name indicates that it is the strongest radio source in the southern constellation Centaurus). At a distance of merely 11 million light-years, it is also the nearest radio galaxy. The radio emission from the very compact centre exhibits strong activity. It has for some time been suspected that this powerful energy release is due to accretion of material onto a massive black hole. The details of the centre have remained largely unknown, due to the dense dust lane that completely obscures the central part of the galaxy in optical light, cf. PR Photo 08a/01. Observations of the dust emission in the mid-infrared spectral region were carried out with the ISOCAM camera onboard the ESA Infrared Space Observatory. They revealed a structure extending over 5 arcmin (16,500 light-years or 5 kpc), centred on the compact radio source, and very similar to that of a small barred galaxy. This bar may serve to funnel gas towards the active nucleus of the galaxy. Peering through the dust To look into the very centre of the galaxy, the observations must be carried out at wavelengths longer than those of visual light, e.g., in the infrared spectral region. This is because the dust absorbs infrared radiation much less than visual light. Infrared observations of the innermost regions of Centaurus A (on an arcsec scale) were recently made by a team of astronomers from Italy, the UK and the USA [1], by means of the multi-mode ISAAC instrument on the ESO Very Large Telescope (VLT) at Paranal Observatory.
In fact, the team started their infrared studies of this galaxy already in 1997, using the NICMOS camera on board the Hubble Space Telescope (HST). That close view of the galaxy nucleus revealed a thin gaseous disk of material close to the centre, which looked very much like an accretion disk feeding material into a central black hole. The HST image prompted further spectroscopic observations to probe the rotation of the disk, and thus to measure the mass of the central object. The ISAAC spectra ESO PR Photo 08b/01 [Preview - JPEG: 400 x 303 pix - 216k] [Normal - JPEG: 800 x 606 pix - 572k] [Hires - JPEG: 2274 x 3000 pix - 4.0M] Caption: PR Photo 08b/01 shows two wavelength regions of one of the infrared ISAAC spectra of the centre of Centaurus A. The direction of the long spectrograph slit is vertical and the dispersion (wavelength) direction is horizontal; longer wavelengths are towards the right. The two emission lines shown originate in singly ionized iron ([FeII]; rest wavelength 1256.68 nm) and in hydrogen (Paschen-Beta; 1281.81 nm), and both are clearly tilted. This is due to the rapid rotation of the accretion disk surrounding the supermassive black hole in the centre of the galaxy. The light from the receding edge of the disk is Doppler-shifted towards the red (to the right) and the light from the part of the disk approaching us is shifted to the left. This may be better seen in the inserted enlargements. Therefore the inclined disk shows a tilted spectrum. These motions may be represented in a rotation curve, cf. PR Photo 08c/01. There are other emitting areas above and below the nucleus, especially in the Paschen-Beta line. Technical information about these photos is available below. ESO PR Photo 08c/01 [Preview - JPEG: 341 x 400 pix - 56k] [Normal - JPEG: 682 x 800 pix - 132k] Caption: PR Photo 08c/01 shows the rotation curve (velocity vs.
distance from the centre) of the disk surrounding the black hole at the centre of Centaurus A. From the ISAAC spectrum displayed in PR Photo 08b/01, the `average' gas velocities along the slit direction can be derived. Position `0' on the horizontal axis indicates the exact position of the galaxy nucleus; at the distance of Centaurus A, 1 arcsec corresponds to 55.5 light-years (17 pc). The blue triangles and the red squares correspond to emission lines from singly ionized iron atoms ([Fe II]) and hydrogen (Paschen-Beta), respectively. The high velocities are the hallmark of a central black hole. The thick solid line represents the expected velocities, assuming the presence of a 200 million solar-mass black hole at the centre. Technical information about these photos is available below. The spectroscopic observations required both a high sensitivity in the infrared and excellent seeing conditions. This combination was achieved using ISAAC at the VLT. Peering through the thick walls of dust enshrouding the nuclear region of Centaurus A, the astronomers succeeded in acquiring several high-quality spectra of the thin central disk; the exposure time for each spectrum was about 35 min. The spectra did show the characteristic shape of a rotating disk, cf. PR Photo 08b/01. High-speed motions of the gas in this disk were detected (PR Photo 08c/01), which are the hallmark of a black hole. An analysis of the rotational speed of the disk leads to a determination of the total mass of the material inside the disk. This showed that about 200 million solar masses of material reside inside the nuclear disk. A massive black hole The astronomers quickly realized that this enormous mass within the central region cannot be due to normal stars, as it would then be much more luminous. Instead they conclude that the most conservative explanation for the dark, central mass concentration observed in Centaurus A is indeed a supermassive black hole.
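The step from the rotation curve to the quoted mass is, in essence, a Keplerian enclosed-mass estimate, M ≈ v²r/G. The sketch below illustrates the arithmetic only; the rotation speed (~225 km/s at a radius of 1 arcsec, i.e. 17 pc) is an assumed illustrative value chosen to reproduce the quoted mass, not a measurement taken from the article.

```python
# Keplerian enclosed-mass estimate M = v^2 * r / G from one rotation-curve point.
# The velocity below is an assumed illustrative value, not from the article.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
PC = 3.086e16          # parsec, m

v = 225e3              # rotation speed in m/s at the chosen radius (assumed)
r = 17 * PC            # radius: 1 arcsec at Centaurus A ~ 17 pc (from the text)

m_enclosed = v**2 * r / G
print(f"enclosed mass ~ {m_enclosed / M_SUN:.1e} solar masses")  # ~2.0e8
```

In practice the published analysis fits the full rotation curve (and accounts for the disk inclination), but this one-point estimate shows why velocities of a few hundred km/s on parsec scales demand a ~2 x 10^8 solar-mass central object.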
The most likely mass of this "central beast" is then about 200 million times the mass of our Sun. This discovery confirms the previous suspicion that the active nucleus of Centaurus A is powered by a supermassive black hole. It is the first time infrared spectroscopy has been used to weigh a black hole. Many other galaxies have dust-enshrouded nuclei, and the excellent capabilities of ISAAC now hold great potential to discover and weigh many more black holes. More Information The research described in this Press Release is reported in a research article ("Peering through the dust: Evidence for a supermassive Black Hole at the Nucleus of Centaurus A from VLT IR spectroscopy") that will appear in the international research journal the Astrophysical Journal on March 10, 2001. The full article is also available on the web as astro-ph/0011059. Note [1]: The team is composed of Ethan Schreier (Principal Investigator; Space Telescope Science Institute - STScI, Baltimore, USA), Alessandro Marconi (Arcetri Observatory, Italy), Alessandro Capetti (Turin Observatory, Italy), David Axon (University of Hertfordshire, United Kingdom), Anton Koekemoer (STScI, USA) and Duccio Macchetto (ESA/STScI, USA). Technical information about the photos PR Photo 08a/01 is reproduced from three exposures, obtained during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (300 sec exposure, image quality 0.60 arcsec; here rendered in blue colour), V (240 sec, 0.60 arcsec; green) and R (240 sec, 0.55 arcsec; red). The field covered corresponds to about 80 x 80 arcsec² (395 x 395 pix², 1 pix = 0.2 arcsec). North is up and East is left. PR Photo 08b+c/01: The original ISAAC spectra were exposed for 35 min each with an average seeing of 0.5 arcsec. Three spectrograph slits were used, but only one of these is shown here. It was centred on the nucleus of Centaurus A and oriented at 33°, measured counter-clockwise from the North direction.
The spectral pixel size is 0.6 Angstrom x 0.15 arcsec (i.e., 14 km/sec x 8.3 light-year). The large and small figures cover 2300 km/s x 1665 light-years and 1150 km/s x 330 light-years, respectively.
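The quoted pixel-scale conversions can be checked directly. The sketch below re-derives them using only figures stated in the release (0.6 Å per spectral pixel at the [FeII] rest wavelength of 1256.68 nm, 0.15 arcsec per spatial pixel, and 55.5 light-years per arcsec at the distance of Centaurus A):

```python
# Convert ISAAC's spectral and spatial pixel sizes into physical units,
# using only figures quoted in the press release.
C_KM_S = 299_792.458   # speed of light, km/s

# Spectral axis: 0.6 Angstrom per pixel at the [FeII] rest wavelength.
dlambda_nm = 0.06      # 0.6 Angstrom = 0.06 nm
rest_nm = 1256.68      # [FeII] rest wavelength, nm
v_per_pix = C_KM_S * dlambda_nm / rest_nm   # Doppler velocity per pixel

# Spatial axis: 0.15 arcsec per pixel; 1 arcsec corresponds to 55.5 light-years.
ly_per_pix = 0.15 * 55.5

print(f"{v_per_pix:.1f} km/s per pixel")          # ~14 km/s, as quoted
print(f"{ly_per_pix:.2f} light-years per pixel")  # ~8.3 ly, as quoted
```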
Into the Epoch of Galaxy Formation
NASA Astrophysics Data System (ADS)
2000-02-01
Infrared VLT Observations Identify Hidden Galaxies in the Early Universe Working with the ESO Very Large Telescope (VLT) at the Paranal Observatory, a group of European astronomers [1] has just obtained one of the deepest looks into the distant Universe ever made by an optical telescope. These observations were carried out in the near-infrared spectral region and are part of an attempt to locate very distant galaxies that have so far escaped detection in the visual bands. The first results are very promising and some concentrations of galaxies at very large distances were uncovered. Some early galaxies may be in hiding Current theories hypothesize that more than 80% of all stars ever formed were assembled in galaxies during the latter half of the elapsed lifetime of the Universe, i.e., during the past 7-8 billion years. However, doubts have arisen about these ideas. There are now observational indications that a significant number of the galaxies that formed during the first 20% of the age of the Universe, i.e. within about 3 billion years after the Big Bang, may not be visible to optical telescopes. In some cases we do not see them because their light is obscured by dust. Other distant galaxies may escape detection by optical telescopes because star formation in them has ceased and their light is mainly emitted in the red and infrared spectral bands. This is because, while very young galaxies mostly contain hot and blue stars, older galaxies have substantial numbers of cool and red stars. They are then dominated by an older, "evolved" stellar population that is cooler and redder. The large cosmic velocities of these galaxies further enhance this effect by causing their light to be "redshifted" towards longer wavelengths, i.e. into the near-infrared spectral region.
Observations in the infrared needed Within the present programme, long exposures in near-infrared wavebands were made with the Infrared Spectrometer And Array Camera (ISAAC), mounted on ANTU, the first of the four 8.2-m VLT Unit Telescopes. A first analysis of the new observations indicates that "evolved" galaxies were already present when the Universe was only 4 billion years old. This information is of great importance to our understanding of how the matter in the early Universe condensed and the first galaxies and stars came into being. While in the nearby Universe evolved galaxies are preferentially located in denser environments such as groups and clusters of galaxies, little is currently known about the distribution in space of such objects at early cosmic epochs. In order to be able to see such obscured and/or "evolved" galaxies in the early Universe, and to look for hitherto unknown galaxies beyond the limits of "deep-field" imaging in visible spectral bands, it is necessary to employ other observing techniques. The astronomers must search for such objects on large-field, very-long-exposure sky images obtained in the near-infrared (NIR, wavelength 1-2 µm) region of the electromagnetic spectrum and at even longer wavelengths (> 10 µm) in the far-IR and in the sub-mm range. Such observations are beyond the capability of the infrared cameras installed on the world's 4-m class telescopes. However, the advent of the ISAAC instrument at the 8.2-m ANTU telescope has now opened new and exciting research opportunities in this direction for European astronomers. With ISAAC, it is possible to obtain "deep" NIR images in an unprecedentedly wide field of view, covering a sky area about 7 times larger than with the best instruments previously available on very large telescopes. Such observations also benefit greatly from the very good optical quality provided by the active optics control of the VLT, as well as the excellent Paranal site.
The ISAAC/ANTU observations ESO PR Photo 06a/00 [Preview - JPEG: 400 x 427 pix - 69k] [Normal - JPEG: 800 x 853 pix - 195k] [Full-Res - JPEG: 942 x 1004 pix - 635k] Caption: ESO PR Photo 06a/00 displays a 4.5 arcmin² area of the "AXAF Deep Field", as observed with the ISAAC multi-mode instrument at VLT ANTU in the near-IR K band (at wavelength 2.2 µm). The total integration time is 8.5 hours and the limiting magnitude is K = 23.5 per arcsec² (at S/N-ratio = 3). The pixel size is 0.15 arcsec. North is up and east is left. The "Full-Res" version maintains the original pixels and is of the highest reproduction quality (least file compression). The reproduction is "negative", with dark objects on a light sky, in order to better show the faintest objects. See also the technical note below. ESO PR Photo 06b/00 [Preview - JPEG: 400 x 451 pix - 103k] [Normal - JPEG: 800 x 902 pix - 270k] [Full-Res - JPEG: 924 x 1042 pix - 704k] Caption: ESO PR Photo 06b/00 is a composite colour image of the field shown in PR Photo 06a/00. It is a combination of the K-band image from ANTU/ISAAC shown in PR Photo 06a/00 with two images obtained in the B and R bands with the SUSI-2 optical imager at the New Technology Telescope (NTT) on La Silla in the framework of the ESO-EIS survey. Note the relatively high density of red galaxies, visible in the upper right part of this image. The colours of most of these galaxies are consistent with those of "evolved" galaxies, already present when the Universe was only 4 billion years old. The "Full-Res" version maintains the original pixels and is of the highest reproduction quality (least file compression). The group of European astronomers recently obtained a first "ultra-deep" 4.5 arcmin² image in the near-infrared J (wavelength 1.2 µm) and K (2.2 µm) bands, centred in the so-called "AXAF Deep Field", cf. PR Photos 06a-b/00.
This area of the sky is remarkably devoid of bright stars and provides a clear view towards the remote Universe, as there is little obscuring dust in our own Galaxy, the Milky Way, in this direction. It is therefore uniquely suited to probe the depth of the Universe. It is exactly for this reason that it was selected for a deep survey to be conducted with the Chandra X-Ray Observatory (CXO) during the guaranteed observing time of the former ESO Director General, Professor Riccardo Giacconi, and as a deep field of the ESO Imaging Survey (EIS, cf. ESO Press Photos 46a-j/99). The sky field observed with ISAAC and shown above is near the centre of the WFI image (ESO PR Photo 46a/99); it is displaced about 3.6 arcmin towards West and 1.0 arcmin towards North. As seen in the photos, there are great numbers of faint galaxies in this direction. Those of very red colour emit most of their light in the infrared spectral region and are particularly interesting since they may either be highly obscured or contain mostly old stars, as described above. New research possibilities With observations such as these, ISAAC is now opening a new window towards the distant Universe. The comparison of the new NIR observations with earlier exposures at other wavelengths provides unique research opportunities. It is possible to measure the average star formation rate and the total stellar mass content in galaxies that are heavily obscured and therefore not observable in the optical bands, and which may constitute a substantial fraction of the primeval galaxy population. Such measurements will also make it possible to test current theories of galaxy formation, which predict that stars are gradually assembled into galaxies and hence envisage a progressive decline in the galaxy population towards very early cosmic times, in particular within 1-2 billion years after the Big Bang.
Moreover, a comparison of NIR, optical and X-ray images will make it possible to gain new insights into the nuclear activity at the centre of star-forming galaxies. It will become possible to study the distinct effects due to massive black holes and bursts of star formation. Concentrations of galaxies at large distances The relatively large field of view of ISAAC makes it possible to gain information about the distribution in space of the faintest and most distant, evolved galaxies, and also about the existence of associations of distant galaxies. A first clear example is the concentration of galaxies that appear uniformly yellow in PR Photo 06b/00, apparently tracing a group of galaxies that was already assembled when the Universe was only 6 billion years old. A confirmation of the distance of a few of these galaxies has already been obtained by means of spectral observations in the framework of an ESO Large Programme, entitled "A Stringent Test on the Formation of Early Type and Massive Galaxies" and carried out by another group of astronomers [2]. A further clear example of a concentration of distant galaxies is seen in the upper right part of PR Photo 06b/00. The very red colours of several galaxies in this sky area indicate that they are even more distant, "evolved" galaxies, already present when the Universe was only 1/3 of its current age. Notes [1] The European team consists of Emanuele Giallongo (Principal Investigator), Adriano Fontana, Nicola Menci and Francesco Poli (all at Rome Observatory), Stephane Arnouts and Sandro D'Odorico (European Southern Observatory, Garching), Stefano Cristiani (ST European Coordinating Facility, Garching) and Paolo Saracco (Milan Observatory). The data analysis was performed at the Milan (P. Saracco) and Rome (A. Fontana, F. Poli) Observatories.
[2] This programme is conducted by Andrea Cimatti (Principal Investigator) and Emanuele Daddi (both at Arcetri Observatory), Tom Broadhurst, Sandro D'Odorico, Roberto Gilmozzi and Alvio Renzini (European Southern Observatory), Stefano Cristiani (ST European Coordinating Facility, Garching), Adriano Fontana, Emanuele Giallongo, Nicola Menci and Francesco Poli (Rome Observatory), Marco Mignoli, Lucia Pozzetti and Giovanni Zamorani (Bologna Observatory) and Paolo Saracco (Milan Observatory). Technical note: The K-band image (PR Photo 06a/00) is the result of 510 min of integration time with ISAAC at VLT ANTU. The 3-sigma magnitude limit is about K = 23.5 per arcsec². A J-band image was also obtained during 200 min of integration, with a 3-sigma limit of J = 25 per arcsec². The seeing FWHM (Full Width at Half Maximum) is 0.65 arcsec for both bands. The redshift, estimated on the basis of the measured colours of the mentioned over-density of yellow galaxies (cf. PR Photo 06b/00), is between 0.6 and 0.7, and that of the red galaxies is between 1 and 1.4. ESO PR Photos may be reproduced, if credit is given to the European Southern Observatory.
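The claim that the red galaxies at z ≈ 1-1.4 were seen when the Universe was about 1/3 of its current age can be reproduced with a standard flat Lambda-CDM age integral, t(z) ∝ ∫_z^∞ dz' / ((1+z') E(z')). The cosmological parameters below (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7) are conventional assumed values, not figures taken from the press release:

```python
import math

# Assumed flat Lambda-CDM parameters (not from the press release).
H0 = 70.0                      # km/s/Mpc
OM, OL = 0.3, 0.7              # matter and dark-energy density parameters
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 in Gyr (977.8 converts km/s/Mpc to 1/Gyr)

def age_at_z(z, zmax=1000.0, steps=100_000):
    """Cosmic age at redshift z: midpoint-rule integral of dz'/((1+z')E(z'))."""
    E = lambda zz: math.sqrt(OM * (1 + zz) ** 3 + OL)
    dz = (zmax - z) / steps
    total = 0.0
    for i in range(steps):
        zz = z + (i + 0.5) * dz
        total += dz / ((1 + zz) * E(zz))
    return HUBBLE_TIME_GYR * total  # tail beyond zmax is negligible

age_now = age_at_z(0.0)   # ~13.5 Gyr
age_z12 = age_at_z(1.2)   # ~5 Gyr, i.e. roughly 1/3 of the present age
print(f"age today ~ {age_now:.1f} Gyr, age at z=1.2 ~ {age_z12:.1f} Gyr")
```

With these assumptions, a z = 1.2 galaxy is observed roughly 5 Gyr after the Big Bang, consistent with the "1/3 of the current age" statement in the text.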
Trahearn, Nicholas; Tsang, Yee Wah; Cree, Ian A; Snead, David; Epstein, David; Rajpoot, Nasir
2017-06-01
Automation of downstream analysis may offer many potential benefits to routine histopathology. One area of interest for automation is the scoring of multiple immunohistochemical markers to predict the patient's response to targeted therapies. Automated serial slide analysis of this kind requires robust registration to identify common tissue regions across sections. We present an automated method for co-localized scoring of Estrogen Receptor and Progesterone Receptor (ER/PR) in breast cancer core biopsies using whole slide images. Regions of tumor in a series of fifty consecutive breast core biopsies were identified by annotation on H&E whole slide images. Sequentially cut, immunohistochemically stained sections were scored manually, before being digitally scanned and then exported into JPEG 2000 format. A two-stage registration process was performed to identify the annotated regions of interest in the immunohistochemistry sections, which were then scored using the Allred system. Overall correlation between manual and automated scoring for ER and PR was 0.944 and 0.883, respectively, with 90% of ER and 80% of PR scores within one point of agreement. This proof-of-principle study indicates that slide registration can be used as a basis for automation of the downstream analysis of clinically relevant biomarkers in the majority of cases. The approach is likely to be improved by implementation of safeguarding analysis steps post registration. © 2016 International Society for Advancement of Cytometry.
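For readers unfamiliar with the Allred system used above: it sums a proportion score (0-5, from the fraction of positively stained tumour nuclei) and an intensity score (0-3) into a total of 0-8, and the manual-vs-automated comparison is then a correlation over paired totals. The sketch below is illustrative only; the cutoffs follow the standard Allred convention, and the score lists are hypothetical, not data from this study:

```python
# Illustrative Allred scoring and manual-vs-automated comparison.
# Score lists are hypothetical; only the scoring convention is standard.
def allred_proportion_score(frac_positive):
    """Map the fraction of positively stained tumour nuclei to a 0-5 score."""
    if frac_positive == 0:    return 0
    if frac_positive <= 0.01: return 1
    if frac_positive <= 0.10: return 2
    if frac_positive <= 0.33: return 3
    if frac_positive <= 0.66: return 4
    return 5

def allred_total(frac_positive, intensity):
    """Total Allred score: proportion (0-5) + staining intensity (0-3)."""
    return allred_proportion_score(frac_positive) + intensity

def pearson(xs, ys):
    """Pearson correlation of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

manual    = [8, 7, 0, 6, 8, 3, 5, 8]   # hypothetical manual Allred totals
automated = [8, 6, 0, 6, 7, 4, 5, 8]   # hypothetical automated totals
within_one = sum(abs(m - a) <= 1 for m, a in zip(manual, automated)) / len(manual)
print(f"r = {pearson(manual, automated):.3f}, within one point: {within_one:.0%}")
```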
NASA Technical Reports Server (NTRS)
Miller, James G.
1997-01-01
In this Progress Report, we describe our further development of advanced ultrasonic nondestructive evaluation methods applied to the characterization of anisotropic materials. We present images obtained from experimental measurements of ultrasonic diffraction patterns transmitted through water only and transmitted through water and a thin woven composite. All images of diffraction patterns have been included on the accompanying CD-ROM in the JPEG format and Adobe™ Portable Document Format (PDF), in addition to the hardcopies of the images contained in this report. In our previous semi-annual Progress Report (NAG 1-1848, December 1996), we proposed a simple model to simulate the effect of a thin woven composite on an insonifying ultrasonic pressure field. This initial approach provided an avenue to begin development of a robust measurement method for nondestructive evaluation of anisotropic materials. In this Progress Report, we extend that work by performing experimental measurements on a single layer of a five-harness biaxial woven composite to investigate how a thin, yet architecturally complex, material interacts with the insonifying ultrasonic field. In Section 2 of this Progress Report we describe the experimental arrangement and methods for data acquisition of the ultrasonic diffraction patterns upon transmission through a thin woven composite. We also briefly describe the thin composite specimen investigated. Section 3 details the analysis of the experimental data, followed by the experimental results in Section 4. Finally, a discussion of the observations and conclusions is found in Section 5.