Sample records for JPEG image server

  1. Request redirection paradigm in medical image archive implementation.

    PubMed

    Dragan, Dinu; Ivetić, Dragan

    2012-08-01

    It is widely recognized that JPEG2000 addresses key issues in medical imaging: storage, communication, sharing, remote access, interoperability, and presentation scalability. JPEG2000 support was therefore added to the DICOM standard in Supplement 61. Two approaches to supporting JPEG2000 medical images are explicitly defined by the DICOM standard: replacing the DICOM image format with the corresponding JPEG2000 codestream, or using the Pixel Data Provider service of DICOM Supplement 106. The latter requires a two-step retrieval of the medical image: a DICOM request and response from a DICOM server, followed by a JPIP request and response from a JPEG2000 server. We propose a novel strategy for transmitting scalable JPEG2000 images extracted from a single codestream over a DICOM network using the DICOM Private Data Element, without sacrificing system interoperability. It employs a request redirection paradigm: the DICOM request and response pass from the JPEG2000 server through the DICOM server. The paper presents a programming solution that implements the request redirection paradigm in a DICOM-transparent manner. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. Toward privacy-preserving JPEG image retrieval

    NASA Astrophysics Data System (ADS)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using a permutation cipher and a stream cipher, and the encrypted versions are then uploaded to the server. With an encrypted query image provided by an authorized user, the server can extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user, who has the encryption key, can decrypt the returned encrypted images, whose plaintext content is similar to the query image. The experimental results show that the proposed scheme not only provides an effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
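    The blockwise local-variance feature at the heart of such a scheme can be sketched as follows. This is an illustrative simplification, not the paper's exact formulation: the function names, block size, and the L1-based similarity are our assumptions, and the actual scheme computes directional variances over encrypted data.

```python
import numpy as np

def blockwise_local_variance(img, block=8):
    """Variance of each non-overlapping block -- a simplified stand-in
    for the local-variance features compared between images."""
    h, w = img.shape
    h -= h % block  # drop partial blocks at the borders
    w -= w % block
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, block, block)
    return tiles.var(axis=(1, 2))

def similarity(f1, f2):
    """Higher is more similar: negative mean L1 distance between features."""
    return -np.abs(f1 - f2).mean()
```

    The server would rank database images by this similarity against the query's features and return the top matches still encrypted.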

  3. Workflow opportunities using JPEG 2000

    NASA Astrophysics Data System (ADS)

    Foshee, Scott

    2002-11-01

    JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling to JPEG rather than a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.

  4. On-demand rendering of an oblique slice through 3D volumetric data using JPEG2000 client-server framework

    NASA Astrophysics Data System (ADS)

    Joshi, Rajan L.

    2006-03-01

    In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation, which allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and the JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol to obtain JPEG2000 compressed data from the server on an as-needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only the relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
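    The precinct bookkeeping described above can be sketched roughly as below. This single-resolution, single-component version is an assumption for illustration only: real JPEG2000 precincts exist per resolution level and per component, and the precinct size is a codestream parameter.

```python
import math

def precincts_for_region(x0, y0, x1, y1, precinct=256):
    """Grid cells (row, col) of precinct-sized tiles that intersect the
    axis-aligned bounding box of a rendered slice region. The client
    would request compressed data for only these cells."""
    cols = range(x0 // precinct, math.ceil(x1 / precinct))
    rows = range(y0 // precinct, math.ceil(y1 / precinct))
    return [(r, c) for r in rows for c in cols]
```

    For example, a 300×300 region starting at the origin touches four 256×256 precincts, so only those four need to be streamed.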

  5. JPEG2000 and dissemination of cultural heritage over the Internet.

    PubMed

    Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos

    2004-03-01

    By applying the latest image compression technologies to manage the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record, and preserve cultural heritage, but also to promote its dissemination. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine JPEG2000 image compression with client-server socket connections and a client browser plug-in to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.

  6. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit

    2008-12-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The proposed packet-based scheme has low complexity and is compliant with JPWL, Part 11 of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  7. Region of interest and windowing-based progressive medical image delivery using JPEG2000

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Mukhopadhyay, Sudipta; Wheeler, Frederick W.; Avila, Ricardo S.

    2003-05-01

    An important telemedicine application is the perusal of CT scans (in digital format) from a central server housed in a healthcare enterprise, across a bandwidth-constrained network, by radiologists at remote locations for medical diagnostic purposes. It is generally expected that a viewing station respond to an image request by displaying the image within 1-2 seconds. Owing to limited bandwidth, it may not be possible to deliver the complete image in such a short period of time with traditional techniques. In this paper, we investigate progressive image delivery solutions using JPEG 2000. An estimate of the time taken under different network bandwidths is performed to compare their relative merits. We further exploit the fact that most medical images are 12-16 bits, but are ultimately converted to an 8-bit image via windowing for display on the monitor. We propose a windowing progressive RoI technique and investigate JPEG 2000 RoI-based compression after applying a favored or default window setting to the original image. Subsequent requests for different RoIs and window settings are then processed at the server. For the windowing progressive RoI mode, we report a 50% reduction in transmission time.
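    The window/level mapping this technique exploits can be sketched as below. The linear ramp shown is a common simplification (DICOM defines minor variants of the VOI LUT function), and the function name is ours, not from the paper.

```python
import numpy as np

def apply_window(pixels, center, width):
    """Map 12-16 bit pixel values to 8-bit display values with a linear
    window: values below center - width/2 clip to 0, values above
    center + width/2 clip to 255."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    scaled = (np.asarray(pixels, dtype=np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).round().astype(np.uint8)
```

    Because the 8-bit windowed image is all the display ever shows, compressing and sending it first (and refining only on demand) is what yields the reported transmission-time savings.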

  8. Providing Internet Access to High-Resolution Lunar Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Styled Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.

  9. Providing Internet Access to High-Resolution Mars Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Styled Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.

  10. Design and evaluation of web-based image transmission and display with different protocols

    NASA Astrophysics Data System (ADS)

    Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo

    2011-03-01

    There are many Web-based image access technologies used in the medical imaging area, such as component-based (ActiveX control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side-processing Web viewers), Flash Rich Internet Applications (RIA), and HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we give an evaluation of two Web-based image display systems we developed. The first is used for thin-client Web display; it works between a PACS Web server with a WADO interface and a thin client, with the PACS Web server providing JPEG-format images to HTML pages. The second is for thick-client Web display; it works between a PACS Web server with a WADO interface and a thick client running in a browser as an ActiveX control, Flash RIA program, or HTML5 script, with the PACS Web server providing native DICOM-format images or a JPIP stream for these clients.

  11. Digital cinema system using JPEG2000 movie of 8-million pixel resolution

    NASA Astrophysics Data System (ADS)

    Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu

    2003-05-01

    We have developed a prototype digital cinema system that can store, transmit, and display extra-high-quality movies of 8-million-pixel resolution using the JPEG2000 coding algorithm. The resolution is four times that of HDTV, which enables conventional film to be replaced by digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. Coded streams of 300-500 Mbps can be transmitted continuously from the PC server using TCP/IP. The decoder performs real-time decompression at 24/48 frames per second using 120 parallel JPEG2000 processing elements, expanding the received streams into 4.5 Gbps raw video signals. The prototype LCD projector uses three 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens on a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker while preserving compatibility with cinema movies at 24 frames per second.

  12. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating-point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating, and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images, or regions within images, can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.

  13. Implementation of image transmission server system using embedded Linux

    NASA Astrophysics Data System (ADS)

    Park, Jong-Hyun; Jung, Yeon Sung; Nam, Boo Hee

    2005-12-01

    In this paper, we describe the implementation of an image transmission server on an embedded system, which is dedicated to a specific purpose and is easy to install and relocate. Since an embedded system has lower capability than a PC, we had to reduce the computational load of baseline JPEG image compression and transmission. We used the Red Hat Linux 9.0 OS on the host PC and a target board based on embedded Linux. The image sequences are obtained from a camera attached to an FPGA (Field Programmable Gate Array) board with an ALTERA chip. For effectiveness, and to avoid vendor-specific constraints, we wrote the device driver as a kernel module.

  14. Web surveillance system using platform-based design

    NASA Astrophysics Data System (ADS)

    Lin, Shin-Yo; Tsai, Tsung-Han

    2004-04-01

    We develop an SOPC platform-based design environment for multimedia communications. A soft-core processor embedded in an FPGA performs the image compression, and an Ethernet daughter board plugs into the SOPC development platform. On this basis, a web surveillance platform system is presented. The web surveillance system consists of three parts: image capture, a web server, and JPEG compression. In this architecture, the user can control the surveillance system remotely: with an IP address configured on the Ethernet daughter board, the user accesses the system via a browser. When the user accesses the surveillance system, the CMOS sensor captures the remote image, which is fed to the embedded processor; the processor immediately performs JPEG compression, and the user then receives the compressed data via Ethernet. The whole system is implemented on an APEX20K200E484-2X device.

  15. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  16. Development of a mobile emergency patient information and imaging communication system based on CDMA-1X EVDO

    NASA Astrophysics Data System (ADS)

    Yang, Keon Ho; Jung, Haijo; Kang, Won-Suk; Jang, Bong Mun; Kim, Joong Il; Han, Dong Hoon; Yoo, Sun-Kook; Yoo, Hyung-Sik; Kim, Hee-Joung

    2006-03-01

    Wireless mobile service with a high bit rate using CDMA-1X EVDO is now widely available in Korea, and mobile devices are increasingly used as a conventional communication mechanism. We have developed a web-based mobile system that communicates patient information and images over CDMA-1X EVDO for emergency diagnosis. It is composed of a mobile web application running on Microsoft Windows Server 2003 with Internet Information Services, and a mobile web PACS whose database of patient information and images was developed using Microsoft Access 2003. The wireless mobile emergency patient information and imaging communication system was developed using Microsoft Visual Studio .NET, and a JPEG 2000 ActiveX control for the PDA phone was developed using Microsoft Embedded Visual C++. CDMA-1X EVDO provides the connection between the mobile web servers and the PDA phone. This system allows fast access to the patient information database, storing both medical images and patient information, anytime and anywhere. In particular, images were compressed into JPEG2000 format and transmitted from the mobile web PACS inside the hospital to a radiologist using a PDA phone outside the hospital. The system also shows radiological images as well as physiological signal data, including blood pressure and other vital signs, in the web browser of the PDA phone, so radiologists can diagnose more effectively. Good results were obtained with an RW-6100 PDA phone in the university hospital system of Sinchon Severance Hospital in Korea.

  17. Using applet-servlet communication for optimizing window, level and crop for DICOM to JPEG conversion.

    PubMed

    Kamauu, Aaron W C; DuVall, Scott L; Wiggins, Richard H; Avrin, David E

    2008-09-01

    In the creation of interesting radiological cases in a digital teaching file, it is necessary to adjust the window and level settings of an image to effectively display the educational focus. The web-based applet described in this paper presents an effective solution for real-time window and level adjustments without leaving the picture archiving and communications system workstation. Optimized images are created, as user-defined parameters are passed between the applet and a servlet on the Health Insurance Portability and Accountability Act-compliant teaching file server.

  18. WMS Server 2.0

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Wood, James F.

    2012-01-01

    This software is a simple yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of OGC WMS 1.1.1 as a fastCGI application, using the Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are handled by a back-end server. The server has explicit support for a colocated tiled WMS, including rapid response to black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back end allows great flexibility in data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use and, depending on the storage format used, has better performance than other available implementations. WMS Server 2.0 is a high-performance WMS implementation owing to its fastCGI architecture. The configuration is relatively simple, based on a single XML file. The server provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
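    A GetMap request to a WMS 1.1.1 server of this kind looks like the following. The host and layer name are placeholders, but the query parameters are those defined by the WMS 1.1.1 protocol:

```python
from urllib.parse import urlencode

# Build a WMS 1.1.1 GetMap request URL; host and layer are hypothetical.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "example_layer",
    "STYLES": "",
    "SRS": "EPSG:4326",            # WMS 1.1.1 uses SRS (1.3.0 uses CRS)
    "BBOX": "-180,-90,180,90",     # minx,miny,maxx,maxy
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/jpeg",
}
url = "http://wms.example/wms?" + urlencode(params)
```

    The server answers such a request with a rendered JPEG (or PNG/KML, per FORMAT) covering the requested bounding box at the requested pixel size.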

  19. Efficient transmission of compressed data for remote volume visualization.

    PubMed

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times, and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client under a low-bandwidth constraint.

  20. Integration of digital gross pathology images for enterprise-wide access.

    PubMed

    Amin, Milon; Sharma, Gaurav; Parwani, Anil V; Anderson, Ralph; Kolowitz, Brian J; Piccoli, Anthony; Shrestha, Rasu B; Lauro, Gonzalo Romero; Pantanowitz, Liron

    2012-01-01

    Sharing digital pathology images for enterprise-wide use into a picture archiving and communication system (PACS) is not yet widely adopted. We share our solution and 3-year experience of transmitting such images to an enterprise image server (EIS). Gross pathology images acquired by prosectors were integrated with clinical cases into the laboratory information system's image management module, and stored in JPEG2000 format on a networked image server. Automated daily searches for cases with gross images were used to compile an ASCII text file that was forwarded to a separate institutional Enterprise Digital Imaging and Communications in Medicine (DICOM) Wrapper (EDW) server. Concurrently, an HL7-based image order for these cases was generated, containing the locations of images and patient data, and forwarded to the EDW, which combined data in these locations to generate images with patient data, as required by DICOM standards. The image and data were then "wrapped" according to DICOM standards, transferred to the PACS servers, and made accessible on an institution-wide basis. In total, 26,966 gross images from 9,733 cases were transmitted over the 3-year period from the laboratory information system to the EIS. The average process time for cases with successful automatic uploads (n=9,688) to the EIS was 98 seconds. Only 45 cases (0.5%) failed, requiring manual intervention. Uploaded images were immediately available to institution-wide PACS users. Since inception, user feedback has been positive. Enterprise-wide PACS-based sharing of pathology images is feasible, provides useful services to clinical staff, and utilizes existing information system and telecommunications infrastructure. PACS-shared pathology images, however, require a "DICOM wrapper" for multisystem compatibility.

  2. JPEG vs. JPEG 2000: an objective comparison of image encoding quality

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan

    2004-11-01

    This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
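    PSNR, the baseline metric the MOS predictions are compared against, is straightforward to compute. The sketch below is the standard definition, not code from the paper:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    distorted one; infinite when the two images are identical."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

    As the paper notes, PSNR is a purely pixel-wise measure; it ignores the structure of blockiness and blur artifacts, which is why a perceptual MOS predictor can track subjective quality better at high compression ratios.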

  3. A generalized Benford's law for JPEG coefficients and its applications in image forensics

    NASA Astrophysics Data System (ADS)

    Fu, Dongdong; Shi, Yun Q.; Su, Wei

    2007-02-01

    In this paper, a novel statistical model, based on Benford's law, for the probability distributions of the first digits of block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed, including the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for a JPEG-compressed bitmap image, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
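    The parametric logarithmic law has the form p(d) = N·log10(1 + 1/(s + d^q)) for first digits d = 1…9, and reduces to the classical Benford's law when N = 1, s = 0, q = 1. The sketch below uses that form; the fitted parameter values in the paper depend on the JPEG quality factor and are not reproduced here.

```python
import math

def generalized_benford(d, N=1.0, s=0.0, q=1.0):
    """Probability of first digit d under the generalized Benford's law.
    N is a normalization constant; s and q shape the distribution."""
    return N * math.log10(1.0 + 1.0 / (s + d ** q))

# With the classical parameters, the probabilities over d = 1..9 sum to 1.
classical = [generalized_benford(d) for d in range(1, 10)]
```

    Deviations of the observed first-digit histogram of DCT coefficients from this law are what signal prior (or double) JPEG compression.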

  4. Lossless compression of grayscale medical images: effectiveness of traditional and state-of-the-art approaches

    NASA Astrophysics Data System (ADS)

    Clunie, David A.

    2000-05-01

    Proprietary compression schemes have a cost and risk associated with their support, end of life, and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1) and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD 15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand six hundred seventy-nine (3,679) single-frame grayscale images from multiple anatomical regions, modalities, and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (compression ratio 3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both outperformed existing JPEG (3.04 with the optimum predictor choice per image; 2.79 for previous-pixel prediction, as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform-based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images at 4.00, MR at 3.59, NM at 5.98, US at 3.4, IO at 2.66, CR at 3.64, DX at 2.43, and MG at 2.62. CALIC always achieved the highest compression except for one modality, for which JPEG-LS did better (MG digital, vendor A: JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.

  5. Immunochromatographic diagnostic test analysis using Google Glass.

    PubMed

    Feng, Steve; Caire, Romain; Cortazar, Bingen; Turan, Mehmet; Wong, Andrew; Ozcan, Aydogan

    2014-03-25

    We demonstrate a Google Glass-based rapid diagnostic test (RDT) reader platform capable of qualitative and quantitative measurements of various lateral flow immunochromatographic assays and similar biomedical diagnostic tests. Using a custom-written Glass application and without any external hardware attachments, one or more RDTs labeled with Quick Response (QR) code identifiers are simultaneously imaged using the built-in camera of the Google Glass, through a hands-free and voice-controlled interface, and digitally transmitted to a server for processing. The acquired JPEG images are automatically processed to locate all the RDTs and, for each RDT, to produce a quantitative diagnostic result, which is returned to the Google Glass (i.e., the user) and also stored on a central server along with the RDT image, QR code, and other related information (e.g., demographic data). The same server also provides a dynamic spatiotemporal map and real-time statistics for uploaded RDT results, accessible through Internet browsers. We tested this Google Glass-based diagnostic platform using qualitative (i.e., yes/no) human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) tests. For the quantitative RDTs, we measured activated tests at various concentrations ranging from 0 to 200 ng/mL for free and total PSA. This wearable RDT reader platform running on Google Glass combines a hands-free sensing and image capture interface with powerful servers running our custom image processing code, and it can be quite useful for real-time spatiotemporal tracking of various diseases and personal medical conditions, providing a valuable tool for epidemiology and mobile health.

  6. Immunochromatographic Diagnostic Test Analysis Using Google Glass

    PubMed Central

    2014-01-01

    We demonstrate a Google Glass-based rapid diagnostic test (RDT) reader platform capable of qualitative and quantitative measurements of various lateral flow immunochromatographic assays and similar biomedical diagnostic tests. Using a custom-written Glass application and without any external hardware attachments, one or more RDTs labeled with Quick Response (QR) code identifiers are simultaneously imaged using the built-in camera of the Google Glass, through a hands-free and voice-controlled interface, and digitally transmitted to a server for processing. The acquired JPEG images are automatically processed to locate all the RDTs and, for each RDT, to produce a quantitative diagnostic result, which is returned to the Google Glass (i.e., the user) and also stored on a central server along with the RDT image, QR code, and other related information (e.g., demographic data). The same server also provides a dynamic spatiotemporal map and real-time statistics for uploaded RDT results, accessible through Internet browsers. We tested this Google Glass-based diagnostic platform using qualitative (i.e., yes/no) human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) tests. For the quantitative RDTs, we measured activated tests at various concentrations ranging from 0 to 200 ng/mL for free and total PSA. This wearable RDT reader platform running on Google Glass combines a hands-free sensing and image capture interface with powerful servers running our custom image processing code, and it can be quite useful for real-time spatiotemporal tracking of various diseases and personal medical conditions, providing a valuable tool for epidemiology and mobile health. PMID:24571349

  7. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper discusses the properties of watermarking medical images. We also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the region of non-interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the extracted watermark will match the SHA-256 hash of the original image. The results show that the embedded watermark is robust to JPEG compression down to image quality 60 (~91% compressed).
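
    A minimal illustration of the embedding idea (not the paper's full reversible scheme, and not robust to JPEG on its own): hash the image with the carrier block's low bits zeroed, then hide the 256 hash bits in that block. Since an 8x8 block has only 64 pixels, this sketch assumes the four low bits of each pixel are used.

```python
import hashlib
import numpy as np

def embed_hash(img, r0, c0):
    # Zero the low nibble of the 8x8 carrier block at (r0, c0), hash the
    # whole image, then hide the 256 hash bits as 64 nibbles in that block.
    out = img.copy()
    out[r0:r0 + 8, c0:c0 + 8] &= 0xF0
    digest = hashlib.sha256(out.tobytes()).digest()          # 32 bytes = 256 bits
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    nibbles = (bits.reshape(64, 4) * [8, 4, 2, 1]).sum(axis=1).astype(np.uint8)
    out[r0:r0 + 8, c0:c0 + 8] |= nibbles.reshape(8, 8)
    return out

def verify_hash(img, r0, c0):
    # Re-zero the nibble, recompute the hash, and compare with the
    # extracted bits; any pixel change elsewhere breaks the match.
    probe = img.copy()
    extracted = probe[r0:r0 + 8, c0:c0 + 8] & 0x0F
    probe[r0:r0 + 8, c0:c0 + 8] &= 0xF0
    digest = hashlib.sha256(probe.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    nibbles = (bits.reshape(64, 4) * [8, 4, 2, 1]).sum(axis=1).astype(np.uint8)
    return np.array_equal(extracted.ravel(), nibbles)
```

    The paper's additional machinery (reversibility, placement in the RONI, and survival of JPEG requantization) is omitted here.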

  8. The JPEG XT suite of standards: status and future plans

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Bruylants, Tim; Schelkens, Peter; Ebrahimi, Touradj

    2015-09-01

    The JPEG standard has known an enormous market adoption. Daily, billions of pictures are created, stored and exchanged in this format. The JPEG committee acknowledges this success and spends continued efforts in maintaining and expanding the standard specifications. JPEG XT is a standardization effort targeting the extension of the JPEG features by enabling support for high dynamic range imaging, lossless and near-lossless coding, and alpha channel coding, while also guaranteeing backward and forward compatibility with the JPEG legacy format. This paper gives an overview of the current status of the JPEG XT standards suite. It discusses the JPEG legacy specification, and details how higher dynamic range support is facilitated both for integer and floating-point color representations. The paper shows how JPEG XT's support for lossless and near-lossless coding of low and high dynamic range images is achieved in combination with backward compatibility to JPEG legacy. In addition, the extensible boxed-based JPEG XT file format on which all following and future extensions of JPEG will be based is introduced. This paper also details how the lossy and lossless representations of alpha channels are supported to allow coding transparency information and arbitrarily shaped images. Finally, we conclude by giving prospects on upcoming JPEG standardization initiative JPEG Privacy & Security, and a number of other possible extensions in JPEG XT.

  9. Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography

    NASA Astrophysics Data System (ADS)

    Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

    2006-03-01

    Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with enlarged medical image databases and rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for Digital Imaging and Communications in Medicine (DICOM). High compression ratios are considered useful for medical imagery. Therefore, this study evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured by receiver operating characteristic (ROC) analysis; the ROC curves are used to compare the diagnostic performance of the reconstructed images. The analysis enables a comparison of the compression ratios achievable with JPEG and JPEG2000 for 3-D US images, and the results suggest possible bit rates for 3-D breast US images.
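
    The ROC comparison can be reduced to comparing areas under the curve. A minimal sketch, using the Mann-Whitney rank formulation of AUC on hypothetical CAD scores (the study's actual data are not reproduced here):

```python
import numpy as np

def auc(pos_scores, neg_scores):
    # Area under the ROC curve via the Mann-Whitney statistic: the
    # probability that a random positive (tumor) case outranks a
    # random negative case, ties counted as one half.
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical CAD outputs on uncompressed vs. heavily compressed data sets.
auc_orig = auc([0.9, 0.8, 0.75, 0.6], [0.4, 0.3, 0.55, 0.2])
auc_comp = auc([0.7, 0.65, 0.8, 0.5], [0.45, 0.6, 0.3, 0.55])
```

    A drop in AUC at a given compression ratio is the signal that diagnostic performance has degraded.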

  10. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the 1st digits (1-9) of natural data, and has since been applied in accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG-compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the 1st-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which measures the deviation between the empirical probabilities and Benford's Law. Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the 1st-digit probabilities and divergences among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the differences among them could be used for further analysis to estimate unknown JPEG2000 compression rates.
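
    A plausible reading of the divergence factor (the abstract does not give the exact formula) is the mean squared deviation between the empirical first-digit distribution and Benford's law:

```python
import numpy as np

def first_digit_probs(coeffs):
    # Empirical probabilities of the first digits (1-9) of nonzero coefficients.
    mags = np.abs(coeffs[coeffs != 0]).astype(float)
    digits = (mags / 10.0 ** np.floor(np.log10(mags))).astype(int)
    return np.bincount(digits, minlength=10)[1:10] / digits.size

def benford_divergence(probs):
    # Assumed form of the divergence factor: mean squared deviation
    # from Benford's law (zero means perfect conformance).
    d = np.arange(1, 10)
    return float(np.mean((probs - np.log10(1.0 + 1.0 / d)) ** 2))

# Log-normally distributed magnitudes span many decades and are
# therefore approximately Benford-distributed.
coeffs = np.random.default_rng(0).lognormal(mean=0.0, sigma=3.0, size=100_000)
```

    Uncompressed DWT coefficients behave like the log-normal example (small divergence); compression distorts the distribution and raises the divergence, which is what enables rate estimation.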

  11. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54MB images and compressed to five different degrees. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.
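
    The objective measure (i) is straightforward to state in code; this is a generic RMS-error sketch, not the study's software:

```python
import numpy as np

def rms_error(original, compressed):
    # Root-mean-square error between two images: the objective quality
    # measure used to compare wavelet and JPEG compression above.
    a = np.asarray(original, dtype=float)
    b = np.asarray(compressed, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```
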

  12. Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2016-01-01

    Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low-bit-rate JPEG-formatted color images may allow for greater compression while maintaining equivalent quality at a smaller file size or bit rate. For RGB, an image is decomposed into three color bands: red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt into a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.
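
    The per-band decomposition described above can be sketched as follows; since the ESAP filter itself is not reproduced here, a simple 3x3 box blur stands in for the per-band postfilter:

```python
import numpy as np

def smooth_band(band):
    # Stand-in for the per-band ESAP postfilter: a 3x3 box blur with
    # edge padding (illustrative only).
    h, w = band.shape
    p = np.pad(band.astype(float), 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def filter_rgb(img):
    # Decompose into R, G, B bands, filter each band independently,
    # then rebuild a single color image, as the abstract describes.
    return np.stack([smooth_band(img[..., c]) for c in range(3)], axis=-1)
```
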

  13. Estimation of color filter array data from JPEG images for improved demosaicking

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Reeves, Stanley J.

    2006-02-01

    On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.

  14. Steganalysis based on JPEG compatibility

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2001-11-01

    In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression with a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding using images that have been originally stored in the JPEG format as cover-images for spatial-domain steganography.
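
    The core compatibility test can be sketched as a fixed-point check: a block that truly came from JPEG decompression with quantization matrix Q should survive a recompress-decompress cycle unchanged, while even a single LSB flip typically breaks this. A simplified single-block sketch (uniform Q, no chroma handling):

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix.
k = np.arange(8)
C = np.sqrt(2.0 / 8.0) * np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / 16.0)
C[0] /= np.sqrt(2.0)

def jpeg_roundtrip(block, Q):
    # One compress-decompress cycle of an 8x8 block with quantization matrix Q.
    coeffs = C @ (block - 128.0) @ C.T
    dequantized = np.round(coeffs / Q) * Q
    return np.clip(np.round(C.T @ dequantized @ C) + 128.0, 0.0, 255.0)

def is_jpeg_compatible(block, Q):
    # A block compatible with prior JPEG compression under Q is a fixed
    # point of the round trip; modifying even one LSB typically breaks this.
    return np.array_equal(jpeg_roundtrip(block, Q), block)
```

    The paper's method additionally searches over candidate quantization matrices and handles rounding edge cases; this sketch shows only the fragile-fingerprint principle.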

  15. Non-parametric adaptative JPEG fragments carving

    NASA Astrophysics Data System (ADS)

    Amrouche, Sabrina Cherifa; Salamani, Dalila

    2018-04-01

    The most challenging JPEG recovery tasks arise when the file header is missing. In this paper we propose to use a two-layer machine learning model to restore headerless JPEG images. We first build a classifier able to identify the structural properties of the images/fragments and then use an autoencoder (AE) to learn the fragment features for header prediction. We define a JPEG universal header, and the remaining free image parameters (height, width) are predicted with a gradient boosting classifier. Our approach resulted in 90% accuracy using the manually defined features and 78% accuracy using the AE features.

  16. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    NASA Astrophysics Data System (ADS)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

    As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy) it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images of various qualities of JPEG compression and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. The negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes and these may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.

  17. A block-based JPEG-LS compression technique with lossless region of interest

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, owing to its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion, because of the context dependence of the algorithm, and it has a low compression rate compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. We then adopt a block-based image compression technique to limit the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks that include all or part of the region of interest (ROI) and JPEG-LS near-lossless compression to the image blocks in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.

  18. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.

  19. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that it survives lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
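
    One standard way to realize embedding that survives JPEG is quantization index modulation on DCT coefficients: snap each coefficient to an even or odd multiple of the anticipated quantization step. This is a generic QIM sketch, not necessarily the authors' exact embedding rule:

```python
import numpy as np

def embed_bit(coeff, bit, q):
    # Quantization index modulation: move the DCT coefficient to the
    # nearest even (bit 0) or odd (bit 1) multiple of the anticipated
    # JPEG quantization step q.
    m = int(np.round(coeff / q))
    if m % 2 != bit:
        m += 1 if coeff >= m * q else -1
    return m * q

def extract_bit(coeff, q):
    return int(np.round(coeff / q)) % 2

def jpeg_quantize(coeff, q):
    # What JPEG itself does to the coefficient: since the embedded value
    # is already a multiple of q, the parity (the bit) survives.
    return np.round(coeff / q) * q
```

    Because the embedder anticipates the quality factor (and hence q), the bit allocation can be planned deterministically, which is the spirit of the scheme described above.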

  20. A comparison of the fractal and JPEG algorithms

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Shahshahani, M.

    1991-01-01

    A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.

  1. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time and aperture for low-light image capture results in noise amplification, motion blur and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
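
    The sigmoidal boosting and fusion steps can be sketched as below, with illustrative parameter choices (the paper's exact curve and weights are not given in the abstract); pixel values are assumed normalized to [0, 1]:

```python
import numpy as np

def sigmoid_boost(img, gain=10.0, mid=0.5):
    # Sigmoidal tone boost of a short-exposure frame (illustrative curve).
    return 1.0 / (1.0 + np.exp(-gain * (img - mid)))

def fuse(short_img, long_img, w_short=0.5):
    # Blend the boosted short exposure (sharp, dark) with the long
    # exposure (bright, high SNR); where the long exposure is saturated,
    # fall back entirely on the boosted short exposure.
    boosted = sigmoid_boost(short_img)
    blended = w_short * boosted + (1.0 - w_short) * long_img
    return np.where(long_img >= 0.99, boosted, blended)
```

    In the actual algorithm these operations run blockwise in the JPEG domain, one macroblock at a time, which is what keeps the memory footprint so small.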

  2. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

    The increasing use of technologies such as CT and MRI, along with continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that the atypical quality degradation is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality in the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm built on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
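
    The thresholding idea behind the segmentation can be sketched as follows (a crude row-wise fill standing in for the paper's morphological operators; the bone threshold is an assumed Hounsfield-like value):

```python
import numpy as np

def segment_interior(ct, bone_threshold=300.0):
    # Threshold bright skull bone, then mark everything between the first
    # and last bone pixel of each row as interior -- the region carrying
    # the diagnostic information that should receive the bit budget.
    bone = ct >= bone_threshold
    interior = np.zeros_like(bone)
    for r in range(ct.shape[0]):
        cols = np.where(bone[r])[0]
        if cols.size >= 2:
            interior[r, cols[0] + 1:cols[-1]] = True
    return bone, interior & ~bone
```

    With such masks, the encoder can spend fewer bits on the bone/background edges and more on the interior, which is the improvement the study measures with PSNR and the SSIM family.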

  3. Implementation of remote monitoring and managing switches

    NASA Astrophysics Data System (ADS)

    Leng, Junmin; Fu, Guo

    2010-12-01

    In order to strengthen the safety performance of the network and provide great convenience and efficiency for operators and managers, a system for remotely monitoring and managing switches has been designed and implemented using advanced network technology and existing network resources. A fast Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and supports a number of protocols. The Motion-JPEG image compression algorithm is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and managing system is designed and implemented according to the current infrastructure of the network and switches, and the control and administrative software is planned accordingly. The dynamic-webpage Java Server Pages (JSP) development platform is utilized in the system, and a SQL (Structured Query Language) Server database is used to store and access image information, network messages and user data. The reliability and security of the system are further strengthened by access control. The software is cross-platform, so multiple operating systems (UNIX, Linux and Windows) are supported. The application of the system can greatly reduce manpower costs, and problems can be found and solved quickly.

  4. Progressive data transmission for anatomical landmark detection in a cloud.

    PubMed

    Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D

    2012-01-01

    In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer-aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least 30 times bandwidth reduction while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.

  5. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG-compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components from JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification. Multi-class support vector machines (SVMs) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
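
    A single-direction version of the feature extraction can be sketched as follows (horizontal differences only; the paper uses four directions and both the Y and Cb components):

```python
import numpy as np

def transition_matrix(component, T=3):
    # Horizontal difference array, thresholded to [-T, T], followed by
    # the one-step Markov transition probability matrix
    # P[i, j] = Pr(next = j | current = i); its (2T+1)^2 entries are
    # used directly as classification features.
    diff = np.clip(component[:, :-1].astype(int) - component[:, 1:].astype(int), -T, T)
    cur = diff[:, :-1].ravel() + T
    nxt = diff[:, 1:].ravel() + T
    counts = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(counts, (cur, nxt), 1)
    return counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
```

    Concatenating such matrices over the four directions and the two components yields the feature vector fed to the multi-class SVM.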

  6. Detection of shifted double JPEG compression by an adaptive DCT coefficient model

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua

    2014-12-01

    In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.

  7. Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding

    NASA Astrophysics Data System (ADS)

    Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin

    We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is also less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images, and the results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.

  8. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    NASA Astrophysics Data System (ADS)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG compression is lossy and yields small files, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Bitmap (BMP) images are preferred in image processing over other formats because a BMP image contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file is converted into BMP format. Nevertheless, there are many things that can influence the corruption of a BMP image, such as changes of image size that make the file non-viewable. In this paper, the experiment indicates that the size of a BMP file influences changes in the image itself through three conditions: deletion, replacement and insertion. From the experiment, we learned that by correcting the file size, a viewable (though partial) file can be produced; it can then be investigated further to identify the corruption point.
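
    One concrete instance of "correcting the file size": the BMP file header stores the total file size as a little-endian uint32 at byte offset 2, so a mismatched value can be rewritten to match the actual byte count. A minimal sketch:

```python
import struct

def fix_bmp_size(data: bytes) -> bytes:
    # Rewrite the bfSize field (little-endian uint32 at offset 2 of the
    # BMP file header) to the actual length of the byte stream, which can
    # make a truncated or padded file viewable again.
    if len(data) < 6 or data[:2] != b'BM':
        raise ValueError('not a BMP file')
    return data[:2] + struct.pack('<I', len(data)) + data[6:]
```

    A real recovery tool would also sanity-check the pixel-data offset and the DIB header dimensions, but this field is the one the experiment manipulates.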

  9. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply the DCT to each block; (2) apply a high-frequency minimization method to the AC coefficients, reducing each block by 2/3 and yielding a minimized array; (3) build a look-up table of probability data to enable recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm reconstruct all high-frequency AC coefficients, while the DC components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction, and compared it with JPEG and JPEG2000 through 2D and 3D RMSE. The results demonstrate that the proposed compression method is perceptually superior to JPEG with quality equivalent to JPEG2000; concerning 3D surface reconstruction from images, it is superior to both JPEG and JPEG2000.
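Step (4) above, the delta (differential) operator on the DC components, can be sketched as follows (the helper names are illustrative, not from the paper); the resulting deltas are small and repetitive, which is what makes the subsequent arithmetic coding effective:

```python
def delta_encode(dc_components):
    """Replace each DC value with its difference from the previous one.

    The first value is kept as-is; the small differences compress better
    under arithmetic coding than the raw DC magnitudes.
    """
    out = [dc_components[0]]
    for prev, cur in zip(dc_components, dc_components[1:]):
        out.append(cur - prev)
    return out

def delta_decode(deltas):
    """Invert delta_encode by a running sum."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

dc = [1024, 1030, 1029, 1018, 1021]   # DC values of neighbouring blocks
enc = delta_encode(dc)                # [1024, 6, -1, -11, 3]
assert delta_decode(enc) == dc
```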

  10. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. An unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have previously been compressed. To detect potential JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local geometrical structure of the image. Since compression can alter the structure information, the tetrolet covering indexes may change if a compression is performed on the test image; such changes provide valuable clues about the image's compression history. Specifically, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block by block to determine whether the tetrolet covering index of each 4×4 block differs between them. The percentages of changed tetrolet covering indexes corresponding to the quality factors (from low to high) form the p-curve, whose local minimum may indicate the potential compression. Our experimental results demonstrate the advantage of our method in detecting high-quality JPEG compression, even at the highest quality factors such as 98, 99, or 100 of standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.

  11. Influence of image compression on the interpretation of spectral-domain optical coherence tomography in exudative age-related macular degeneration

    PubMed Central

    Kim, J H; Kang, S W; Kim, J-r; Chang, Y S

    2014-01-01

    Purpose To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. The horizontal OCT scans centered at the center of the fovea were conducted using spectral-domain OCT. The images were exported to Tag Image File Format (TIFF) and 100, 75, 50, 25 and 10% quality of Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio–scleral interface was not clearly identified in many images with a high degree of compression. In images with 25 and 10% quality of JPEG, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions In OCT images of eyes with AMD, 50% of the quality of the JPEG format would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012

  12. Visually Lossless JPEG 2000 for Remote Image Browsing

    PubMed Central

    Oh, Han; Bilgin, Ali; Marcellin, Michael

    2017-01-01

    Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112

  13. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms such as JPEG and H.264/AVC. At high bit rates, however, sampling-based schemes introduce more distortion, and the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The algorithm adaptively selects between CSI-based modified JPEG and standard JPEG for a given target bit rate using the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm performs better at low bit rates while maintaining the same performance at high bit rates.

  14. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive postfiltering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  15. Embedding intensity image into a binary hologram with strong noise resistant capability

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-11-01

A digital hologram can be employed as a host image in image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error-diffusion or bit-truncation coding methods. However, the fidelity of the watermark image retrieved from a binary hologram is generally not satisfactory, especially when the hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error-correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image is superior to that of state-of-the-art methods.
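The role of the error-correcting code can be illustrated with a far simpler code than BCH; the sketch below uses a 3x repetition code with majority-vote decoding purely as a stand-in (BCH encoding and the hologram embedding itself are not reproduced):

```python
def rep3_encode(bits):
    """Repeat each payload bit three times before embedding."""
    return [b for bit in bits for b in (bit, bit, bit)]

def rep3_decode(bits):
    """Majority-vote each triple; corrects one flipped bit per triple."""
    out = []
    for i in range(0, len(bits), 3):
        out.append(1 if sum(bits[i:i + 3]) >= 2 else 0)
    return out

payload = [1, 0, 1, 1]
coded = rep3_encode(payload)
coded[1] ^= 1          # simulate noise contaminating the binary hologram
coded[9] ^= 1
assert rep3_decode(coded) == payload   # payload survives the bit flips
```

BCH codes achieve the same effect with far less redundancy, which is why the paper pairs them with the already-compressed JPEG bit stream.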

  16. Visualization of JPEG Metadata

    NASA Astrophysics Data System (ADS)

    Malik Mohamad, Kamaruddin; Deris, Mustafa Mat

There is much more information embedded in a JPEG file than just the graphics. Visualizing its metadata benefits digital forensic investigators, who must view embedded data, including data in corrupted images where no graphics can be displayed, to collect evidence in cases such as child pornography or steganography. Tools such as metadata readers, editors, and extraction tools are already available, but they mostly focus on the attribute information in the JPEG Exif segment; none consolidates a marker summary, the header structure, the Huffman tables, and the quantization tables in a single program. In this paper, metadata visualization is performed by a program that summarizes all existing markers, the header structure, the Huffman tables, and the quantization tables of a JPEG file. The results show that visualizing the metadata makes the hidden information within a JPEG file easier to view.
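The consolidation described above starts from walking the JPEG marker segments. A minimal, stdlib-only sketch of such a walker follows (the marker-name table and the synthetic test stream are illustrative; a real parser must also handle fill bytes, restart markers, and the entropy-coded data that follows SOS):

```python
def list_markers(data):
    """Walk JPEG marker segments up to SOS and return (name, offset, length).

    Standalone markers (SOI, EOI) have no length field; every other segment
    carries a 2-byte big-endian length that includes the length bytes.
    """
    names = {0xD8: "SOI", 0xE0: "APP0", 0xDB: "DQT", 0xC0: "SOF0",
             0xC4: "DHT", 0xDA: "SOS", 0xD9: "EOI"}
    out, pos = [], 0
    while pos + 1 < len(data):
        assert data[pos] == 0xFF, "expected marker prefix"
        code = data[pos + 1]
        name = names.get(code, "0xFF%02X" % code)
        if code in (0xD8, 0xD9):              # standalone: no length field
            out.append((name, pos, 2))
            pos += 2
        else:                                  # segment with length field
            seglen = int.from_bytes(data[pos + 2:pos + 4], "big")
            out.append((name, pos, 2 + seglen))
            pos += 2 + seglen
        if code == 0xDA:                       # entropy-coded data starts here
            break
    return out

# A tiny synthetic marker stream: SOI, a 4-byte APP0 stub, EOI.
blob = bytes([0xFF, 0xD8, 0xFF, 0xE0, 0x00, 0x04, 0x4A, 0x46, 0xFF, 0xD9])
print(list_markers(blob))   # [('SOI', 0, 2), ('APP0', 2, 6), ('EOI', 8, 2)]
```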

  17. 77 FR 59692 - 2014 Diversity Immigrant Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-28

    ... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...

  18. Block selective redaction for minimizing loss during de-identification of burned in text in irreversibly compressed JPEG medical images.

    PubMed

    Clunie, David A; Gebow, Dan

    2015-01-01

    Deidentification of medical images requires attention to both header information as well as the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified is stored in a compressed form, traditionally it is decompressed, identifying text is redacted, and if necessary, pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently, hence, redaction can selectively be confined only to those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images is described. The process can be applied either to standalone JPEG images or JPEG bit streams encapsulated in other formats, which in the case of medical images, is usually DICOM.
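Because baseline lossy JPEG codes the image in small independent blocks, redaction only needs to touch the blocks that the burned-in text rectangle overlaps. A minimal sketch of that block-selection step (pure Python; the 8-pixel block size and the coordinates are illustrative, and real MCUs may be 16x8 or 16x16 under chroma subsampling):

```python
def blocks_to_redact(rect, block=8):
    """Return (block_row, block_col) indices of all coding blocks that a
    pixel rectangle (x, y, width, height) overlaps; only these blocks need
    re-encoding, while the rest of the bit stream is copied verbatim."""
    x, y, w, h = rect
    first_col, last_col = x // block, (x + w - 1) // block
    first_row, last_row = y // block, (y + h - 1) // block
    return [(r, c)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# A 20x10-pixel text banner at (5, 3) touches a 2x4 grid of 8x8 blocks.
print(blocks_to_redact((5, 3, 20, 10)))
```

Leaving all other blocks untouched is what avoids the additional generation loss that full recompression would introduce.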

  19. Privacy protection in surveillance systems based on JPEG DCT baseline compression and spectral domain watermarking

    NASA Astrophysics Data System (ADS)

    Sablik, Thomas; Velten, Jörg; Kummert, Anton

    2015-03-01

A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected: the implemented detection method uses Haar cascades to detect faces, integral images to speed up the computation, and merges multiple detections of one face. Succeeding steps embed the data into the image as part of JPEG compression using spectral-domain methods and protect the area of privacy. The embedding process is integrated into and adapted to JPEG compression: a spread-spectrum watermarking method embeds the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness, and the performance of the method on tampered images is presented.

  20. Applications of the JPEG standard in a medical environment

    NASA Astrophysics Data System (ADS)

    Wittenberg, Ulrich

    1993-10-01

JPEG is a very versatile image coding and compression standard for single images. Medical images make higher demands on image quality and precision than the usual 'pretty pictures'. In this paper the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12-bit precision allowed for the DCT modes; the performance of the spatial predictors is investigated. From the user's point of view, the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective waiting time and thus the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care, because repeated lossy coding and decoding degrades image quality; the amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as the creation date or details of the imaging modality, so it will serve as an embedded coding format within standards like TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to meet the requirements of the medical community.

  1. Improved JPEG anti-forensics with better image visual quality and forensic undetectability.

    PubMed

    Singh, Gurinder; Singh, Kulbir

    2017-08-01

There is an immediate need to validate the authenticity of digital images because powerful image processing tools can easily manipulate digital image information without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression, so an anti-forensic technique is required to evaluate the competency of JPEG forensic detectors. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain is reduced significantly by the proposed denoising operation. Two denoising algorithms are proposed: one based on a constrained minimization of the total-variation energy, and the other on a normalized weighting function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain, and a decalibration operation is then applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform existing state-of-the-art techniques, achieving an enhanced trade-off between image visual quality and forensic undetectability, but at high computational cost. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

Embedded systems are applied in many fields, from households to industrial sites, and user-interface technology with simple on-screen display is increasingly common. With the high penetration rate of the Internet, user demands are growing and the range of applicable fields is widening, so demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. A USB camera attached to the USB host port of the embedded Linux board provides real-time broadcasting of video images on the Internet. All input images from the camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and consecutive frames from the camera are compared to measure a displacement vector, using a block-matching algorithm together with edge detection for speed. The displacement vector then drives the pan/tilt motors through an RS-232 serial cable. The embedded board uses the S3C2410 MPU with the ARM920T core from Samsung; the embedded Linux kernel was ported to it and a root file system mounted. The stored images are sent to the client PC through a web browser, using the networking functions of Linux and a program built on the TCP/IP protocol.
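The displacement-vector measurement, block matching between consecutive frames, can be sketched with a sum-of-absolute-differences (SAD) search (pure Python on nested lists; the function names, the 4x4 block, and the +/-2 search window are illustrative, and the real system would run this in C on the embedded board):

```python
def sad(a, b, ax, ay, bx, by, size):
    """Sum of absolute differences between two size x size blocks."""
    return sum(abs(a[ay + j][ax + i] - b[by + j][bx + i])
               for j in range(size) for i in range(size))

def best_displacement(prev, cur, bx, by, size=4, search=2):
    """Find the (dx, dy) within +/-search for which the block at (bx, by)
    of the previous frame best matches the current frame."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            nx, ny = bx + dx, by + dy
            if (0 <= nx and 0 <= ny and
                    nx + size <= len(cur[0]) and ny + size <= len(cur)):
                cost = sad(prev, cur, bx, by, nx, ny, size)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best

prev = [[0] * 10 for _ in range(10)]
cur = [[0] * 10 for _ in range(10)]
for j in range(2, 6):
    for i in range(2, 6):
        prev[j][i] = 9      # bright square at (2, 2) ...
        cur[j][i + 1] = 9   # ... shifted right by one pixel
print(best_displacement(prev, cur, 2, 2))   # (1, 0)
```

The resulting (dx, dy) is what would be translated into pan/tilt motor commands over the serial line.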

  3. Color Facsimile.

    DTIC Science & Technology

    1995-02-01

modification of existing JPEG compression and decompression software available from Independent JPEG Users Group to process CIELAB color images and to use...externally specified Huffman tables. In addition a conversion program was written to convert CIELAB color space images to red, green, blue color space

  4. File Management In Space

    NASA Technical Reports Server (NTRS)

    Critchfield, Anna R.; Zepp, Robert H.

    2000-01-01

We propose that the user interact with the spacecraft as if the spacecraft were a file server, so that the user can select and receive data as files in standard formats (e.g., tables, or images such as JPEG) via the Internet. Internet technology will be used end-to-end from the spacecraft to authorized users, such as the flight operations team and project scientists. The proposed solution includes a ground system and spacecraft architecture, mission operations scenarios, and an implementation roadmap showing migration from current practice to the future, where distributed users request and receive files of spacecraft data from archives or spacecraft with equal ease. This solution will provide ground support personnel and scientists easy, direct, secure access to their authorized data without cumbersome processing, and can be extended to support autonomous communications with the spacecraft.

  5. 78 FR 59743 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2015) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-27

... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...

  6. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  7. Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.

    PubMed

    Punys, Vytenis; Maknickas, Ramunas

    2011-01-01

Big virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification based on image analysis might be faster if performed on the compressed data (approximately 20 times smaller than the original), which represent the coefficients of the wavelet transform. An analysis of edge detection without the inverse wavelet transform is presented in the paper. Two edge detection methods suitable for the JPEG2000 bi-orthogonal wavelets are proposed; the methods are adjusted according to the calculated parameters of a sigmoid edge model. The results of the model analysis indicate which method is more suitable for a given bi-orthogonal wavelet.

  8. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    PubMed

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on qualitative observer assessment of image quality with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). On the subjective index, compression greater than 60:1 significantly reduced image quality. On the quantitative index, the JPEG 2000 images had lower quality at all compression ratios than the original TIFF images. There was excellent correlation (R² > 0.92) between the qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  9. Switching theory-based steganographic system for JPEG images

    NASA Astrophysics Data System (ADS)

    Cherukuri, Ravindranath C.; Agaian, Sos S.

    2007-04-01

Cellular communications constitute a significant portion of the global telecommunications market, so the need for secure communication over mobile platforms has increased exponentially. Steganography, the art of hiding critical data in an innocuous signal, answers this need. JPEG is one of the most commonly used formats for storing and transmitting images on the web, and pictures captured by mobile cameras are mostly in JPEG format. In this article, we introduce a switching-theory-based steganographic system for JPEG images that is applicable to both mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective on a subset of these coefficients but become ineffective when applied to all of them; we therefore process each set of AC coefficients within a different framework, enhancing the performance of the approach. The proposed system offers high capacity and embedding efficiency simultaneously while withstanding simple statistical attacks, and the embedded information can be retrieved without prior knowledge of the cover image. Simulation results demonstrate an improved embedding capacity over existing algorithms while maintaining a high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
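The underlying operation, hiding bits in the least-significant bits of nonzero quantized AC coefficients while skipping zeros so the run-length structure of the block survives, can be sketched as follows; this is a generic JPEG-domain LSB embedder for illustration, not the authors' switching framework:

```python
def embed(coeffs, bits):
    """Hide payload bits in the LSBs of nonzero quantized AC coefficients.

    Index 0 (the DC coefficient) and zero-valued coefficients are skipped
    so the zero runs of the JPEG block are preserved.  A real embedder must
    also avoid turning a +/-1 coefficient into 0; the sample block below is
    chosen so that this case does not arise.
    """
    out, k = list(coeffs), 0
    for i in range(1, len(out)):
        if k == len(bits):
            break
        if out[i] != 0:
            sign = 1 if out[i] > 0 else -1
            out[i] = sign * ((abs(out[i]) & ~1) | bits[k])  # set magnitude LSB
            k += 1
    return out

def extract(coeffs, n):
    """Read n payload bits back from the nonzero AC coefficients."""
    bits = []
    for c in coeffs[1:]:
        if len(bits) == n:
            break
        if c != 0:
            bits.append(abs(c) & 1)
    return bits

block = [52, -3, 5, 0, 0, 2, -1, 0]   # DC followed by quantized AC values
stego = embed(block, [1, 0, 1])
assert extract(stego, 3) == [1, 0, 1]
```

The switching idea in the paper amounts to choosing a different embedding rule per energy class of coefficients instead of this single uniform rule.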

  10. 75 FR 60846 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2012) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-01

    ... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...

  11. The Helioviewer Project: Solar Data Visualization and Exploration

    NASA Astrophysics Data System (ADS)

    Hughitt, V. Keith; Ireland, J.; Müller, D.; García Ortiz, J.; Dimitoglou, G.; Fleck, B.

    2011-05-01

    SDO has only been operating a little over a year, but in that short time it has already transmitted hundreds of terabytes of data, making it impossible for data providers to maintain a complete archive of data online. By storing an extremely efficiently compressed subset of the data, however, the Helioviewer project has been able to maintain a continuous record of high-quality SDO images starting from soon after the commissioning phase. The Helioviewer project was not designed to deal with SDO alone, however, and continues to add support for new types of data, the most recent of which are STEREO EUVI and COR1/COR2 images. In addition to adding support for new types of data, improvements have been made to both the server-side and client-side products that are part of the project. A new open-source JPEG2000 (JPIP) streaming server has been developed offering a vastly more flexible and reliable backend for the Java/OpenGL application JHelioviewer. Meanwhile the web front-end, Helioviewer.org, has also made great strides both in improving reliability, and also in adding new features such as the ability to create and share movies on YouTube. Helioviewer users are creating nearly two thousand movies a day from the over six million images that are available to them, and that number continues to grow each day. We provide an overview of recent progress with the various Helioviewer Project components and discuss plans for future development.

  12. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. Among the many types of compression techniques, the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-matrix and the AC-matrix, i.e., the low- and high-frequency matrices, respectively; (2) apply a second-level DCT to the DC-matrix to generate two arrays, a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the outputs of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes the probabilities of all compressed data using a table of data, and then uses a binary search to find the decompressed data inside the table. Thereafter, all decoded DC values are combined with the decoded AC coefficients in one matrix, followed by the inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches, and is compared with JPEG and JPEG2000 through the 2D and 3D root mean square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to reconstruct surface patches in 3D more accurately.
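The DWT stage of step (1) can be illustrated with the simplest wavelet, the one-level Haar transform, on a 1D signal; this is a generic illustration of the low/high band split, not the filters used in the paper:

```python
def haar_1d(signal):
    """One-level Haar DWT: pairwise averages form the low (DC-like) band,
    pairwise half-differences form the high (AC-like) band."""
    low = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    high = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return low, high

def ihaar_1d(low, high):
    """Invert the transform: a = l + h, b = l - h."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

x = [9.0, 7.0, 3.0, 5.0]
low, high = haar_1d(x)        # low = [8.0, 4.0], high = [1.0, -1.0]
assert ihaar_1d(low, high) == x
```

The high band is sparse and small-valued for smooth images, which is what the Minimize-Matrix-Size step and the arithmetic coder exploit.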

  13. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung

    2013-10-15

Purpose: To modify the previously proposed preprocessing technique, which improves the compressibility of computed tomography (CT) images, to cover the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed aiming at only chest CT images, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed using the modified preprocessing technique, and radiologists visually confirmed whether the segmented region covered the body region. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compression, and the percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.
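The core preprocessing idea, replacing pixel values outside the segmented body region with one constant so a reversible coder sees long redundant runs, can be sketched as follows (pure Python on nested lists; the segmentation that produces the mask is the hard part and is not shown):

```python
def mask_background(image, mask, fill=0):
    """Keep pixels where the body mask is True; replace everything else
    with a single constant value to maximize data redundancy before
    reversible JPEG / JPEG2000 compression."""
    return [[pix if inside else fill
             for pix, inside in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

image = [[5, 7],
         [6, 8]]
mask = [[True, False],
        [False, True]]
print(mask_background(image, mask))   # [[5, 0], [0, 8]]
```

Because the operation is applied before a lossless coder, the diagnostic pixel values inside the mask are preserved bit-exactly.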

  14. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization step sizes, and resolution levels, is presented. It produces no false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes, a feature not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
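
    The header-derived descriptor idea can be illustrated as follows. Since no codestream parser is at hand, the zero-bit-plane counts are computed here from raw coefficients, and the code-block size and bit depth are arbitrary assumptions:

```python
import numpy as np

def zero_bitplanes(block, total_planes=8):
    """Number of leading all-zero magnitude bit-planes of a code-block.

    In a real JPEG 2000 codestream this count is parsed from the packet
    headers without any EBCOT decoding; deriving it from raw
    coefficients here is purely for illustration.
    """
    msb = int(np.abs(block).max()).bit_length()
    return total_planes - msb

def descriptor(coeffs, block_size=4):
    """Per-code-block zero-bit-plane counts, used as the identification key."""
    h, w = coeffs.shape
    return [zero_bitplanes(coeffs[y:y + block_size, x:x + block_size])
            for y in range(0, h, block_size)
            for x in range(0, w, block_size)]

rng = np.random.default_rng(1)
coeffs = rng.integers(-100, 100, size=(8, 8))
# Re-quantizing with a coarser step (dropping one magnitude LSB) leaves
# the leading zero bit-planes unchanged, so the two versions still match.
requantized = np.sign(coeffs) * ((np.abs(coeffs) >> 1) << 1)
print(descriptor(coeffs) == descriptor(requantized))
```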

  15. Vulnerability Analysis of HD Photo Image Viewer Applications

    DTIC Science & Technology

    2007-09-01

The format, renamed to HD Photo in November of 2006, is being touted as the successor to the ubiquitous JPEG image format, as well as the eventual de facto standard in the digital photography market, with an associated state-of-the-art compression algorithm "specifically designed [for] all types of continuous tone photographic" images [HDPhotoFeatureSpec].

  16. Lossless Compression of JPEG Coded Photo Collections.

    PubMed

    Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng

    2016-04-06

    The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
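
    The first step, organizing a collection into a pseudo video in the feature domain, can be approximated by a greedy nearest-neighbour chain. This is a simplification of the paper's global prediction-cost minimization, and the feature vectors below are made up:

```python
import numpy as np

def order_images(features):
    """Greedy nearest-neighbour ordering of images in feature space.

    Start from image 0 and repeatedly append the unvisited image whose
    feature vector is closest to the last one, so similar images end up
    adjacent in the pseudo video. The real method minimizes a global
    cost; this greedy chain is only illustrative.
    """
    n = len(features)
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        nxt = min(remaining, key=lambda j: dist[order[-1], j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Four "images" whose features form two similar pairs: (0, 2) and (1, 3).
feats = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 0.0], [5.1, 5.0]])
print(order_images(feats))   # similar images become neighbours
```

With similar images adjacent, the disparity-compensated prediction in the next stage has much less residual to code.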

  17. Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Dong, Jing; Tan, Tieniu

With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm that can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered and unchanged regions respond differently to JPEG compression: the tampered region exhibits stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low, medium, and high, and extract the high-frequency quantization noise for tampered region localization. Post-processing is applied to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.

  18. Toward objective image quality metrics: the AIC Eval Program of the JPEG

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Larabi, Chaker

    2008-08-01

Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach is demonstrated here on the recently proposed HDPhoto format introduced by Microsoft and an SSIM-tuned version of it by one of the authors. We compare these two implementations with JPEG in two variations and a visually and PSNR-optimal JPEG2000 implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.

  19. A threshold-based fixed predictor for JPEG-LS image compression

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors in the vicinity of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges but also diagonal edges. For certain thresholds, the proposed scheme reduces to other existing schemes, so it can also be regarded as an integration of those schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detection in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
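
    For reference, the baseline MED fixed predictor that the proposed threshold-based scheme generalizes can be written directly from its definition (the threshold-based variant itself is not reproduced here):

```python
def med_predict(a, b, c):
    """JPEG-LS fixed predictor (median edge detector, MED).

    a = left neighbour, b = above neighbour, c = upper-left neighbour.
    """
    if c >= max(a, b):
        return min(a, b)      # edge detected: predict the smaller neighbour
    if c <= min(a, b):
        return max(a, b)      # edge detected: predict the larger neighbour
    return a + b - c          # smooth region: planar prediction

print(med_predict(10, 200, 210))   # vertical edge: predicts the left pixel, 10
print(med_predict(100, 110, 105))  # smooth gradient: planar prediction, 105
```

A diagonal edge through this causal neighbourhood satisfies neither edge condition, which is exactly the case where MED mispredicts and the paper's threshold-based detector is meant to help.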

  20. An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process

    NASA Astrophysics Data System (ADS)

    Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre

    2015-02-01

This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000, and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green, and blue), which enables twice as much storage capacity as the traditional black-and-white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations introduced by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel, and 0.3808 bits/pixel for the JPEG, JPEG2000, and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.
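
    The capacity argument, four module states instead of two giving two bits per module, can be sketched as follows. The particular colour assignment is an illustrative assumption, not the CQR specification:

```python
# Two bits per module mapped onto four colours; a binary QR code stores
# only one bit per module, so the same module count holds twice the data.
COLORS = {0b00: "black", 0b01: "red", 0b10: "green", 0b11: "blue"}

def encode_bits(bits):
    """Pack a bit string into 2-bit symbols, one colour per module."""
    assert len(bits) % 2 == 0, "need an even number of bits"
    return [COLORS[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2)]

data = "1101001110"              # 10 payload bits
modules = encode_bits(data)
print(len(data), len(modules))   # 10 bits fit in 5 modules
```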

  1. Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced correlation filters for handheld devices

    NASA Astrophysics Data System (ADS)

    Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-02-01

    Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.

  2. Clinical evaluation of JPEG2000 compression for digital mammography

    NASA Astrophysics Data System (ADS)

    Sung, Min-Mo; Kim, Hee-Joung; Kim, Eun-Kyung; Kwak, Jin-Young; Yoo, Jae-Kyung; Yoo, Hyung-Sik

    2002-06-01

Medical images, such as computed radiography (CR) and digital mammographic images, will require large storage facilities and long transmission times for picture archiving and communication system (PACS) implementation. The American College of Radiology and National Electrical Manufacturers Association (ACR/NEMA) group is planning to adopt the JPEG2000 compression algorithm in the Digital Imaging and Communications in Medicine (DICOM) standard to better utilize medical images. The purpose of the study was to evaluate the compression ratios of JPEG2000 for digital mammographic images using the peak signal-to-noise ratio (PSNR), receiver operating characteristic (ROC) analysis, and the t-test. Traditional statistical quality measures such as PSNR, a commonly used measure for the evaluation of reconstructed images, quantify how the reconstructed image differs from the original by making pixel-by-pixel comparisons. The ability to accurately discriminate diseased cases from normal cases is evaluated using ROC curve analysis; ROC curves can be used to compare the diagnostic performance of two or more reconstructed images. The t-test can also be used to evaluate the subjective image quality of reconstructed images. The results of the t-test suggested that compression ratios using JPEG2000 for digital mammographic images may be as high as 15:1 without visual loss and while preserving significant medical information at a confidence level of 99%, although both the PSNR and ROC analyses suggest that as much as an 80:1 compression ratio can be achieved without affecting clinical diagnostic performance.
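
    The pixel-by-pixel PSNR measure used in this evaluation is standard and can be computed directly (the test image below is synthetic, not mammographic data):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio from a pixel-by-pixel comparison:
    PSNR = 10 * log10(peak^2 / MSE)."""
    diff = original.astype(float) - reconstructed.astype(float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic 8-bit ramp image and a version with a small uniform error.
img = np.tile(np.arange(256, dtype=np.uint8), (16, 1))
noisy = np.clip(img.astype(int) + 5, 0, 255).astype(np.uint8)
print(round(psnr(img, noisy), 2))
```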

  3. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

This work compares the image quality delivered by two popular JPEG2000 programs. The two medical image compression implementations are both based on JPEG2000, but they differ in interface, convenience, speed of computation, and the characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the implementation of the compression algorithm. Do they provide the same quality? The quality of compressed medical images from the two image compression programs, Apollo and JJ2000, was evaluated extensively using objective metrics. The algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1, and the quality of the reconstructed images was then evaluated using five objective metrics. The Spearman rank correlation coefficients between the two programs were measured under every metric. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo implementations is statistically equivalent for medical image compression.
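
    The Spearman rank correlation used to compare the two programs is simply the Pearson correlation of ranks. A minimal no-ties implementation, with made-up metric scores rather than the paper's measurements:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no tied values, which keeps the rank computation trivial."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx ** 2) * np.sum(ry ** 2)))

# Synthetic metric scores of the two programs at increasing compression
# ratios (illustrative numbers only).
apollo = [0.98, 0.95, 0.90, 0.82, 0.70]
jj2000 = [0.97, 0.94, 0.91, 0.80, 0.69]
print(round(spearman(apollo, jj2000), 3))   # → 1.0: identical ranking
```

A coefficient near 1 under every metric is what supports the paper's conclusion that the two programs are statistically equivalent.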

  4. Dynamic code block size for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

Since the standardization of JPEG 2000, it has found its way into many different applications, such as DICOM (Digital Imaging and Communications in Medicine), satellite photography, military surveillance, the digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high-quality real-time compression possible, even in video mode, i.e., Motion JPEG 2000. In this paper, we present a study of the impact on compression of using dynamic code block sizes instead of the fixed code block size specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also describe the advantages of using dynamic code block sizes.

  5. Interhospital network system using the worldwide web and the common gateway interface.

    PubMed

    Oka, A; Harima, Y; Nakano, Y; Tanaka, Y; Watanabe, A; Kihara, H; Sawada, S

    1999-05-01

We constructed an interhospital network system using the World Wide Web (WWW) and the Common Gateway Interface (CGI). Original clinical images are digitized and stored as a database for educational and research purposes. Personal computers (PCs) are available for data handling and browsing. Our system is simple: digitized images are stored on a Unix server machine. Images of important and interesting clinical cases are selected and registered into the image database using CGI. The main image format is 8- or 12-bit Joint Photographic Experts Group (JPEG). Original clinical images are finally stored on CD-ROM using a CD recorder. The image viewer can browse all of the images for one case at once as thumbnail pictures, and image quality can be selected depending on the user's purpose. Using the network system, clinical images of interesting cases can be rapidly transmitted to and discussed with other related hospitals. Data transmission from related hospitals takes 1 to 2 minutes per 500 Kbyte of data; more distant hospitals (e.g., Rakusai Hospital, Kyoto) take about 1 minute more. The mean number of accesses to our image database in a recent 3-month period was 470. There are about 200 cases in total in our image database, acquired over the past 2 years. Our system is useful for communication and image handling between hospitals, and we describe the elements of our system and the image database.

  6. Application of reversible denoising and lifting steps with step skipping to color space transforms for improved lossless compression

    NASA Astrophysics Data System (ADS)

    Starosolski, Roman

    2016-07-01

    Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.

  7. Interband coding extension of the new lossless JPEG standard

    NASA Astrophysics Data System (ADS)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity; at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains becomes possible for specific images in the test set at a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to its basic architecture, retaining its essential simplicity.

  8. Astronomical database and VO-tools of Nikolaev Astronomical Observatory

    NASA Astrophysics Data System (ADS)

    Mazhaev, A. E.; Protsyuk, Yu. I.

    2010-05-01

Results of work in 2006-2009 on the creation of astronomical databases aimed at the development of the Nikolaev Virtual Observatory (NVO) are presented. Results of observations and their reduction, obtained during the whole history of the Nikolaev Astronomical Observatory (NAO), are included in the databases. The databases may be considered a basis for the construction of a data centre. Images of different regions of the celestial sphere have been stored at NAO since 1929. About 8000 photographic plates were obtained during observations in the 20th century. Observations with CCDs have been carried out since 1996. Annually, the telescopes of NAO, using CCD cameras, produce several tens of gigabytes (GB) of data in the form of CCD images and up to 100 GB of video records. At the end of 2008, the volume of accumulated data in the form of CCD images was about 300 GB. Problems of data volume growth are common to astronomy, nuclear physics, and bioinformatics; therefore, the astronomical community needs to use archives, databases, and distributed grid computing to cope with this problem in astronomy. The International Virtual Observatory Alliance (IVOA) was formed in June 2002 with a mission to "enable the international utilization of astronomical archives..." The NVO was created at the NAO website in 2008 and consists of three main parts. The first part contains 27 astrometric stellar catalogues with short descriptions. The files of the catalogues were compiled in the standard VOTable format using eXtensible Markup Language (XML), and they are available for downloading. This is an example of a so-called science-ready product. The VOTable format was developed by the IVOA for the exchange of tabular data. A user may download these catalogues and open them using any standalone application that supports the IVOA standards.
There are several directions of development for such applications, for example, searching catalogues and images, searching and visualising spectra, building spectral energy distributions (SEDs), searching for cross-correlations between objects in different catalogues, statistical processing of large data volumes, etc. The second part includes a database of the observations accumulated at NAO, with access via a browser. The database has a common interface for searching textual and graphical information concerning photographic and CCD observations. The database contains textual information about 7437 plates as well as 2700 preview images in JPEG format with a resolution of 300 DPI (dots per inch), and textual information about 16660 CCD frames as well as 1100 preview images in JPEG format. Missing preview images will be added to the database as soon as they are ready after plate scanning and CCD frame processing. The user has to define the equatorial coordinates of the search centre, a search radius, and a period of observations. He or she may then also specify additional filters, such as any combination of objects given separately for plates and CCD frames, output parameters for plates, and telescope names for CCD observations. Search results are generated in the form of two tables, for photographic and CCD observations. To obtain access to the source images in FITS format with support for the World Coordinate System (WCS), the user has to fill in and submit the electronic form given after the tables. The third part includes a database of observations with access via a standalone application such as Aladin, which has been developed by the Strasbourg Astronomical Data Centre. To obtain access to the database, the user has to perform a series of simple actions, which are described on a corresponding site page.
He or she may then get access to the database via the server selector of Aladin, which has a menu with a wide range of image and catalogue servers located worldwide, including two menu items for photographic and CCD observations of the NVO image server. The user has to define the equatorial coordinates of the search centre and a search radius. The search results are output to the main window of Aladin in textual and graphical form using XML and the Simple Object Access Protocol (SOAP). In this way, the NVO image server is integrated with other astronomical servers using a special configuration file. The user may conveniently request information from many servers using the same server selector of Aladin, although the servers are located in different countries. Aladin has a wide range of special tools for data analysis and handling, including connections to other standalone applications. In conclusion, we should note that the research team of a data centre, which provides the infrastructure for data output to the internet, is responsible for the creation of the corresponding archives. Therefore, each observatory or data centre has to provide access to its archives in accordance with the IVOA standards and resolution B.1 adopted by the IAU XXV General Assembly, titled "Public Access to Astronomical Archives". The research team of NAO copes successfully with this task and continues to develop the NVO. Using our databases and VO-tools, we also take part in the development of the Ukrainian Virtual Observatory (UkrVO). All three main parts of the NVO are used as prototypes for the UkrVO. Informational resources provided by other astronomical institutions in Ukraine will be included in the corresponding databases and VO interfaces.

  9. Privacy enabling technology for video surveillance

    NASA Astrophysics Data System (ADS)

    Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj

    2006-05-01

In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. We specifically address the case of Motion JPEG 2000. Simulation results show that the technique can successfully conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows the amount of distortion introduced to be adjusted. This is achieved with a small impact on coding performance and a negligible increase in computational complexity. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or a 2G/3G mobile phone network. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video to the usage environment of the client.
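
    Sign flipping is self-inverse, which is what makes descrambling with a shared seed possible. A minimal array-level sketch of the idea (the real system selects coefficients inside the Motion JPEG 2000 codestream, not in a plain array):

```python
import numpy as np

def scramble(coeffs, roi_mask, seed):
    """Pseudo-randomly flip the sign of transform coefficients inside a
    region of interest. Flipping is its own inverse, so calling the
    function again with the same seed restores the original."""
    rng = np.random.default_rng(seed)
    flips = rng.integers(0, 2, size=coeffs.shape) * 2 - 1   # +1 or -1
    out = coeffs.copy()
    out[roi_mask] = coeffs[roi_mask] * flips[roi_mask]
    return out

coeffs = np.arange(-8, 8, dtype=float).reshape(4, 4)
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True                         # the region to conceal

protected = scramble(coeffs, roi, seed=42)   # server side
restored = scramble(protected, roi, seed=42) # authorized client side
print(np.array_equal(restored, coeffs))      # → True: fully reversible
```

Because only signs change, coefficient magnitudes, and hence the codestream's rate behaviour, are largely preserved, which matches the paper's claim of a small impact on coding performance.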

  10. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats with corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (Better Portable Graphics), and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method by analyzing IQ measurements versus compression parameters from a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or it may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
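
    The three-step scenario can be sketched with synthetic measurements. The (parameter, PSNR) pairs and the linear model below are illustrative assumptions, not the paper's regression models:

```python
import numpy as np

# Step 1 (simulated): per-codec (compression parameter, PSNR) measurements.
# These numbers are made up for illustration.
measurements = {
    "jpeg":     ([10, 30, 50, 70, 90], [28.0, 33.5, 36.0, 38.5, 42.0]),
    "jpeg2000": ([10, 30, 50, 70, 90], [30.0, 35.0, 37.5, 40.0, 44.0]),
}

# Step 2: fit one regression model per compression method (linear here;
# the paper builds several models per method).
models = {name: np.polyfit(q, iq, deg=1)
          for name, (q, iq) in measurements.items()}

def parameter_for_target(codec, target_iq):
    """Step 3: invert the fitted model to find the compression parameter
    expected to reach a specified IQ (here PSNR in dB)."""
    slope, intercept = models[codec]
    return (target_iq - intercept) / slope

for codec in models:
    print(codec, round(parameter_for_target(codec, 38.0), 1))
```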

  11. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
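
    The reduce, compress/decompress, expand, sharpen pipeline can be sketched end to end as follows. The JPEG step is elided, and the nearest-neighbour interpolation and Laplacian sharpening are simple stand-ins for the techniques the patent actually describes:

```python
import numpy as np

def decimate(img):
    """Decimate by 2 in each dimension (keep every other pixel)."""
    return img[::2, ::2]

def interpolate(img):
    """Nearest-neighbour expansion back to the original array size;
    the patent allows fancier interpolation, this is the simplest."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def sharpen(img, amount=1.0):
    """Laplacian-based edge sharpening to restore perceptual quality
    (one possible 'specific sharpening technique', not the patent's)."""
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    pad = np.pad(img.astype(float), 1, mode="edge")
    lap = sum(k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3))
    return img + amount * lap

# The predefined compression algorithm (e.g. JPEG) would sit between
# decimate() and interpolate(); it is omitted in this sketch.
img = np.outer(np.linspace(0, 255, 8), np.ones(8))
restored = sharpen(interpolate(decimate(img)))
print(img.shape, restored.shape)   # array size is preserved end-to-end
```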

  12. Scan-Based Implementation of JPEG 2000 Extensions

    NASA Technical Reports Server (NTRS)

    Rountree, Janet C.; Webb, Brian N.; Flohr, Thomas J.; Marcellin, Michael W.

    2001-01-01

    JPEG 2000 Part 2 (Extensions) contains a number of technologies that are of potential interest in remote sensing applications. These include arbitrary wavelet transforms, techniques to limit boundary artifacts in tiles, multiple component transforms, and trellis-coded quantization (TCQ). We are investigating the addition of these features to the low-memory (scan-based) implementation of JPEG 2000 Part 1. A scan-based implementation of TCQ has been realized and tested, with a very small performance loss as compared with the full image (frame-based) version. A proposed amendment to JPEG 2000 Part 2 will effect the syntax changes required to make scan-based TCQ compatible with the standard.

  13. Wavelet-based compression of M-FISH images.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R

    2005-05-01

Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of the bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.

  14. Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo

    NASA Astrophysics Data System (ADS)

    Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj

    2007-09-01

This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work extends our comparative studies published at previous SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performance at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000, with its wavelet technology, tends to be the best performer on smooth spatial data; H.264/AVC High Profile, with its advanced spatial prediction modes, tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], we consider neither scalability nor complexity in this study (JPEG2000 is operating in its non-scalable, but optimal-performance mode).

  15. SHD digital cinema distribution over a long distance network of Internet2

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takahiro; Shirai, Daisuke; Fujii, Tatsuya; Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    2003-06-01

We have developed a prototype SHD (Super High Definition) digital cinema distribution system that can store, transmit, and display eight-million-pixel motion pictures with the image quality of a 35-mm film movie. The system contains a video server, a real-time decoder, and a D-ILA projector. Using a gigabit Ethernet link and TCP/IP, the server transmits JPEG2000-compressed motion picture data streams to the decoder at transmission speeds as high as 300 Mbps. The received data streams are decompressed by the decoder and then projected onto a screen via the projector. With this system, digital cinema contents can be distributed over a wide-area optical gigabit IP network. However, when digital cinema contents are delivered over long distances using a gigabit IP network and TCP, the round-trip time increases and network throughput either stops rising or diminishes. In a long-distance SHD digital cinema transmission experiment performed on the Internet2 network in October 2002, we adopted enlargement of the TCP window, multiple TCP connections, and a shaping function to control the quantity of data transmitted. As a result, we succeeded in transmitting SHD digital cinema content data at about 300 Mbps between Chicago and Los Angeles, a distance of more than 3000 km.
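
    Why the default TCP window had to be enlarged follows from the bandwidth-delay product: steady-state TCP throughput is bounded by window size divided by round-trip time. The RTT below is an assumed figure for a Chicago-Los Angeles path, not one reported in the paper:

```python
# Bandwidth-delay product check for a 300 Mbps stream over a long path.
rtt_s = 0.060                   # assumed ~60 ms Chicago-LA round-trip time
target_bps = 300e6              # 300 Mbps SHD cinema stream

window_bytes = target_bps * rtt_s / 8       # bytes that must be in flight
print(round(window_bytes / 1e6, 2), "MB window needed")

default_window = 64 * 1024                  # classic 64 KB TCP window
throughput_mbps = default_window * 8 / rtt_s / 1e6
print(round(throughput_mbps, 2), "Mbps with the default window")
```

A classic 64 KB window caps throughput at under 10 Mbps on such a path, which is why window enlargement and multiple parallel TCP connections were needed to sustain 300 Mbps.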

  16. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
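The pass count stated above can be checked directly: one cleanup pass for the MSB plane plus three passes for each of the remaining M - 1 planes gives 3M - 2 in total. A small sketch:

```python
def tier1_coding_passes(num_bitplanes: int) -> int:
    """Tier-1 coding passes for a code block with M bit planes:
    1 cleanup pass for the MSB plane + 3 passes per remaining plane = 3M - 2."""
    assert num_bitplanes >= 1
    return 1 + 3 * (num_bitplanes - 1)

# An 8-bit-plane code block requires 3*8 - 2 = 22 coding passes.
```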

  17. Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications

    DTIC Science & Technology

    2013-06-01

    Standards (http://www.iso.org) JIS Japanese Industrial Standard JPEG Joint Photographic Experts Group (digital image format; http://www.jpeg.org) LED...Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry. See Appendix A for full details. Reed-Solomon Error...eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thusly, digital cinema cameras are more suitable

  18. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
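The patent's pipeline (decimate, compress, transmit, decompress, expand, sharpen) can be sketched with stand-ins for the codec stage; the function names and the unsharp-masking details below are illustrative, not taken from the patent:

```python
def decimate(img):
    """Keep every other pixel in both dimensions (2x reduction)."""
    return [row[::2] for row in img[::2]]

def expand(img):
    """Nearest-neighbour 2x expansion (a stand-in for the interpolation step)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def sharpen(img, amount=0.5):
    """Unsharp masking: boost each interior pixel by its difference
    from the mean of its 4 neighbours, to re-emphasize edges."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            mean = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]) / 4.0
            out[y][x] = img[y][x] + amount * (img[y][x] - mean)
    return out

img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
small = decimate(img)            # a quarter of the data enters the codec
restored = sharpen(expand(small))
```

In the patented system, JPEG (or a wavelet coder) compresses the decimated image before transmission over the limited-bandwidth medium; that codec stage is elided here.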

  19. JPEG2000 encoding with perceptual distortion control.

    PubMed

    Liu, Zhen; Karam, Lina J; Watson, Andrew B

    2006-07-01

    In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.

  20. Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Liu, Ti C.; Mitra, Sunanda

    1996-06-01

    Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, an entropy and run-length encoder/decoder, and K-means clustering of invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high-compression-ratio region and that the reconstructed fingerprint images yield proper classification.

  1. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications to high-performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression that utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed sequentially. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use a block-parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks of 64×64 pixels each, we gain the best GPU performance, with a 26.3x speedup over the original CPU code.
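One of the CUDA techniques named above, the parallel prefix sum, is commonly used in GPU encoders to turn independently computed per-block code lengths into output bit offsets. A CPU sketch of the work-efficient (Blelloch) exclusive-scan pattern, with the steps a GPU would run in parallel written out as serial loops:

```python
def blelloch_scan(data):
    """Work-efficient exclusive prefix sum (Blelloch scan)."""
    n = len(data)
    assert n and n & (n - 1) == 0, "power-of-two length for simplicity"
    a = list(data)
    d = 1
    while d < n:                      # up-sweep (reduce) phase
        for i in range(0, n, 2 * d):  # each inner loop is parallel on a GPU
            a[i + 2 * d - 1] += a[i + d - 1]
        d *= 2
    a[n - 1] = 0                      # clear the root for an exclusive scan
    d = n // 2
    while d >= 1:                     # down-sweep phase
        for i in range(0, n, 2 * d):
            t = a[i + d - 1]
            a[i + d - 1] = a[i + 2 * d - 1]
            a[i + 2 * d - 1] += t
        d //= 2
    return a

# blelloch_scan([3, 1, 7, 0, 4, 1, 6, 3]) -> [0, 3, 4, 11, 11, 15, 16, 22]
```

Each output element is the sum of all inputs before it, which is exactly the byte/bit offset each block's compressed output should start at.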

  2. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
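The flattening idea can be illustrated with a deliberately simplified model: suppose (purely for illustration, not the paper's actual masking computation) each block's perceptual error grows linearly with its multiplier, scaled by a per-block sensitivity. A maximally flat error then makes each multiplier inversely proportional to that sensitivity:

```python
def flat_error_multipliers(block_sensitivity, target_error):
    """Toy model (assumed): perceptual error of block b = sensitivity_b * m_b.
    Choosing m_b = target / sensitivity_b makes the error flat across blocks."""
    return [target_error / s for s in block_sensitivity]

# Busy blocks (strong masking -> low sensitivity) tolerate larger multipliers,
# i.e. coarser quantization, which is where the bitrate savings come from:
ms = flat_error_multipliers([2.0, 1.0, 0.5], target_error=1.0)
# -> [0.5, 1.0, 2.0]
```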

  3. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide-format printing industry, this becomes an important issue: e.g., a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation to the compressed format. This paper presents an innovative application of halftoning by screening, applied directly to JPEG-compressed images. The compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it de-noises the image and enhances its contours.

  4. Unequal power allocation for JPEG transmission over MIMO systems.

    PubMed

    Sabir, Muhammad Farooq; Bovik, Alan Conrad; Heath, Robert W

    2010-02-01

    With the introduction of multiple transmit and receive antennas in next-generation wireless systems, real-time image and video communication are expected to become quite common, since very high data rates will become available along with improved data reliability. New joint transmission and coding schemes that exploit the advantages of multiple-antenna systems matched with source statistics are expected to be developed. Based on this idea, we present an unequal power allocation scheme for transmission of JPEG-compressed images over multiple-input multiple-output systems employing spatial multiplexing. The JPEG-compressed image is divided into different quality layers, and different layers are transmitted simultaneously from different transmit antennas using unequal transmit power, with a constraint on the total transmit power during any symbol period. Results show that our unequal power allocation scheme provides significant image quality improvement compared to various equal power allocation schemes, with peak signal-to-noise ratio gains as high as 14 dB at low signal-to-noise ratios.
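A minimal sketch of the allocation constraint described above: a fixed per-symbol power budget is split across quality layers in proportion to importance weights. The weights here are illustrative placeholders; the paper derives its allocation from source and channel statistics:

```python
def unequal_power(weights, total_power):
    """Split a fixed per-symbol power budget across quality layers in
    proportion to (assumed) importance weights."""
    s = sum(weights)
    return [total_power * w / s for w in weights]

# Three JPEG quality layers, with the most important layer weighted heaviest:
p = unequal_power([0.6, 0.3, 0.1], total_power=1.0)
assert abs(sum(p) - 1.0) < 1e-12   # the total-power constraint holds
```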

  5. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored in either (a) a pre-rendered format, corresponding to a photographic print, or (b) an un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the electronically recorded results of a diagnostic procedure depend on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end-user applications, such as simple report text viewing and display of a selected image, are not so demanding, and generic image formats such as JPEG are sometimes used. However, these lack some basic identification capabilities. In this paper we make specific proposals for minimal extensions to generic image metadata, of value in various domains, which enable safe use in two simple healthcare end-user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  6. Diagnostic accuracy of chest X-rays acquired using a digital camera for low-cost teleradiology.

    PubMed

    Szot, Agnieszka; Jacobson, Francine L; Munn, Samson; Jazayeri, Darius; Nardell, Edward; Harrison, David; Drosten, Ralph; Ohno-Machado, Lucila; Smeaton, Laura M; Fraser, Hamish S F

    2004-02-01

    Store-and-forward telemedicine, using e-mail to send clinical data and digital images, offers a low-cost alternative for physicians in developing countries to obtain second opinions from specialists. To explore the potential usefulness of this technique, 91 chest X-ray images were photographed using a digital camera and a view box. Four independent readers (three radiologists and one pulmonologist) read two types of digital images (JPEG and JPEG2000) and the original films, and indicated their confidence in the presence of eight features known to be radiological indicators of tuberculosis (TB). The results were compared to a "gold standard" established by two different radiologists and assessed using receiver operating characteristic (ROC) curve analysis. There was no statistical difference in overall performance between readings from the original films and either type of digital image. The size of the JPEG2000 images was approximately 120 KB, making this technique feasible for slow Internet connections. Our preliminary results show the potential usefulness of this technique, particularly for tuberculosis and lung disease, but further studies are required to refine its potential.

  7. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    NASA Astrophysics Data System (ADS)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system that connects imaging modalities, image archives, and image workstations to reduce film-handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 resolution, and 24-bit color, and because enough image quality must be retained for clinical review. We have developed a PACS able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and investigated a suitable compression method and compression rate for clinical image review. Results show that clinicians require frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns that may appear in only one frame. To satisfy this requirement, we chose motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable compression rate, we performed a subjective evaluation. No subjects could tell the difference between original uncompressed images and 1:10 lossy-compressed JPEG images. One subject could tell the difference between the originals and 1:20 lossy-compressed JPEG images, although the latter remained acceptable. Thus, ratios of 1:10 to 1:20 are acceptable for reducing data volume and cost while maintaining quality for clinical review.
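The storage burden behind the abstract's figures is easy to quantify; a quick sketch of the raw cine data rate and the effect of the compression ratios the evaluation found acceptable:

```python
def cine_data_rate_mb_s(width, height, bits_per_pixel, fps):
    """Uncompressed cine-loop data rate in megabytes per second."""
    return width * height * bits_per_pixel / 8 * fps / 1e6

raw = cine_data_rate_mb_s(640, 480, 24, 30)   # ~27.6 MB/s uncompressed
at_1_10 = raw / 10                            # ~2.8 MB/s with 1:10 motion JPEG
at_1_20 = raw / 20                            # ~1.4 MB/s with 1:20
```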

  8. A JPEG backward-compatible HDR image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread adoption of HDR, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework compared to state-of-the-art HDR image compression.

  9. Design of a motion JPEG (M/JPEG) adapter card

    NASA Astrophysics Data System (ADS)

    Lee, D. H.; Sudharsanan, Subramania I.

    1994-05-01

    In this paper we describe the design of a high-performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high-quality digital video. The card accepts digital pixels from either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube CL550 JPEG processor that complies with baseline JPEG. We give broad descriptions of the hardware that controls the video interface, the CL550, and the system interface, and point out some critical design points that enhance the overall performance of M/JPEG systems. The adapter card is controlled by interrupt-driven software running under DOS, which performs a variety of tasks including change of color space (RGB or YUV), change of quantization and Huffman tables, odd and even field control, and some diagnostic operations.

  10. History of the Universe Poster

    Science.gov Websites

    History of the Universe Poster You are free to use these images if you give credit to: Particle Data Group at Lawrence Berkeley National Lab. New Version (2014) History of the Universe Poster Download: JPEG version PDF version Old Version (2013) History of the Universe Poster Download: JPEG version

  11. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    NASA Astrophysics Data System (ADS)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly adopted in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme is an alternative DWT implementation with lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper a high-throughput, two-channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process incoming samples simultaneously with minimal memory requirements per channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. It applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirements make this architecture a proper choice for real-time applications such as Digital Cinema.
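The reversible 5/3 filter bank mentioned above has a particularly compact lifting form, which is why it maps so well to hardware. A one-level 1-D sketch following the standard predict/update steps, with edge extension simplified to mirroring:

```python
def lifting_53_forward(x):
    """One level of the reversible 5/3 DWT via lifting (1-D, even length >= 4).
    Predict: d[i] = x[2i+1] - floor((x[2i] + x[2i+2]) / 2)
    Update:  s[i] = x[2i]  + floor((d[i-1] + d[i] + 2) / 4)
    Returns (lowpass s, highpass d); integer-to-integer, hence reversible."""
    n = len(x)
    assert n % 2 == 0 and n >= 4
    d = [x[2 * i + 1] - (x[2 * i] + x[2 * i + 2 if 2 * i + 2 < n else n - 2]) // 2
         for i in range(n // 2)]
    s = [x[2 * i] + (d[i - 1 if i > 0 else 0] + d[i] + 2) // 4
         for i in range(n // 2)]
    return s, d

# A constant signal yields zero detail coefficients:
s, d = lifting_53_forward([10, 10, 10, 10, 10, 10])
# s == [10, 10, 10], d == [0, 0, 0]
```

Each lifting step only reads neighbouring samples, which is what allows the two-channel pipelined architecture to stream samples with minimal buffering.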

  12. A new JPEG-based steganographic algorithm for mobile devices

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.

    2006-05-01

    Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates improved retention of first-order statistics compared to existing JPEG-based steganographic algorithms, while maintaining a capacity comparable to F5 for certain cover images.
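As an illustration of the general embedding idea (not the paper's switching technique), message bits can be written into the least-significant bits of nonzero quantized DCT coefficients, skipping zeros so the run-length structure of the JPEG entropy coding is preserved:

```python
def embed_bits(coeffs, bits):
    """Illustrative LSB embedding in nonzero quantized AC coefficients.
    Zeros are skipped: altering them would change JPEG run-lengths."""
    out, it = list(coeffs), iter(bits)
    for i, c in enumerate(out):
        if c == 0:
            continue
        try:
            b = next(it)
        except StopIteration:
            break          # message exhausted
        sign = -1 if c < 0 else 1
        mag = abs(c)
        out[i] = sign * ((mag & ~1) | b)
    return out

stego = embed_bits([5, 0, -3, 2, 0, 7], [1, 0, 1, 1])
# -> [5, 0, -2, 3, 0, 7]
```

Note that embedding a 0 bit into a coefficient of magnitude 1 would zero it out (the "shrinkage" problem); real embedders such as F5 handle that case explicitly.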

  13. Steganographic embedding in containers-images

    NASA Astrophysics Data System (ADS)

    Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.

    2018-05-01

    Steganography is one of the approaches to ensuring the protection of information transmitted over a network, but the steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method of embedding data into the frequency domain of JPEG 2000 images. It is proposed to use the method of Benham-Memon-Yeo-Yeung, with the discrete wavelet transform used in place of the discrete cosine transform. Two requirements for images are formulated. Structural similarity is chosen to assess the quality of data embedding. Experiments confirm that satisfying the two requirements yields high embedding-quality scores.

  14. JPIC-Rad-Hard JPEG2000 Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov

    2010-08-01

    JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources in optical, panchromatic, and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces; the JPEG2K-E IP core from Alma implements the compression algorithm [2]; Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.

  15. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain much complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressed sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a paired-basis posttransform is applied to the DWT coefficients. The paired bases are the DCT basis and the Hadamard basis, which can be used at high and low bit rates, respectively. The best posttransform is selected by an l p-norm-based approach, and the posttransform is treated as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates, without the excessive implementation complexity of JPEG2000. PMID:25114977

  16. A new security solution to JPEG using hyper-chaotic system and modified zigzag scan coding

    NASA Astrophysics Data System (ADS)

    Ji, Xiao-yong; Bai, Sen; Guo, Yu; Guo, Hui

    2015-05-01

    Though JPEG is an excellent compression standard for images, it does not provide any security. A security solution to JPEG was therefore proposed in Zhang et al. (2014), but that scheme has flaws, and in this paper we propose a new scheme based on a discrete hyper-chaotic system and modified zigzag scan coding. By shuffling the identifiers of the zigzag-scan-encoded sequence with a hyper-chaotic sequence, and selectively encrypting certain coefficients in the zigzag-scan-encoded domain that have little relationship with the correlation of the plain image, we achieve high compression performance and robust security simultaneously. We also present and analyze the flaws in Zhang's scheme through theoretical analysis and experimental verification, and compare our scheme with Zhang's. Simulation results verify that our method performs better in both security and efficiency.

  17. Compression of electromyographic signals using image compression techniques.

    PubMed

    Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira

    2008-01-01

    Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.
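The PRD figures quoted above are the standard percent root-mean-square difference between the original and reconstructed signal; a small sketch of the metric:

```python
import math

def prd_percent(original, reconstructed):
    """Percent root-mean-square difference (PRD), the distortion
    measure commonly reported for biomedical signal compression."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

# Identical signals give 0% distortion:
assert prd_percent([1.0, -2.0, 3.0], [1.0, -2.0, 3.0]) == 0.0
```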

  18. High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian

    2017-09-01

    At its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding, designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, the authors describe the design principles of the codec, provide a high-level overview of the encoder and decoder chain, and present evaluation results on the test corpus selected by the JPEG committee.

  19. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry between upload and download: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs tailored to specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
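The core of the re-encoding step is mapping fine quantization bin indices onto a coarser grid, which is lossy precisely because several fine bins collapse into one coarse bin; recovering the fine index is the ambiguity the paper's priors resolve. A toy sketch with illustrative step sizes:

```python
def requantize(fine_bins, fine_step, coarse_step):
    """Map fine quantization bin indices onto a coarser grid for storage.
    The reverse mapping (coarse bin back to fine bin) is ambiguous, since
    several fine bins round to the same coarse bin."""
    return [round(b * fine_step / coarse_step) for b in fine_bins]

coarse = requantize([10, 11, 12, 13], fine_step=4, coarse_step=8)
# -> [5, 6, 6, 6]: fine bins 11, 12, and 13 all collapse to coarse bin 6
```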

  20. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    NASA Astrophysics Data System (ADS)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method for evaluating the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images as benign or malignant, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases as the compression ratio increases, with small fluctuations.

  1. Report about the Solar Eclipse on August 11, 1999

    NASA Astrophysics Data System (ADS)

    1999-08-01

    This webpage provides information about the total eclipse on Wednesday, August 11, 1999, as it was seen by ESO staff, mostly at or near the ESO Headquarters in Garching (Bavaria, Germany). The zone of totality was about 108 km wide and the ESO HQ was located only 8 km south of the line of maximum totality. The duration of the phase of totality was about 2 min 17 sec. The weather was quite troublesome in this geographical area. Heavy clouds moved across the sky during the entire event, but there were also some holes in between. Consequently, sites that were only a few kilometres from each other had very different viewing conditions. Some photos and spectra of the eclipsed Sun are displayed below, with short texts about the circumstances under which they were made. Please note that reproduction of pictures on this webpage is only permitted if the author is mentioned as source. Information made available before the eclipse is available here. Eclipse Impressions at the ESO HQ Photo by Eddy Pomaroli Preparing for the Eclipse Photo: Eddy Pomaroli [JPEG: 400 x 239 pix - 116k] [JPEG: 800 x 477 pix - 481k] [JPEG: 3000 x 1789 pix - 3.9M] Photo by Eddy Pomaroli During the 1st Partial Phase Photo: Eddy Pomaroli [JPEG: 400 x 275 pix - 135k] [JPEG: 800 x 549 pix - 434k] [JPEG: 2908 x 1997 pix - 5.9M] Photo by Hamid Mehrgan Heavy Clouds Above Digital Photo: Hamid Mehrgan [JPEG: 400 x 320 pix - 140k] [JPEG: 800 x 640 pix - 540k] [JPEG: 1280 x 1024 pix - 631k] Photo by Olaf Iwert Totality Approaching Digital Photo: Olaf Iwert [JPEG: 400 x 320 pix - 149k] [JPEG: 800 x 640 pix - 380k] [JPEG: 1280 x 1024 pix - 536k] Photo by Olaf Iwert Beginning of Totality Digital Photo: Olaf Iwert [JPEG: 400 x 236 pix - 86k] [JPEG: 800 x 471 pix - 184k] [JPEG: 1280 x 753 pix - 217k] Photo by Olaf Iwert A Happy Eclipse Watcher Digital Photo: Olaf Iwert [JPEG: 400 x 311 pix - 144k] [JPEG: 800 x 622 pix - 333k] [JPEG: 1280 x 995 pix - 644k] ESO HQ Eclipse Video Clip [MPEG-version] ESO HQ Eclipse 
Video Clip (2425 frames/01:37 min) [MPEG Video; 160x120 pix; 2.2M] [MPEG Video; 320x240 pix; 4.4Mb] [RealMedia; streaming; 33kps] [RealMedia; streaming; 200kps] This Video Clip was prepared from a "reportage" of the event at the ESO HQ that was transmitted in real-time to ESO-Chile via ESO's satellite link. It begins with some sequences of the first partial phase and the eclipse watchers. Clouds move over and the landscape darkens as the phase of totality approaches. The Sun is again visible at the very moment this phase ends. Some further sequences from the second partial phase follow. Produced by Herbert Zodet. Dire Forecasts The weather predictions in the days before the eclipse were not good for Munich and surroundings. A heavy front with rain and thick clouds that completely covered the sky moved across Bavaria the day before and the meteorologists predicted a 20% chance of seeing anything at all. On August 10, it seemed that the chances were best in France and in the western parts of Germany, and much less close to the Alps. This changed to the opposite during the night before the eclipse. Now the main concern in Munich was a weather front approaching from the west - would it reach this area before the eclipse? The better chances were then further east, nearer the Austrian border. Many people travelled back and forth along the German highways, many of which quickly became heavily congested. Preparations About 500 persons, mostly ESO staff with their families and friends, were present at the ESO HQ in the morning of August 11. Prior to the eclipse, they received information about the various aspects of solar eclipses and about the specific conditions of this one in the auditorium. Protective glasses were handed out and it was the idea that they would then follow the eclipse from outside. 
In view of the pessimistic weather forecasts, TV sets had been set up in two large rooms, but in the end most chose to watch the eclipse from the terrace in front of the cafeteria and from the area south of the building. Several telescopes were set up among the trees and on the adjoining field (just harvested). Clouds and Holes It was an unusual solar eclipse experience. Heavy clouds were passing by with sudden rain showers, but fortunately there were also some holes with blue sky in between. While much of the first partial phase was visible through these, some really heavy clouds moved in a few minutes before the total phase, when the light had begun to fade. They drifted slowly - too slowly! - towards the east and the corona was never seen from the ESO HQ site. From here, the view towards the eclipsed Sun only cleared at the very instant of the second "diamond ring" phenomenon. This was beautiful, however, and evidently took most of the photographers by surprise, so very few, if any, photos were made of this memorable moment. Temperature Curve by Benoit Pirenne Temperature Curve on August 11 [JPEG: 646 x 395 pix - 35k] Measured by Benoit Pirenne - see also his meteorological webpage Nevertheless, the entire experience was fantastic - there were all the expected effects, the darkness, the cool air, the wind and the silence. It was very impressive indeed! And it was certainly a unique day in ESO history! Carolyn Collins Petersen from "Sky & Telescope" participated in the conference at ESO in the days before and watched the eclipse from the "Bürgerplatz" in Garching, about 1.5 km south of the ESO HQ. She managed to see part of the totality phase and filed some dramatic reports at the S&T Eclipse Expedition website. They describe very well the feelings of those in this area! Eclipse Photos Several members of the ESO staff went elsewhere and had more luck with the weather, especially at the moment of totality. Below are some of their impressive pictures. 
Eclipse Photo by Philippe Duhoux First "Diamond Ring" [JPEG: 400 x 292 pix - 34k] [JPEG: 800 x 583 pix - 144k] [JPEG: 2531 x 1846 pix - 1.3M] Eclipse Photo by Philippe Duhoux Totality [JPEG: 400 x 306 pix - 49k] [JPEG: 800 x 612 pix - 262k] [JPEG: 3039 x 1846 pix - 3.6M] Eclipse Photo by Philippe Duhoux Second "Diamond Ring" [JPEG: 400 x 301 pix - 34k] [JPEG: 800 x 601 pix - 163k] [JPEG: 2905 x 2181 pix - 2.0M] The Corona (Philippe Duhoux) "For the observation of the eclipse, I chose a field on a hill offering a wide view towards the western horizon and located about 10 kilometers north west of Garching." "While the partial phase was mostly cloudy, the sky went clear 3 minutes before the totality and remained so for about 15 minutes. Enough to enjoy the event!" "The images were taken on Agfa CT100 colour slide film with an Olympus OM-20 at the focus of a Maksutov telescope (f = 1000 mm, f/D = 10). The exposure times were automatically set by the camera. During the partial phase, I used an off-axis mask of 40 mm diameter with a mylar filter ND = 3.6, which I removed for the diamond rings and the corona." Note in particular the strong, detached protuberances to the right of the rim, particularly noticeable in the last photo. Eclipse Photo by Cyril Cavadore Totality [JPEG: 400 x 360 pix - 45k] [JPEG: 800 x 719 pix - 144k] [JPEG: 908 x 816 pix - 207k] The Corona (Cyril Cavadore) "We (C.Cavadore from ESO and L. Bernasconi and B. Gaillard from Obs. de la Cote d'Azur) took this photo in France at Vouzier (Champagne-Ardennes), between Reims and Nancy. A large blue opening developed in the sky at 10 o'clock and we decided to set up the telescope and the camera at that time. During the partial phase, a lot of clouds passed over, making it hard to focus properly. Nevertheless, 5 min before totality, a deep blue sky opened above us, allowing us to watch it and to take this picture. 5-10 Minutes after the totality, the sky was almost overcast up to the 4th contact". 
"The image was taken with a 2x2K (14 µm pixels) Thomson "homemade" CCD camera mounted on a CN212 Takahashi (200 mm diameter telescope) with a 1/10.000 neutral filter. The acquisition software set the exposure time (2 sec) and took images in a completely automated way, allowing us to observe the eclipse by naked eye or with binoculars. To get as many images as possible during totality, we used 2x2 binning to reduce the readout time to 19 sec. Afterward, one of the best images was flat-fielded and processed with a special algorithm that fitted a model of the continuous component of the corona, which was then subtracted from the original image. The remaining details were enhanced by unsharp masking and added to the original image. Finally, Gaussian histogram equalization was applied". Eclipse Photo by Eddy Pomaroli Second "Diamond Ring" [JPEG: 400 x 438 pix - 129k] [JPEG: 731 x 800 pix - 277k] [JPEG: 1940 x 2123 pix - 2.3M] Diamond Ring at ESO HQ (Eddy Pomaroli) "Despite the clouds, we saw the second "diamond ring" from the ESO HQ. In a sense, we were quite lucky, since the clouds were very heavy during the total phase and we might easily have missed it all!". "I used an old Minolta SRT-101 camera and a telephoto lens (450 mm; f/8). The exposure was 1/125 sec on Kodak Elite 100 (pushed to 200 ASA). I had the feeling that the Sun would become visible and had the camera pointed, by good luck in the correct direction, as soon as the cloud moved away". Eclipse Photo by Roland Reiss First Partial Phase [JPEG: 400 x 330 pix - 94k] [JPEG: 800 x 660 pix - 492k] [JPEG: 3000 x 2475 pix - 4.5M] End of First Partial Phase (Roland Reiss) "I observed the eclipse from my home in Garching. The clouds kept moving and this was the last photo I was able to obtain during the first partial phase, before they blocked everything". "The photo is interesting, because it shows two more images of the eclipsed Sun, below the overexposed central part. 
In one of them, the remaining, narrow crescent is particularly well visible. They are caused by reflections in the camera. I used a Minolta camera and a Fuji colour slide film". Eclipse Spectra Some ESO people went a step further and obtained spectra of the Sun at the time of the eclipse. Eclipse Spectrum by Roland Reiss Coronal Spectrum [JPEG: 400 x 273 pix - 94k] [JPEG: 800 x 546 pix - 492k] [JPEG: 3000 x 2046 pix - 4.5M] Coronal Spectrum (CAOS Group) The Club of Amateurs in Optical Spectroscopy (with Carlos Guirao Sanchez, Gerardo Avila and Jesus Rodriguez) obtained a spectrum of the solar corona from a site in Garching, about 2 km south of the ESO HQ. "This is a plot of the spectrum and the corresponding CCD image that we took during the total eclipse. The main coronal lines are well visible and have been identified in the figure. Note in particular one at 6374 Angstrom that was first ascribed to the mysterious substance "Coronium". We now know that it is emitted by iron atoms that have lost nine electrons (Fe X)". The equipment was: * Telescope: Schmidt Cassegrain F/6.3; Diameter: 250 mm * FIASCO Spectrograph: Fibre: 135 micron core diameter F = 100 mm collimator, f = 80 mm camera; Grating: 1300 gr/mm blazed at 500 nm; SBIG ST8E CCD camera; Exposure time was 20 sec. Eclipse Spectrum by Bob Fosbury Chromospheric Spectrum [JPEG: 120 x 549 pix - 20k] Chromospheric and Coronal Spectra (Bob Fosbury) "The 11 August 1999 total solar eclipse was seen from a small farm complex called Wolfersberg in open fields some 20km ESE of the centre of Munich. It was chosen to be within the 2min band of totality but likely to be relatively unpopulated". "There were intermittent views of the Sun between first and second contact with quite a heavy rainshower which stopped 9min before totality. A large clear patch of sky revealed a perfect view of the Sun just 2min before second contact and it remained clear for at least half an hour after third contact". 
"The principal project was to photograph the spectrum of the chromosphere during totality using a transmission grating in front of a moderate telephoto lens. The desire to do this was stimulated by a view of the 1976 eclipse in Australia when I held the same grating up to the eclipsed Sun and was thrilled by the view of the emission line spectrum. The trick now was to get the exposure right!". "A sequence of 13 H-alpha images was combined into a looping movie. The exposure times were different, but some attempt has been made to equalise the intensities. The last two frames show the low chromosphere and then the photosphere emerging at 3rd contact. The [FeX] coronal line can be seen on the left in the middle of the sequence. I used a Hasselblad camera and Agfa slide film (RSX II 100)".

  2. A software platform for the analysis of dermatology images

    NASA Astrophysics Data System (ADS)

    Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon

    2017-11-01

    The purpose of this paper is to present a software platform, developed in the Python programming environment, that can be used for the processing and analysis of dermatology images. The platform provides the capability to read a file containing a dermatology image and supports image formats such as Windows bitmap, JPEG, JPEG2000, Portable Network Graphics, and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image, followed by thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images from other anatomical parts, such as the breast or lung, after proper re-training of the classification algorithms.
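The automated ROI selection described above, smoothing followed by thresholding, can be sketched in plain Python. The 3x3 mean filter and the fixed threshold are illustrative choices, not the platform's actual parameters:

```python
def smooth3x3(img):
    """3x3 mean filter over a 2-D list of intensities
    (border pixels are copied unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

def threshold_roi(img, thresh):
    """Binary ROI mask: 1 where the smoothed intensity exceeds the
    threshold, 0 elsewhere."""
    sm = smooth3x3(img)
    return [[1 if v > thresh else 0 for v in row] for row in sm]
```

Smoothing first suppresses isolated bright pixels, so the thresholded mask follows the lesion region rather than noise; a production version would also pick the threshold automatically (e.g. from the histogram).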

  3. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites and infrastructure inspection by unmanned airborne vehicles, will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8 bit VPUs can provide advantages over using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both the MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with the JPEG2000, JPEG-XT and H.265/HEVC codecs, which support direct compression of infrared images in 16 bit depth format. Preliminary results show that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
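The MSB/LSB mapping at the core of the method is a plain byte split. A sketch over sample lists (a real system would operate on image buffers before handing each plane to its 8-bit codec):

```python
def split_msb_lsb(pixels16):
    """Split 16-bit samples into two 8-bit planes: most significant
    bytes (MSB image) and least significant bytes (LSB image)."""
    msb = [v >> 8 for v in pixels16]
    lsb = [v & 0xFF for v in pixels16]
    return msb, lsb

def merge_msb_lsb(msb, lsb):
    """Reassemble the original 16-bit samples after both planes
    have been decoded."""
    return [(m << 8) | l for m, l in zip(msb, lsb)]
```

The split is lossless by construction; any loss comes from the 8-bit codecs, which is why the paper studies how to balance the compression parameters between the MSB plane (coarse structure) and the noisier LSB plane.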

  4. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIFF and NITF were developed in the heyday of the desktop and assumed fast, low latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be accessed efficiently using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  5. Edge-Based Image Compression with Homogeneous Diffusion

    NASA Astrophysics Data System (ADS)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
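The decoding step described above, inpainting with the steady state of homogeneous diffusion, amounts to solving the Laplace equation with the encoded edge values held fixed. A small Jacobi-iteration sketch on a toy grid (the paper's Marr-Hildreth/JBIG/PAQ encoding stages are omitted):

```python
def inpaint_laplace(grid, known, iters=500):
    """Fill unknown pixels by iterating toward the steady state of
    homogeneous diffusion: pixels listed in `known` (the decoded edge
    data) stay fixed, all others are repeatedly replaced by the mean
    of their 4-neighbours, which converges to the Laplace solution."""
    h, w = len(grid), len(grid[0])
    u = [row[:] for row in grid]
    for _ in range(iters):
        nxt = [row[:] for row in u]
        for y in range(h):
            for x in range(w):
                if (y, x) in known:
                    continue
                nb = []
                if y > 0:
                    nb.append(u[y - 1][x])
                if y < h - 1:
                    nb.append(u[y + 1][x])
                if x > 0:
                    nb.append(u[y][x - 1])
                if x < w - 1:
                    nb.append(u[y][x + 1])
                nxt[y][x] = sum(nb) / len(nb)
        u = nxt
    return u
```

On a 1x5 strip with the two endpoints known (0 and 100), the steady state is the linear ramp 0, 25, 50, 75, 100, which is exactly the smooth fill-in between stored edge values that the method relies on for cartoon-like images.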

  6. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still image coding techniques such as JPEG have traditionally been applied to intra-plane images, and coding fidelity is the usual measure of the performance of intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. The scheme achieves a high degree of compactness in data representation and compression.

  7. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available on the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state of the art in HDR image compression.

  8. Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.

    PubMed

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya

    2013-03-01

    We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
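The final mixing step of the HSF, a per-pixel convex combination of the candidate filter outputs, can be sketched as below. The weights are supplied directly here, whereas the paper predicts them per pixel from a locally computed feature vector with EM-trained model parameters:

```python
def hsf_combine(outputs, weights):
    """Per-pixel convex combination of candidate filter outputs.
    outputs[i] lists each filter's estimate for pixel i, weights[i] the
    corresponding mixing proportions (summing to 1); the weighted sum
    is the Bayesian MMSE estimate under the mixture model."""
    return [sum(w * o for w, o in zip(ws, os))
            for ws, os in zip(weights, outputs)]
```

With two filters, a pixel whose feature vector favours the second filter 3:1 simply receives 25% of the first output and 75% of the second, which is how differently-suited filters blend across regions of the image.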

  9. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
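The matching rule, comparing how well probe and gallery data compress jointly versus separately, can be illustrated with a general-purpose compressor. In this sketch zlib stands in for the JPEG codec, and the composite score is a plausible normalized variant, not the paper's exact CCR formula:

```python
import zlib

def cratio(data):
    """Compressed size over original size: lower means more redundancy."""
    return len(zlib.compress(data)) / len(data)

def cpb_match(probe, gallery):
    """Return the index of the gallery entry most similar to the probe.
    Shared structure makes the concatenated (mixed) data compress better
    than the parts compressed separately, so the gap between the separate
    ratios and the mixed ratio serves as the similarity score."""
    scores = []
    for g in gallery:
        mixed = probe + g
        scores.append(cratio(probe) + cratio(g) - cratio(mixed))
    return max(range(len(gallery)), key=scores.__getitem__)
```

As in the paper, the per-match cost is dominated by compressing the mixed data once per gallery entry, which is what makes the scheme fast with an off-the-shelf codec.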

  10. JPEG 2000-based compression of fringe patterns for digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Blinder, David; Bruylants, Tim; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter

    2014-12-01

    With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurement, vibration analysis, data encoding, and certification. Designing an efficient data representation technology is therefore of particular importance. Off-axis holograms have very different signal properties from regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f^2 power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjøntegaard delta peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range, and bit-rate reductions of up to 1.6 bpp for lossless compression.

  11. Mutual information-based analysis of JPEG2000 contexts.

    PubMed

    Liu, Zhen; Karam, Lina J

    2005-04-01

    Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
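The key observation above, that merging contexts preserves mutual information only when their conditional distributions match, is easy to check numerically. A sketch of computing I(C;X) from a joint context/symbol probability table (the probabilities are illustrative, not JPEG2000's actual context statistics):

```python
import math

def mutual_information(joint):
    """I(C;X) from a joint table joint[c][x] of context/symbol
    co-occurrence probabilities (rows: contexts, columns: symbols)."""
    pc = [sum(row) for row in joint]                 # marginal over contexts
    px = [sum(col) for col in zip(*joint)]           # marginal over symbols
    mi = 0.0
    for c, row in enumerate(joint):
        for x, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (pc[c] * px[x]))
    return mi
```

Two contexts with identical conditional distributions, e.g. rows [0.4, 0.1] and [0.4, 0.1], carry zero mutual information about the symbol, so merging them into one context [0.8, 0.2] loses nothing; contexts with distinct conditionals would lose information when merged, which is what the modified Lloyd search over classification schemes trades off.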

  12. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  13. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  14. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  15. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  16. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  17. Fragmentation Point Detection of JPEG Images at DHT Using Validator

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Deris, Mustafa Mat

    File carving is an important, practical technique for data recovery in digital forensics investigations and is particularly useful when filesystem metadata is unavailable or damaged. Research on the reassembly of JPEG files with RST markers fragmented within the scan area has been done before; however, fragmentation within the Define Huffman Table (DHT) segment is yet to be resolved. This paper analyzes fragmentation within the DHT area and lists all the fragmentation possibilities. Two main contributions are made. First, three fragmentation points within the DHT area are listed. Second, several novel validators are proposed to detect these fragmentations. The results obtained from tests on manually fragmented JPEG files show that all three fragmentation points within the DHT are successfully detected using the validators.
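A validator of this kind can exploit the rigid internal structure of a DHT segment: after the 2-byte length, each table consists of a class/ID byte, sixteen code-length counts, and exactly that many symbol bytes, all of which must fit the declared length. A sketch of generic consistency checks (not the paper's specific validators):

```python
def validate_dht(segment):
    """Check internal consistency of a JPEG DHT segment (the payload
    following the FFC4 marker, starting with the 2-byte length field).
    A mismatch between the declared length and the table structure is
    a strong hint of a fragmentation point in a carved file."""
    if len(segment) < 2:
        return False
    length = (segment[0] << 8) | segment[1]
    if length != len(segment):            # declared vs actual length
        return False
    pos = 2
    while pos < length:
        if length - pos < 17:             # room for class/ID + 16 counts
            return False
        tc_th = segment[pos]
        if (tc_th >> 4) > 1 or (tc_th & 0x0F) > 3:  # class 0/1, table id 0-3
            return False
        nsymbols = sum(segment[pos + 1:pos + 17])
        pos += 17 + nsymbols              # skip counts and symbol bytes
    return pos == length                  # tables must exactly fill the segment
```

A fragment spliced into the middle of the counts or symbol list almost always breaks one of these invariants, which is how a validator can flag the break point without decoding the image.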

  18. A Java viewer to publish Digital Imaging and Communications in Medicine (DICOM) radiologic images on the World Wide Web.

    PubMed

    Setti, E; Musumeci, R

    2001-06-01

    The World Wide Web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Experts Group (JPEG) and Graphics Interchange Format (GIF). Currently, neither browser can display radiologic images in the native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with several web browsers, even older versions. The software is free and available from the author.

  19. Integration and acceleration of virtual microscopy as the key to successful implementation into the routine diagnostic process.

    PubMed

    Wienert, Stephan; Beil, Michael; Saeger, Kai; Hufnagl, Peter; Schrader, Thomas

    2009-01-09

    Virtual microscopy is widely accepted in pathology for educational purposes and teleconsultation but remains far from routine use in surgical pathology because of its technical requirements and some limitations. A key technical problem is the limited bandwidth of a typical network and the resulting delays in transmission and on-screen presentation. In this study, the process of secondary diagnosis was evaluated using the "T.Konsult Pathologie" service of the Professional Association of German Pathologists within the German breast cancer screening program. The characteristics of access to the WSI (Whole Slide Images) were analyzed to explore the possibilities of prefetching and caching for reducing presentation and transfer times, with the goal of increasing user acceptance. The log files of the web server were analyzed to reconstruct the movements of the pathologist on the WSI and to create the observation path. Using a specialized tool, the observation paths were extracted automatically from the log files. The attributes linearity, 3-point-linearity, changes per request, and number of consecutive requests were calculated to design, develop and evaluate different caching and prefetching strategies. The analysis of the observation paths showed that complete accordance of two image requests is a very rare event, but partial covering of two requested image areas occurs more frequently. In total, 257 diagnostic paths from 131 WSI were extracted and analysed. On average, a diagnostic path consists of 16 image requests and takes 189 seconds between the first and last image request. The mean linearity was 0.41 and the mean 3-point-linearity 0.85. Three different caching algorithms were compared with respect to hit rate and additional image requests on the WSI server. Tests demonstrated that 95% of the diagnostic paths could be loaded without any deletion of entries in the cache (cache size 12.2 megapixels); if the image parts are stored after JPEG compression, this corresponds to less than 2 MB. WSI telepathology offers the possibility of breaking the limitations of conventional static telepathology: the complete histological slide may be investigated instead of sets of images of lesions sampled by the presenting pathologist. The benefit is demonstrated by the high diagnostic security of 95% accordance between first and second diagnoses.
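
    The caching strategies compared above are not specified in detail in the abstract; as an illustration of the idea, a pixel-budgeted LRU tile cache (a sketch with hypothetical keys, mirroring the ~12.2-megapixel budget discussed) could be organized as follows:

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache for WSI tile requests, keyed by (level, x, y).

    The capacity is expressed in pixels; each cached tile charges
    width * height pixels against the budget.
    """
    def __init__(self, capacity_px: int):
        self.capacity_px = capacity_px
        self.used_px = 0
        self._store = OrderedDict()       # key -> (tile, pixel cost)

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)      # mark as most recently used
        return self._store[key][0]

    def put(self, key, tile, width, height):
        cost = width * height
        if key in self._store:
            self.used_px -= self._store.pop(key)[1]
        while self._store and self.used_px + cost > self.capacity_px:
            _, (_, old_cost) = self._store.popitem(last=False)  # evict LRU
            self.used_px -= old_cost
        self._store[key] = (tile, cost)
        self.used_px += cost
```

    A prefetching strategy would call `put` speculatively for tiles adjacent to the current viewport, trading extra server requests for lower presentation latency.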

  20. Integration and acceleration of virtual microscopy as the key to successful implementation into the routine diagnostic process

    PubMed Central

    Wienert, Stephan; Beil, Michael; Saeger, Kai; Hufnagl, Peter; Schrader, Thomas

    2009-01-01

    Background Virtual microscopy is widely accepted in pathology for educational purposes and teleconsultation but remains far from routine use in surgical pathology because of its technical requirements and some limitations. A key technical problem is the limited bandwidth of a typical network and the resulting delays in transmission and on-screen presentation. Methods In this study, the process of secondary diagnosis was evaluated using the "T.Konsult Pathologie" service of the Professional Association of German Pathologists within the German breast cancer screening program. The characteristics of access to the WSI (Whole Slide Images) were analyzed to explore the possibilities of prefetching and caching for reducing presentation and transfer times, with the goal of increasing user acceptance. The log files of the web server were analyzed to reconstruct the movements of the pathologist on the WSI and to create the observation path. Using a specialized tool, the observation paths were extracted automatically from the log files. The attributes linearity, 3-point-linearity, changes per request, and number of consecutive requests were calculated to design, develop and evaluate different caching and prefetching strategies. Results The analysis of the observation paths showed that complete accordance of two image requests is a very rare event, but partial covering of two requested image areas occurs more frequently. In total, 257 diagnostic paths from 131 WSI were extracted and analysed. On average, a diagnostic path consists of 16 image requests and takes 189 seconds between the first and last image request. The mean linearity was 0.41 and the mean 3-point-linearity 0.85. Three different caching algorithms were compared with respect to hit rate and additional image requests on the WSI server. Tests demonstrated that 95% of the diagnostic paths could be loaded without any deletion of entries in the cache (cache size 12.2 megapixels); if the image parts are stored after JPEG compression, this corresponds to less than 2 MB. Discussion WSI telepathology offers the possibility of breaking the limitations of conventional static telepathology: the complete histological slide may be investigated instead of sets of images of lesions sampled by the presenting pathologist. The benefit is demonstrated by the high diagnostic security of 95% accordance between first and second diagnoses. PMID:19134181

  1. New Mexico: Los Alamos

    Atmospheric Science Data Center

    2014-05-15

    article title:  Los Alamos, New Mexico     View Larger JPEG image ... kb) Multi-angle views of the Fire in Los Alamos, New Mexico, May 9, 2000. These true-color images covering north-central New Mexico ...

  2. An Implementation of Privacy Protection for a Surveillance Camera Using ROI Coding of JPEG2000 with Face Detection

    NASA Astrophysics Data System (ADS)

    Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi

    When a surveillance camera is used, there are cases in which privacy protection must be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method combines ROI coding of JPEG2000 with a face detection method based on template matching. Experimental results show that the face region can be detected and hidden correctly.
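
    The two stages can be sketched in miniature: a brute-force normalized cross-correlation template match to find the face region, then a degradation step over the matched region. This is a toy illustration only; the paper's actual detector and JPEG2000 ROI encoder are not reproduced here.

```python
import numpy as np

def find_face(image, template):
    """Locate the best match of `template` in `image` by brute-force
    normalized cross-correlation (a sketch, not an optimized detector)."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

def mask_region(image, top_left, shape):
    """Degrade the detected region; here it is simply blanked, whereas the
    paper degrades it via JPEG2000 ROI coding instead."""
    y, x = top_left
    h, w = shape
    out = image.copy()
    out[y:y + h, x:x + w] = 0
    return out
```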

  3. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, X; Liu, L; Xing, L

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component: it provides visualization and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform runs on a 32-core CPU server that virtually hosts the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibits potential for future cloud-based radiotherapy.

  4. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
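
    The paper's exact joint-entropy formulation is not given in the abstract; a common definition, computed over the co-occurrence histogram of the compressed and uncompressed frames, can be sketched as follows (the bin count is an assumption):

```python
import numpy as np

def joint_entropy(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Joint entropy (bits) of the co-occurrence histogram of two equally
    sized 8-bit grayscale images; stronger compression artifacts spread
    mass off the diagonal and raise the value."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) terms are dropped
    return float(-(p * np.log2(p)).sum())
```

    Thresholding such a score against expert-accepted examples is one plausible way to produce the automatic labels used to train the network.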

  5. Roundness variation in JPEG images affects the automated process of nuclear immunohistochemical quantification: correction with a linear regression model.

    PubMed

    López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón

    2009-10-01

    The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
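
    The correction described above amounts to fitting, per compression level, a linear model that maps the roundness measured on the JPEG image back to its TIFF estimate. A least-squares sketch (variable names hypothetical, not the authors' macros):

```python
import numpy as np

def fit_correction(roundness_tiff, roundness_jpeg):
    """Fit roundness_tiff ~ a * roundness_jpeg + b by ordinary least
    squares; one (a, b) pair is fitted per JPEG compression level."""
    A = np.vstack([roundness_jpeg, np.ones(len(roundness_jpeg))]).T
    a, b = np.linalg.lstsq(A, np.asarray(roundness_tiff, float), rcond=None)[0]
    return a, b

def correct(roundness_jpeg, a, b):
    """Apply the correction factors to roundness measured on a JPEG image."""
    return a * np.asarray(roundness_jpeg, float) + b
```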

  6. Image acquisition system using on sensor compressed sampling technique

    NASA Astrophysics Data System (ADS)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
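
    The sensor-level measurement scheme of the paper is not reproduced here; as a generic illustration of the CS principle it relies on, the sketch below takes random-projection measurements of a sparse signal and recovers it with orthogonal matching pursuit (dimensions and sparsity are arbitrary choices):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from
    compressed measurements y = Phi @ x."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

    In an on-sensor design the rows of the sensing matrix would be realized in pixel hardware, which is where the transistor-count and data-rate savings come from.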

  7. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
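
    An LS-based predictor fits weights over causal neighbors (pixels already decoded) and uses them to predict the next pixel. A minimal single-pixel sketch is below; the neighbor set (W, N, NW, NE) and training window are illustrative assumptions, not the paper's exact region-adaptive configuration:

```python
import numpy as np

def ls_predict(img, y, x, context=8):
    """Predict pixel (y, x) from its causal neighbors W, N, NW, NE, with
    weights fitted by least squares over previously coded pixels in a
    small training window above and to the left."""
    A, b = [], []
    for j in range(max(1, y - context), y + 1):
        for i in range(1, img.shape[1] - 1):
            if j == y and i >= x:          # only causal (already coded) pixels
                break
            A.append([img[j, i - 1], img[j - 1, i],
                      img[j - 1, i - 1], img[j - 1, i + 1]])
            b.append(img[j, i])
    w, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    neigh = [img[y, x - 1], img[y - 1, x], img[y - 1, x - 1], img[y - 1, x + 1]]
    return float(np.dot(w, neigh))
```

    The encoder then codes only the prediction residual, which is small wherever the local structure is well captured by the fitted weights.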

  8. Two VLT 8.2-m Unit Telescopes in Action

    NASA Astrophysics Data System (ADS)

    1999-04-01

    Visitors at ANTU - Astronomical Images from KUEYEN The VLT Control Room at the Paranal Observatory is becoming a busy place indeed. From here, two specialist teams of ESO astronomers and engineers now operate two VLT 8.2-m Unit Telescopes in parallel, ANTU and KUEYEN (formerly UT1 and UT2; for more information about the naming and the pronunciation, see ESO Press Release 06/99). Regular science observations have just started with the first of these giant telescopes, while impressive astronomical images are being obtained with the second. The work is hard, but the mood in the control room is good. Insiders claim that there have even been occasions on which the groups have had a friendly "competition" about which telescope makes the "best" images! The ANTU team has worked with the FORS multi-mode instrument; their colleagues at KUEYEN use the VLT Test Camera for the ongoing tests of this new telescope. While the first is a highly developed astronomical instrument with a large-field CCD imager (6.8 x 6.8 arcmin² in the normal mode; 3.4 x 3.4 arcmin² in the high-resolution mode), the other is a less complex CCD camera with a smaller field (1.5 x 1.5 arcmin²), suited to verifying the optical performance of the telescope. As these images demonstrate, the performance of the second VLT Unit Telescope is steadily improving and it may not be too long before its optical quality approaches that of the first. First KUEYEN photos of stars and galaxies We present here some of the first astronomical images, taken with the second telescope, KUEYEN, in late March and early April 1999. They reflect the current status of the optical, electronic and mechanical systems, still in the process of being tuned. As expected, the experience gained from ANTU last year has turned out to be invaluable and has allowed good progress during this extremely delicate process.
    ESO PR Photo 19a/99 ESO PR Photo 19a/99 [Preview - JPEG: 400 x 433 pix - 160k] [Normal - JPEG: 800 x 866 pix - 457k] [High-Res - JPEG: 1985 x 2148 pix - 2.0M] ESO PR Photo 19b/99 ESO PR Photo 19b/99 [Preview - JPEG: 400 x 478 pix - 165k] [Normal - JPEG: 800 x 956 pix - 594k] [High-Res - JPEG: 3000 x 3583 pix - 7.1M] Caption to PR Photo 19a/99 : This photo was obtained with VLT KUEYEN on April 4, 1999. It is reproduced from an excellent 60-second R(ed)-band exposure of the innermost region of a globular cluster, Messier 68 (NGC 4590) , in the southern constellation Hydra (The Water-Snake). The distance to this 8-mag cluster is about 35,000 light-years, and the diameter is about 140 light-years. The excellent image quality is 0.38 arcsec, demonstrating a good optical and mechanical state of the telescope, already at this early stage of the commissioning phase. The field measures about 90 x 90 arcsec². The original scale is 0.0455 arcsec/pixel and there are 2048x2048 pixels in one frame. North is up and East is left. Caption to PR Photo 19b/99 : This photo shows the central region of spiral galaxy ESO 269-57 , located in the southern constellation Centaurus at a distance of about 150 million light-years. Many galaxies are seen in this direction at about the same distance, forming a loose cluster; there are also some fainter, more distant ones in the background. The designation refers to the ESO/Uppsala Survey of the Southern Sky in the 1970's during which over 15,000 southern galaxies were catalogued. ESO 269-57 is a tightly bound object of type Sar, the "r" referring to the "ring" that surrounds the bright centre, which is overexposed here. The photo is a composite, based on three exposures (Blue - 600 sec; Yellow-Green - 300 sec; Red - 300 sec) obtained with KUEYEN on March 28, 1999. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec². North is up and East is left.
    ESO PR Photo 19c/99 ESO PR Photo 19c/99 [Preview - JPEG: 400 x 478 pix - 132k] [Normal - JPEG: 800 x 956 pix - 446k] [High-Res - JPEG: 3000 x 3583 pix - 4.6M] ESO PR Photo 19d/99 ESO PR Photo 19d/99 [Preview - JPEG: 400 x 454 pix - 86k] [Normal - JPEG: 800 x 907 pix - 301k] [High-Res - JPEG: 978 x 1109 pix - 282k] Caption to PR Photo 19c/99 : Somewhat further out in space, and right on the border between the southern constellations Hydra and Centaurus, lies this knotty spiral galaxy, IC 4248; the distance is about 210 million light-years. It was imaged with KUEYEN on March 28, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.75 arcsec and the field is 90 x 90 arcsec². North is up and East is left. Caption to PR Photo 19d/99 : This is a close-up view of the double galaxy NGC 5090 (right) and NGC 5091 (left), in the southern constellation Centaurus. The first is a typical S0 galaxy with a bright diffuse centre, surrounded by a fainter envelope of stars (not resolved in this picture). However, some of the starlike objects seen in this region may be globular clusters (or dwarf galaxies) in orbit around NGC 5090. The other galaxy is of type Sa (the spiral structure is more developed) and is seen at a steep angle. The three-colour composite is based on frames obtained with KUEYEN on March 29, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec². North is up and East is left. (Note inserted on April 26: The original caption text identified the second galaxy as NGC 5090B - this error has now been corrected.)
    ESO PR Photo 19e/99 ESO PR Photo 19e/99 [Preview - JPEG: 400 x 441 pix - 282k] [Normal - JPEG: 800 x 882 pix - 966k] [High-Res - JPEG: 3000 x 3307 pix - 6.4M] Caption to PR Photo 19e/99 : Wide-angle photo of the second 8.2-m VLT Unit Telescope, KUEYEN, obtained on March 10, 1999, with the main mirror and its cell in place at the bottom of the telescope structure. The Test Camera with which the astronomical images above were made is positioned at the Cassegrain focus, inside this mirror cell. The Paranal Inauguration on March 5, 1999, took place under this telescope, which was tilted towards the horizon to accommodate nearly 300 persons on the observing floor. Astronomical observations with ANTU have started On April 1, 1999, the first 8.2-m VLT Unit Telescope, ANTU, was "handed over" to the astronomers. Last year, about 270 observing proposals competed for the first, precious observing time at Europe's largest optical telescope, and more than 100 of these were accommodated within the six-month period until the end of September 1999. The complete observing schedule is available on the web. These observations will be carried out in two different modes. During the Visitor Mode, the astronomers will be present at the telescope, while in the Service Mode, ESO observers perform the observations. The latter procedure allows a greater degree of flexibility and the possibility to assign periods of particularly good observing conditions to programmes whose success is critically dependent on this. The first ten nights at ANTU were allocated to service mode observations. After some initial technical problems with the instruments, these have now started. Already in the first night, programmes at ISAAC requiring 0.4 arcsec conditions could be satisfied, and some images better than 0.3 arcsec were obtained in the near-infrared.
    The first astronomers to use the telescope in visitor mode will be Professors Immo Appenzeller (Heidelberg, Germany; "Photo-polarimetry of pulsars") and George Miley (Leiden, The Netherlands; "Distant radio galaxies") with their respective team colleagues. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory. Note also the dedicated web area with VLT Information.

  9. Overview of the JPEG XS objective evaluation procedures

    NASA Astrophysics Data System (ADS)

    Willème, Alexandre; Richter, Thomas; Rosewarne, Chris; Macq, Benoit

    2017-09-01

    JPEG XS is a standardization activity conducted by the Joint Photographic Experts Group (JPEG), formally ISO/IEC JTC 1/SC 29/WG 1, that aims at standardizing a low-latency, lightweight and visually lossless video compression scheme. This codec is intended to be used in applications where image sequences would otherwise be transmitted or stored in uncompressed form, such as in live production (through SDI or IP transport), display links, or frame buffers. Support for compression ratios ranging from 2:1 to 6:1 allows significant bandwidth and power reduction for signal propagation. This paper describes the objective quality assessment procedures conducted as part of the JPEG XS standardization activity. First, it discusses the objective part of the experiments that led to the technology selection during the 73rd WG1 meeting in late 2016. This assessment consists of PSNR measurements after single and multiple compression-decompression cycles at various compression ratios. After this assessment phase, two proposals among the six responses to the CfP were selected and merged to form the first JPEG XS test model (XSM). The paper then describes the core experiments (CEs) conducted so far on the XSM. These experiments are intended to evaluate its performance in more challenging scenarios, such as insertion of picture overlays and frame editing, to assess the impact of the different algorithmic choices, and to measure the XSM's performance using the HDR-VDP metric.
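
    The single- and multi-generation PSNR assessment can be reproduced in outline. The quantizing stand-in codec in the test is an assumption (no JPEG XS implementation is used here); the point is only the measurement harness:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """PSNR in dB between a reference image and a degraded one."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def generation_psnr(img, encode_decode, n):
    """PSNR of the n-th compression/decompression generation vs the
    original, as used for multi-generation robustness testing."""
    out = img
    for _ in range(n):
        out = encode_decode(out)
    return psnr(img, out)
```

    A codec that is idempotent after the first generation (as lightweight mezzanine codecs aim to be) shows a flat PSNR curve over repeated generations.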

  10. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    PubMed

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu

    2016-12-20

    In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding of JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JPEG2000 outperforms the other methods by achieving the best CR. In the lossy case, JPEG2000 and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JPEG2000 outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that with high CR values the three-dimensional profile of RBCs can be preserved and the morphological and biochemical parameters can still be within the range of reported values.

  11. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios without affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation; our algorithm outperforms existing methods with a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
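
    The paper's quantization-guided binary-tree embedder is not reproduced here, but the underlying quantization-based blind embedding idea can be illustrated with plain quantization index modulation (QIM), a standard primitive; the step size below is an arbitrary choice:

```python
import numpy as np

def qim_embed(coeffs, bits, step=8.0):
    """Embed one bit per coefficient: quantize onto even multiples of
    `step` for bit 0 and the half-step-shifted lattice for bit 1."""
    c = np.asarray(coeffs, float)
    d = np.asarray(bits) * step / 2.0          # dither: 0 or step/2
    return step * np.round((c - d) / step) + d

def qim_extract(coeffs, step=8.0):
    """Blind extraction: decide which lattice each coefficient is
    closest to. Robust to distortion below step/4."""
    r = np.mod(np.asarray(coeffs, float), step)
    return (np.minimum(r, step - r) > np.abs(r - step / 2)).astype(int)
```

    Larger steps trade embedding distortion for robustness, which is the same distortion-robustness trade-off the paper's nested coding atoms expose in scalable form.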

  12. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.

  13. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of Joint Photographic Experts Group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors - Laplacian prior for DCT coefficients, sparsity prior, and graph-signal smoothness prior for image patches - to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
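
    Hard and soft decoding differ only in where, within each quantization bin, the coefficient value is placed. The bin constraint that any soft decoder must respect can be sketched as a projection (illustrative only; the paper's priors are not implemented here):

```python
import numpy as np

def hard_decode(indices, q):
    """Hard decoding: each DCT coefficient is reconstructed at the
    center of its quantization bin (index * step)."""
    return indices * q

def project_to_bins(estimate, indices, q):
    """Soft decoding constraint: a prior-based estimate is projected
    back into the quantization bins the bitstream actually indexed,
    i.e. clipped to [(k - 0.5) q, (k + 0.5) q] per coefficient."""
    lo = (indices - 0.5) * q
    hi = (indices + 0.5) * q
    return np.clip(estimate, lo, hi)
```

    Iterating between a prior-driven estimate and this projection keeps the reconstruction consistent with the compressed bitstream.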

  14. Image steganalysis using Artificial Bee Colony algorithm

    NASA Astrophysics Data System (ADS)

    Sajedi, Hedieh

    2017-09-01

    Steganography is the science of secure communication in which the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of such secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources. As a result, a preprocessing phase is needed to moderate the execution time and computational cost. In this paper, we propose a new feature-based blind steganalysis method for distinguishing stego images from cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm. The ABC algorithm is inspired by honeybees' social behaviour in their search for good food sources. In the proposed method, classifier performance and the dimension of the selected feature vector are evaluated with a wrapper-based approach. The experiments are performed using two large data sets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.
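As a rough illustration of wrapper-style feature selection, the sketch below runs a plain random search over feature subsets with a synthetic fitness function standing in for classifier accuracy; it is not the paper's improved ABC algorithm, and the "relevant" feature set is invented for the example:

```python
import random

def wrapper_fitness(subset, relevant=frozenset({2, 5, 7})):
    # Stand-in for classifier accuracy: rewards covering the synthetic
    # "relevant" features and penalizes large subsets, as wrappers do.
    if not subset:
        return 0.0
    hit = len(subset & relevant) / len(relevant)
    return hit - 0.01 * len(subset)

def random_subset(n_features, rng):
    return {i for i in range(n_features) if rng.random() < 0.5}

rng = random.Random(0)
best, best_fit = set(), 0.0
for _ in range(200):                      # "scout bees" exploring subsets
    cand = random_subset(10, rng)
    fit = wrapper_fitness(cand)
    if fit > best_fit:
        best, best_fit = cand, fit
```

A real ABC variant would additionally refine promising subsets (employed/onlooker bees) rather than sampling blindly.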

  15. Compressing images for the Internet

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.

    1998-01-01

    The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.

  16. Forensic Analysis of Digital Image Tampering

    DTIC Science & Technology

    2004-12-01

    analysis of when each method fails, which Chapter 4 discusses. Finally, a test image containing an invisible watermark using LSB steganography is...2.2 – Example of invisible watermark using Steganography Software F5 ............. 8 Figure 2.3 – Example of copy-move image forgery [12...Figure 3.11 – Algorithm for JPEG Block Technique ....................................................... 54 Figure 3.12 – “Forged” Image with Result

  17. A Unified Steganalysis Framework

    DTIC Science & Technology

    2013-04-01

    contains more than 1800 images of different scenes. In the experiments, we used four JPEG based steganography techniques: Outguess [13], F5 [16], model...also compressed these images again since some of the steganography methods are double compressing the images. Stego-images are generated by embedding...randomly chosen messages (in bits) into 1600 grayscale images using each of the four steganography techniques. A random message length was determined

  18. A secure online image trading system for untrusted cloud environments.

    PubMed

    Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi

    2015-01-01

    In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to the true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain, and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant feature. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.
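The DC term of an 8x8 DCT block is proportional to the block mean, so block means can stand in for the DC-based matching the abstract describes. A minimal sketch on a toy two-block image (illustrative only; the paper's framework also uses moment invariants):

```python
def block_dc_features(pixels, w, h, bs=8):
    """Per-block DC proxy: the 2-D DCT DC term is proportional to the
    block mean, so block means serve as the matching feature here."""
    feats = []
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            s = sum(pixels[(by + y) * w + (bx + x)]
                    for y in range(bs) for x in range(bs))
            feats.append(s / (bs * bs))
    return feats

def l1_distance(f1, f2):
    """Compare a thumbnail's features against a protected image's."""
    return sum(abs(a - b) for a, b in zip(f1, f2))

# Toy 16x8 grayscale image: left 8x8 block dark, right 8x8 block bright.
w, h = 16, 8
img = [(10 if x < 8 else 200) for y in range(h) for x in range(w)]
feats = block_dc_features(img, w, h)
assert feats == [10.0, 200.0]
```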

  19. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    NASA Astrophysics Data System (ADS)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application to the WWW, a medium which would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.

  20. DICOM image integration into an electronic medical record using thin viewing clients

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Langer, Steven G.; Taira, Ricky K.

    1998-07-01

    Purpose -- To integrate radiological DICOM images into our currently existing web-browsable Electronic Medical Record (MINDscape). Over the last five years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A text-based view of this data called the Mini Medical Record (MMR) has been available for three years. MINDscape, unlike the text-based MMR, provides a platform-independent, web browser view of the MIND dataset that can easily be linked to other information resources on the network. We have now added the integration of radiological images into MINDscape through a DICOM webserver. Methods/New Work -- We have integrated a commercial webserver that acts as a DICOM Storage Class Provider for our computed radiography (CR), computed tomography (CT), digital fluoroscopy (DF), magnetic resonance (MR) and ultrasound (US) scanning devices. These images can be accessed through CGI queries or by linking the image server database using ODBC or SQL gateways. This allows the use of dynamic HTML links from MINDscape to the images on the DICOM webserver, so that the radiology reports already resident in the MIND repository can be married with the associated images through the unique examination accession number generated by our Radiology Information System (RIS). The web browser plug-in used provides a wavelet decompression engine (up to 16 bits per pixel) and performs the following image manipulation functions: window/level, flip, invert, sort, rotate, zoom, cine-loop and save as JPEG. Results -- Radiological DICOM image sets (CR, CT, MR and US) are displayed with associated exam reports for referring physicians and clinicians anywhere within the widespread academic medical center on PCs, Macs, X-terminals and Unix computers. This system is also being used for home teleradiology applications. Conclusion -- Radiological DICOM images can be made available medical center-wide to physicians quickly using low-cost, ubiquitous, thin-client browsing technology and wavelet compression.

  1. Helioviewer.org: Browsing Very Large Image Archives Online Using JPEG 2000

    NASA Astrophysics Data System (ADS)

    Hughitt, V. K.; Ireland, J.; Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Schmidt, L.; Wamsler, B.; Beck, J.; Alexanderian, A.; Fleck, B.

    2009-12-01

    As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Recent efforts have resulted in increased performance, dynamic movie generation, and improved support for mobile web browsers. Future functionality will include: support for additional data-sources including RHESSI, SDO, STEREO, and TRACE, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.
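The tiling idea described above (decode and serve only the portions of a very large image the client requests) can be sketched as follows; the 512-pixel tile size and viewport values are assumptions for illustration, not Helioviewer's actual parameters:

```python
def tiles_for_viewport(x, y, vw, vh, tile=512):
    """Indices (col, row) of the tiles intersecting a viewport, so only
    those portions of a large image need to be decoded and served."""
    c0, r0 = x // tile, y // tile
    c1, r1 = (x + vw - 1) // tile, (y + vh - 1) // tile
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# A 1024x768 viewport at offset (300, 200) touches a 3x2 grid of tiles.
assert tiles_for_viewport(300, 200, 1024, 768) == [
    (0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```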

  2. 21 CFR 892.2030 - Medical image digitizer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image digitizer. 892.2030 Section 892.2030 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std.). [63 FR 23387, Apr. 29...

  3. 21 CFR 892.2040 - Medical image hardcopy device.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image hardcopy device. 892.2040 Section 892.2040 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture...

  4. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  5. An RBF-based compression method for image-based relighting.

    PubMed

    Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung

    2006-04-01

    In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.

  6. Modeling of video compression effects on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Preece, Bradley; Espinola, Richard L.

    2009-05-01

    The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
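The residual-noise step described above (difference between the blur-equivalent uncompressed frame and the corresponding compressed frame, treated as 3-D spatio-temporal noise) can be sketched on toy data; the frames here are tiny invented sequences, not NVESD test imagery:

```python
def residual_noise(blurred_frames, compressed_frames):
    """Per-pixel residual between the blur-equivalent uncompressed video
    and the compressed video; treated as 3-D spatio-temporal noise."""
    return [[b - c for b, c in zip(bf, cf)]
            for bf, cf in zip(blurred_frames, compressed_frames)]

def noise_sigma(frames):
    """Standard deviation of the residual over all pixels and frames."""
    vals = [v for f in frames for v in f]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

blurred    = [[100, 102], [101, 99]]   # toy 2-pixel, 2-frame sequence
compressed = [[ 98, 104], [103, 97]]
noise = residual_noise(blurred, compressed)
assert noise == [[2, -2], [-2, 2]]
```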

  7. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
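A minimal sketch of the quantize/dequantize round trip that produces this distortion (uniform scalar quantization with a hypothetical step; actual JPEG uses a per-frequency quantization table):

```python
def quantize(coeff, q):
    """Map a transform coefficient to its bin index (what gets encoded)."""
    return round(coeff / q)

def dequantize(index, q):
    """Reconstruct at the bin centre; the difference from the original
    coefficient is the quantization distortion."""
    return index * q

c, q = 37.4, 16                      # hypothetical coefficient and step
idx = quantize(c, q)
recon = dequantize(idx, q)
assert abs(c - recon) <= q / 2       # error is bounded by half a step
```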

  8. Phoenix Telemetry Processor

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice

    2013-01-01

    Phxtelemproc is a C/C++ based telemetry processing program that processes SFDU telemetry packets from the Telemetry Data System (TDS). It generates Experiment Data Records (EDRs) for several instruments including surface stereo imager (SSI); robotic arm camera (RAC); robotic arm (RA); microscopy, electrochemistry, and conductivity analyzer (MECA); and the optical microscope (OM). It processes both uncompressed and compressed telemetry, and incorporates unique subroutines for the following compression algorithms: JPEG Arithmetic, JPEG Huffman, Rice, LUT3, RA, and SX4. This program was in the critical path for the daily command cycle of the Phoenix mission. The products generated by this program were part of the RA commanding process, as well as the SSI, RAC, OM, and MECA image and science analysis process. Its output products were used to advance science of the near polar regions of Mars, and were used to prove that water is found in abundance there. Phxtelemproc is part of the MIPL (Multi-mission Image Processing Laboratory) system. This software produced Level 1 products used to analyze images returned by in situ spacecraft. It ultimately assisted in operations, planning, commanding, science, and outreach.

  9. Client-Side Image Maps: Achieving Accessibility and Section 508 Compliance

    ERIC Educational Resources Information Center

    Beasley, William; Jarvis, Moana

    2004-01-01

    Image maps are a means of making a picture "clickable", so that different portions of the image can be hyperlinked to different URLS. There are two basic types of image maps: server-side and client-side. Besides requiring access to a CGI on the server, server-side image maps are undesirable from the standpoint of accessibility--creating…

  10. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on predicting noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will undergo at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide a more than two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
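Adaptively setting a scaling factor for the default JPEG quantization table can be sketched as below; the base row is the first row of the standard's default luminance table (Annex K), while the scale values are arbitrary examples, not the paper's adaptive choices:

```python
def scale_quant_table(table, scale):
    """Scale a base quantization table: larger steps mean coarser
    quantization and higher compression (clamped to JPEG's 1..255 range)."""
    return [min(255, max(1, round(q * scale))) for q in table]

# First row of the default JPEG luminance quantization table.
base_row = [16, 11, 10, 16, 24, 40, 51, 61]
coarse = scale_quant_table(base_row, 2.0)
assert coarse == [32, 22, 20, 32, 48, 80, 102, 122]
fine = scale_quant_table(base_row, 0.05)
assert min(fine) >= 1                    # clamping keeps steps valid
```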

  11. Confidential storage and transmission of medical image data.

    PubMed

    Norcen, R; Podesser, M; Pommer, A; Schmidt, H-P; Uhl, A

    2003-05-01

    We discuss computationally efficient techniques for confidential storage and transmission of medical image data. Two types of partial encryption techniques based on AES are proposed. The first encrypts a subset of bitplanes of plain image data whereas the second encrypts parts of the JPEG2000 bitstream. We find that encrypting between 20% and 50% of the visual data is sufficient to provide high confidentiality.
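A bitplane-selective encryption step can be sketched as follows; note the keystream here is a seeded PRNG used purely as a stand-in for AES, so the sketch is illustrative only and not secure:

```python
import random

def encrypt_bitplanes(pixels, planes, key):
    """XOR the selected bitplanes of 8-bit pixels with a keystream.
    The seeded PRNG stands in for AES (illustration only, not secure)."""
    mask = 0
    for p in planes:
        mask |= 1 << p
    rng = random.Random(key)
    return [px ^ (rng.getrandbits(8) & mask) for px in pixels]

pixels = [200, 17, 128, 63]
planes = [5, 6, 7]                 # the most significant bitplanes
cipher = encrypt_bitplanes(pixels, planes, key=42)
# Untouched low bitplanes pass through; decryption is the same operation.
assert all((c & 0x1F) == (p & 0x1F) for c, p in zip(cipher, pixels))
assert encrypt_bitplanes(cipher, planes, key=42) == pixels
```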

  12. Basic Investigation on Medical Ultrasonic Echo Image Compression by JPEG2000 - Availability of Wavelet Transform and ROI Method

    DTIC Science & Technology

    2001-10-25

    Table III. In spite of the same quality in ROI, it is decided that the images in the cases where QF is 1.3, 1.5 or 2.0 are not good for diagnosis. Of...but (b) is not good for diagnosis by decision of ultrasonographer. Results reveal that wavelet transform achieves higher quality of image compared

  13. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach, which is based on an energy function, was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen for comparison. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  14. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach, which is based on an energy function, was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen for comparison. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
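The codebook-training phase can be illustrated with plain k-means on toy 2-D vectors (the paper modifies k-means with an energy function; this is the textbook variant, shown only to make the vector quantization step concrete):

```python
def train_codebook(vectors, k, iters=10):
    """Plain k-means codebook training: alternate nearest-codeword
    assignment and centroid update."""
    codebook = vectors[:k]                       # naive initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(v, codebook[j])))
            clusters[i].append(v)
        codebook = [
            [sum(c) / len(cl) for c in zip(*cl)] if cl else codebook[i]
            for i, cl in enumerate(clusters)]
    return codebook

# Two well-separated groups of 2-D vectors yield two centroids.
vecs = [[0, 0], [1, 1], [0, 1], [10, 10], [11, 11], [10, 11]]
cb = sorted(train_codebook(vecs, 2))
assert cb == [[1/3, 2/3], [31/3, 32/3]]
```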

  15. Mount Shasta Snowpack

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Full-size images June 17, 2001 (2.0 MB JPEG) June 14, 2000 (2.1 MB JPEG) Light snowfall in the winter of 2000-01 led to a dry summer in the Pacific Northwest. The drought led to a conflict between farmers and fishing communities in the Klamath River Basin over water rights, and a series of forest fires in Washington, Oregon, and Northern California. The pair of images above, both acquired by the Enhanced Thematic Mapper Plus (ETM+) aboard the Landsat 7 satellite, show the snowpack on Mt. Shasta in June 2000 and 2001. On June 14, 2000, the snow extends to the lower slopes of the 4,317-meter (14,162-foot) volcano. At nearly the same time this year (June 17, 2001) the snow had retreated well above the tree-line. The drought in the region was categorized as moderate to severe by the National Oceanographic and Atmospheric Administration (NOAA), and the United States Geological Survey (USGS) reported that streamflow during June was only about 25 percent of the average. Above and to the left of Mt. Shasta is Lake Shastina, a reservoir which is noticeably lower in the 2001 image than in the 2000 image. Images courtesy USGS EROS Data Center and the Landsat 7 Science Team

  16. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    A combine harvester usually works in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The video data are compressed with the JPEG image compression standard, and the monitoring stream is transferred to a remote monitoring center over the network for long-range monitoring and management. The paper first describes the necessity of the system, then briefly introduces the hardware and software implementation, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the tests, the remote video monitoring system achieved 30 fps at a resolution of 800x600, with a response delay over the public network of about 40 ms.

  17. 76 FR 62134 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2013) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-06

    ... Resident. We will not accept group or family photographs; you must include a separate photograph for each... new digital image: The image file format must be in the Joint Photographic Experts Group (JPEG) format... Web site four to six weeks before the scheduled interviews with U.S. consular officers at overseas...

  18. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  19. Comparison of approaches for mobile document image analysis using server supported smartphones

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers and have resource limitations. One approach to overcoming these limitations is to perform the resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is Optical Character Recognition (OCR), which is used to extract the text in images captured by the mobile phone. In this study, our goal is to compare the in-phone and remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. In the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. In the remote-server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote-server approach overall outperforms the in-phone approach in terms of the selected speed and correct-recognition metrics, provided the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of speed with acceptable correct-recognition metrics.
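The trade-off the authors measure (remote OCR speed-up versus upload time and network delay) reduces to a simple comparison; the numbers below are invented for illustration, not the study's measurements:

```python
def offload_is_faster(local_ocr_s, remote_ocr_s, image_bytes,
                      uplink_bytes_per_s, rtt_s):
    """Remote OCR wins only if its speed-up covers upload time + latency."""
    remote_total = remote_ocr_s + image_bytes / uplink_bytes_per_s + rtt_s
    return remote_total < local_ocr_s

# 2 MB photo, 1 MB/s uplink, 0.2 s round trip: 3 + 2 + 0.2 = 5.2 s remote.
assert offload_is_faster(12.0, 3.0, 2_000_000, 1_000_000, 0.2)
# With a fast phone (4 s local OCR) the same upload no longer pays off.
assert not offload_is_faster(4.0, 3.0, 2_000_000, 1_000_000, 0.2)
```

Downscaling the image before upload shrinks `image_bytes`, which is exactly why the paper finds compression so important for the remote-server approach.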

  20. Pine Island Glacier, Antarctica, MISR Multi-angle Composite

    Atmospheric Science Data Center

    2013-12-17

    ...     View Larger Image (JPEG) A large iceberg has finally separated from the calving front ... next due to stereo parallax. This parallax is used in MISR processing to retrieve cloud heights over snow and ice. Additionally, a plume ...

  1. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
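Of the two metrics compared, PSNR is straightforward to compute; a minimal sketch over flattened 8-bit pixel lists (MSSIM involves local luminance, contrast and structure terms and is omitted here):

```python
import math

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, compressed)) / len(original)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

ref = [50, 100, 150, 200]
assert psnr(ref, ref) == float('inf')    # identical images
deg = [52, 98, 150, 200]                 # mse = (4 + 4 + 0 + 0) / 4 = 2
assert abs(psnr(ref, deg) - 10 * math.log10(255 ** 2 / 2)) < 1e-9
```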

  2. Capacity is the Wrong Paradigm

    DTIC Science & Technology

    2002-01-01

    short, steganography values detection over robustness, whereas watermarking values robustness over detection.) Hiding techniques for JPEG images ...world length of the code. D: If the algorithm is known, this method is trivially detectable if we are sending images (with no encryption). If we are...implications of the work of Chaitin and Kolmogorov on algorithmic complexity [5]. We have also concentrated on screen images in this paper and have not

  3. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG concerning higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
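The first step of the pipeline, a single-level DWT splitting the image into four sub-bands, can be sketched with an (unnormalised) Haar transform; the paper does not specify Haar, so this is an illustrative choice on an even-sized toy image:

```python
def haar_dwt2(img):
    """One level of a 2-D Haar transform: split an even-sized grayscale
    image into LL, HL, LH, HH sub-bands (averages and differences)."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            LL[y // 2][x // 2] = (a + b + c + d) / 4   # approximation
            HL[y // 2][x // 2] = (a - b + c - d) / 4   # horizontal detail
            LH[y // 2][x // 2] = (a + b - c - d) / 4   # vertical detail
            HH[y // 2][x // 2] = (a - b - c + d) / 4   # diagonal detail
    return LL, HL, LH, HH

# A flat image has all its energy in LL; the detail sub-bands are zero.
LL, HL, LH, HH = haar_dwt2([[8, 8], [8, 8]])
assert LL == [[8.0]] and HL == [[0.0]] and LH == [[0.0]] and HH == [[0.0]]
```

In the paper's scheme, LL would then be transformed again by DCT while the detail sub-bands go to the Minimize-Matrix-Size Algorithm.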

  4. Recent Advances in Compressed Sensing: Discrete Uncertainty Principles and Fast Hyperspectral Imaging

    DTIC Science & Technology

    2015-03-26

Fourier Analysis and Applications, vol. 14, pp. 838–858, 2008. 11. D. J. Cooke, “A discrete X-ray transform for chromotomographic hyperspectral imaging ... medical imaging, e.g., magnetic resonance imaging (MRI). Since the early 1980s, MRI has granted doctors the ability to distinguish between healthy tissue ... i.e., at most K entries of x are nonzero. In many settings, this is a valid signal model; for example, JPEG2000 exploits the fact that natural images

  5. Study and validation of tools interoperability in JPSEC

    NASA Astrophysics Data System (ADS)

    Conan, V.; Sadourny, Y.; Jean-Marie, K.; Chan, C.; Wee, S.; Apostolopoulos, J.

    2005-08-01

Digital imagery is important in many applications today, and its security is likely to gain in importance in the near future. The emerging international standard ISO/IEC JPEG-2000 Security (JPSEC) is designed to provide security for digital imagery, in particular digital imagery coded with the JPEG-2000 image coding standard. One of the primary goals of a standard is to ensure interoperability between creator and consumer implementations produced by different manufacturers. The JPSEC standard, similar to the popular JPEG and MPEG families of standards, specifies only the bitstream syntax and the receiver's processing, not how the bitstream is created or the details of how it is consumed. This paper examines interoperability for the JPSEC standard and presents an example JPSEC consumption process which can provide insights into the design of JPSEC consumers. Initial interoperability tests between different groups with independently created implementations of JPSEC creators and consumers have been successful in providing the JPSEC security services of confidentiality (via encryption) and authentication (via message authentication codes, or MACs). Further interoperability work is ongoing.

  6. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

This paper eliminates correlated spatial and spectral redundancy with a DWT lifting scheme and reduces image complexity with an algebraic transform among the RGB components. It proposes an improved Rice coding algorithm built on an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking; it supports LOCO-I and can also be applied in a coder/decoder. Simulation analysis indicates that the proposed method achieves high compression: on 24 RGB images provided by KoDa Inc, the lossless compression ratio improves on Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT and JPEG-LS by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively. On a Pentium IV (2.20 GHz CPU, 256 MB RAM), the proposed coder runs about 21 times faster than SPIHT with an efficiency gain of about 166%, and the decoder runs about 17 times faster than SPIHT with an efficiency gain of about 128%.
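Plain Rice coding, the entropy coder the entry above builds on, encodes a nonnegative integer as a unary quotient plus k fixed remainder bits. The sketch below shows standard Rice coding only, not the paper's improved variant:

```python
def rice_encode(n, k):
    """Rice code of a nonnegative integer n with parameter k: the quotient
    n >> k in unary (q ones, then a terminating zero), followed by the k
    low-order remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

def rice_decode(bits, k):
    """Decode a single rice_encode codeword back to the integer."""
    q = bits.index("0")                           # unary quotient length
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0  # fixed remainder bits
    return (q << k) | r
```

For k = 2, the value 9 encodes as `11001`: unary quotient `110` followed by remainder bits `01`. Choosing k near the mean magnitude of the DWT residuals keeps the unary part short, which is why Rice codes suit wavelet-transformed data.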

  7. Thin client (web browser)-based collaboration for medical imaging and web-enabled data.

    PubMed

    Le, Tuong Huu; Malhi, Nadeem

    2002-01-01

Utilizing thin-client software and open-source server technology, a collaborative architecture was implemented allowing for sharing of Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images with real-time markup. Using the Web browser as a thin client integrated with standards-based components, such as DHTML (dynamic hypertext markup language), JavaScript, and Java, collaboration was achieved through a Web server/proxy server combination utilizing Java Servlets and Java Server Pages. A typical collaborative session involved the driver, who directed the navigation of the other collaborators (the passengers) and provided collaborative markups of medical and nonmedical images. The majority of processing was performed on the server side, allowing the client to remain thin and more accessible.

  8. Development of a high-performance image server using ATM technology

    NASA Astrophysics Data System (ADS)

    Do Van, Minh; Humphrey, Louis M.; Ravin, Carl E.

    1996-05-01

The ability to display digital radiographs to a radiologist in a reasonable time has long been the goal of many PACS. Intelligent routing, or pre-fetching of images, has become a solution whereby a system uses a set of rules to route images to a pre-determined destination. Images are then stored locally on a workstation for faster display times. Some PACS use a large, centralized storage approach, and workstations retrieve images over high-bandwidth connections. Another approach to image management is to provide a high-performance, clustered storage system. This has the advantage of eliminating the complexity of pre-fetching and allows for rapid image display from anywhere within the hospital. We discuss the development of such a storage device, which provides extremely fast access to images across a local area network. Among the requirements for development of the image server were high performance, DICOM 3.0 compliance, and the use of industry-standard components. The completed image server provides performance more than sufficient for use in clinical practice. Setting up modalities to send images to the image server is simple due to the adherence to the DICOM 3.0 specification. Using only off-the-shelf components keeps the cost of the server relatively low and allows for easy upgrades as technology advances. These factors make the image server ideal for use as a clustered storage system in a radiology department.

  9. A Steganographic Embedding Undetectable by JPEG Compatibility Steganalysis

    DTIC Science & Technology

    2002-01-01

Abstract. Steganography and steganalysis of digital images is a cat-and-mouse game. In recent work, Fridrich, Goljan and Du introduced a method ... proposed embedding method. 1 Introduction Steganography and steganalysis of digital images is a cat-and-mouse game. Ever since Kurak and McHugh's seminal ... paper on LSB embeddings in images [10], various researchers have published work on either increasing the payload, improving the resistance to

  10. On LSB Spatial Domain Steganography and Channel Capacity

    DTIC Science & Technology

    2008-03-21

reveal the hidden information should not be taken as proof that the image is now clean. The survivability of LSB type spatial domain steganography ... the mindset that JPEG compressing an image is sufficient to destroy the steganography for spatial domain LSB type stego. We agree that JPEGing ... modeling of 2-bit LSB steganography shows that theoretically there is non-zero stego payload possible even though the image has been JPEGed. We wish to

  11. Digital image modification detection using color information and its histograms.

    PubMed

    Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na

    2016-09-01

The rapid development of many open-source and commercial image editing programs makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing detection techniques reported in the literature aim to be robust against JPEG compression, blurring, noise, and other post-processing operations, which are frequently applied to conceal tampering and reduce tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both the copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.
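The color-moment part of the per-block features can be sketched as below. The paper combines these with five further descriptors, which are omitted here, and `color_moments` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def color_moments(block):
    """First three color moments (mean, standard deviation, skewness) per
    channel of an H x W x 3 block, giving a 9-dimensional feature vector."""
    feats = []
    for c in range(block.shape[2]):
        ch = block[:, :, c].astype(np.float64).ravel()
        mu = ch.mean()
        sigma = ch.std()
        skew = np.cbrt(((ch - mu) ** 3).mean())  # signed cube root of 3rd moment
        feats.extend([mu, sigma, skew])
    return np.array(feats)
```

Because copied and moved blocks share nearly identical color distributions, their moment vectors fall close together, which is what lets the clustering step confine the block-matching search to small groups.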

  12. Observer performance assessment of JPEG-compressed high-resolution chest images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David

    1999-05-01

    The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.

  13. A new concept of real-time security camera monitoring with privacy protection by masking moving objects

    NASA Astrophysics Data System (ADS)

    Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa

    2006-02-01

    Recently, monitoring cameras for security have been extensively increasing. However, it is normally difficult to know when and where we are monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems considering the privacy protection. We state requirements for monitoring systems in this framework. We propose a possible implementation that satisfies the requirements. To protect privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image with the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer includes the objects that are unrecognized or invisible. We also introduce in this paper a so-called "special viewer" in order to decrypt and display the original objects. This special viewer can be used by limited users when necessary for crime investigation, etc. The special viewer allows us to choose objects to be decoded and displayed. Moreover, in this proposed system, real-time processing can be performed, since no future frame is needed to generate a bitstream.

  14. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, (1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group) compression, (2) to verify the quality-control feature of DCTune, and (3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships.
(5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
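Result (2) above says the optimized matrices are well modeled by an inverse (inverted) Gaussian with amplitude and width parameters. The abstract does not give the exact formula, so the sketch below is one hypothetical parameterization, not DCTune's actual model: the quantization step rises from 1 at the DC term toward `amplitude` as radial DCT frequency grows, with `width` setting how quickly.

```python
import numpy as np

def inverse_gaussian_qmatrix(amplitude, width, n=8):
    """Hypothetical inverted-Gaussian quantization-matrix model: step size
    grows with radial DCT frequency f as 1 + (A - 1) * (1 - exp(-(f/w)^2))."""
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    f = np.sqrt(u ** 2 + v ** 2)                 # radial frequency index
    return 1.0 + (amplitude - 1.0) * (1.0 - np.exp(-(f / width) ** 2))
```

Under any such two-parameter family, the paper's findings (3) and (4) amount to saying that the fitted `amplitude` and `width` move together, roughly linearly, as the target bit-rate changes.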

  15. A multicenter observer performance study of 3D JPEG2000 compression of thin-slice CT.

    PubMed

    Erickson, Bradley J; Krupinski, Elizabeth; Andriole, Katherine P

    2010-10-01

The goal of this study was to determine the compression level at which 3D JPEG2000 compression of thin-slice CTs of the chest and abdomen-pelvis becomes visually perceptible. A secondary goal was to determine if residents in training and non-physicians are substantially different from experienced radiologists in their perception of compression-related changes. This study used multidetector computed tomography 3D datasets with 0.625-1-mm thickness slices of standard chest, abdomen, or pelvis, clipped to 12 bits. The Kakadu v5.2 JPEG2000 compression algorithm was used to compress and decompress the 80 examinations, creating four sets of images: lossless, 1.5 bpp (8:1), 1 bpp (12:1), and 0.75 bpp (16:1). Two randomly selected slices from each examination were shown to observers using a flicker mode paradigm in which observers rapidly toggled between two images, the original and a compressed version, with the task of deciding whether differences between them could be detected. Six staff radiologists, four residents, and six PhDs experienced in medical imaging (from three institutions) served as observers. Overall, 77.46% of observers detected differences at 8:1, 94.75% at 12:1, and 98.59% at 16:1 compression levels. Across all compression levels, the staff radiologists noted differences 64.70% of the time, the residents detected differences 71.91% of the time, and the PhDs detected differences 69.95% of the time. Even mild compression is perceptible with current technology. The ability to detect differences does not equate to diagnostic differences, although perception of compression artifacts could affect diagnostic decision making and diagnostic workflow.
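The bit rates and ratios quoted above are related through the 12-bit sample depth of the clipped CT data: the compression ratio is simply stored bits per pixel divided by compressed bits per pixel. A one-line check:

```python
def compression_ratio(bits_per_sample, bits_per_pixel):
    """Compression ratio implied by a target bit rate for fixed-depth samples."""
    return bits_per_sample / bits_per_pixel

# 12-bit CT samples: 1.5 bpp -> 8:1, 1 bpp -> 12:1, 0.75 bpp -> 16:1
ratios = [compression_ratio(12, bpp) for bpp in (1.5, 1.0, 0.75)]
```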

  16. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
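Of the three algorithms compared above, the minimum-distance classifier is the simplest to sketch: each pixel vector is assigned to the class whose training-site mean is nearest in Euclidean distance. This is an illustrative version, not the authors' code.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel vector (row of `pixels`, shape N x B) to the index
    of the nearest class mean (rows of `class_means`, shape K x B)."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

Because class means are averages over many training pixels, this decision rule is comparatively insensitive to the smoothing that JPEG compression applies, which is consistent with the paper's observation that the overall classification map survives compression while pixel-to-pixel detail does not.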

  17. Limited distortion in LSB steganography

    NASA Astrophysics Data System (ADS)

    Kim, Younhee; Duric, Zoran; Richards, Dana

    2006-02-01

It is well known that all information hiding methods that modify the least significant bits introduce distortions into the cover objects. Those distortions have been utilized by steganalysis algorithms to detect that the objects had been modified. It has been proposed that only coefficients whose modification does not introduce large distortions should be used for embedding. In this paper we propose an efficient algorithm for information hiding in the LSBs of JPEG coefficients. Our algorithm uses parity coding to choose the coefficients whose modifications introduce minimal additional distortion. We derive the expected value of the additional distortion as a function of the message length and the probability distribution of the JPEG quantization errors of cover images. Our experiments show close agreement between the theoretical prediction and the actual additional distortion.
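The parity-coding idea can be sketched as follows: a message bit is carried by the XOR of the LSBs of a group of coefficients, so at most one coefficient per group changes, and the encoder is free to change the one with the smallest distortion cost. This is an illustrative sketch assuming nonnegative integer coefficients, not the authors' algorithm.

```python
def embed_bit(group, bit, cost):
    """Embed one message bit as the parity (XOR of LSBs) of a coefficient
    group. If the parity already matches the bit, nothing changes; otherwise
    flip the LSB of the coefficient with the smallest distortion cost."""
    out = list(group)
    if sum(c & 1 for c in out) % 2 != bit:
        i = min(range(len(out)), key=lambda j: cost(out[j]))
        out[i] ^= 1  # flip the least significant bit
    return out

def extract_bit(group):
    """Recover the embedded bit from the group parity."""
    return sum(c & 1 for c in group) % 2
```

With a cost function derived from the JPEG quantization error of each coefficient, the encoder always pays the cheapest available distortion per message bit, which is the quantity whose expected value the paper derives.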

  18. VIMOS - a Cosmology Machine for the VLT

    NASA Astrophysics Data System (ADS)

    2002-03-01

Successful Test Observations With Powerful New Instrument at Paranal [1] Summary One of the most fundamental tasks of modern astrophysics is the study of the evolution of the Universe. This is a daunting undertaking that requires extensive observations of large samples of objects in order to produce reasonably detailed maps of the distribution of galaxies in the Universe and to perform statistical analysis. Much effort is now being put into mapping the relatively nearby space and thereby to learn how the Universe looks today. But to study its evolution, we must compare this with how it looked when it still was young. This is possible, because astronomers can "look back in time" by studying remote objects - the larger their distance, the longer the light we now observe has been underway to us, and the longer is thus the corresponding "look-back time". This may sound easy, but it is not. Very distant objects are very dim and can only be observed with large telescopes. Looking at one object at a time would make such a study extremely time-consuming and, in practical terms, impossible. To do it anyhow, we need the largest possible telescope with a highly specialised, exceedingly sensitive instrument that is able to observe a very large number of (faint) objects in the remote universe simultaneously. The VLT VIsible Multi-Object Spectrograph (VIMOS) is such an instrument. It can obtain many hundreds of spectra of individual galaxies in the shortest possible time; in fact, in one special observing mode, up to 6400 spectra of the galaxies in a remote cluster during a single exposure, augmenting the data-gathering power of the telescope by the same proportion. This marvellous science machine has just been installed at the 8.2-m MELIPAL telescope, the third unit of the Very Large Telescope (VLT) at the ESO Paranal Observatory. A main task will be to carry out 3-dimensional mapping of the distant Universe from which we can learn its large-scale structure.
"First light" was achieved on February 26, 2002, and a first series of test observations has successfully demonstrated the huge potential of this amazing facility. Much work on VIMOS is still ahead during the coming months in order to put into full operation and fine-tune the most efficient "galaxy cruncher" in the world. VIMOS is the outcome of a fruitful collaboration between ESO and several research institutes in France and Italy, under the responsibility of the Laboratoire d'Astrophysique de Marseille (CNRS, France). The other partners in the "VIRMOS Consortium" are the Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées, and Observatoire de Haute-Provence in France, and Istituto di Radioastronomia (Bologna), Istituto di Fisica Cosmica e Tecnologie Relative (Milano), Osservatorio Astronomico di Bologna, Osservatorio Astronomico di Brera (Milano) and Osservatorio Astronomico di Capodimonte (Naples) in Italy. PR Photo 09a/02 : VIMOS image of the Antennae Galaxies (centre). PR Photo 09b/02 : First VIMOS Multi-Object Spectrum (full field) PR Photo 09c/02 : The VIMOS instrument on VLT MELIPAL PR Photo 09d/02 : The VIMOS team at "First Light". 
PR Photo 09e/02 : "First Light" image of NGC 5364 PR Photo 09f/02 : Image of the Crab Nebula PR Photo 09g/02 : Image of spiral galaxy NGC 2613 PR Photo 09h/02 : Image of spiral galaxy Messier 100 PR Photo 09i/02 : Image of cluster of galaxies ACO 3341 PR Photo 09j/02 : Image of cluster of galaxies MS 1008.1-1224 PR Photo 09k/02 : Mask design for MOS exposure PR Photo 09l/02 : First VIMOS Multi-Object Spectrum (detail) PR Photo 09m/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" PR Photo 09n/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" (detail) Science with VIMOS ESO PR Photo 09a/02 [Preview - JPEG: 400 x 469 pix - 152k] [Normal - JPEG: 800 x 938 pix - 408k] ESO PR Photo 09b/02 [Preview - JPEG: 400 x 511 pix - 304k] [Normal - JPEG: 800 x 1022 pix - 728k] Caption : PR Photo 09a/02 : One of the first images from the new VIMOS facility, obtained right after the moment of "first light" on February 26, 2002. It shows the famous "Antennae Galaxies" (NGC 4038/39), the result of a recent collision between two galaxies. As an immediate outcome of this dramatic event, stars are born within massive complexes that appear blue in this composite photo, based on exposures through green, orange and red optical filters. PR Photo 09b/02 : Some of the first spectra of distant galaxies obtained with VIMOS in Multi-Object-Spectroscopy (MOS) mode. More than 220 galaxies were observed simultaneously, an unprecedented efficiency for such a "deep" exposure, reaching so far out in space. These spectra allow astronomers to obtain the redshift, a measure of distance, as well as to assess the physical status of the gas and stars in each of these galaxies. A part of this photo is enlarged as PR Photo 09l/02. Technical information about these photos is available below. Other "First Light" images from VIMOS are shown in the photo gallery below.
The next in the long series of front-line instruments to be installed on the ESO Very Large Telescope (VLT), VIMOS (and its complementary, infrared-sensitive counterpart NIRMOS, now in the design stage) will allow mapping of the distribution of galaxies, clusters, and quasars during a time interval spanning more than 90% of the age of the universe. It will let us look back in time to a moment only ~1.5 billion years after the Big Bang (corresponding to a redshift of about 5). Like archaeologists, astronomers can then dig deep into those early ages when the first building blocks of galaxies were still in the process of formation. They will be able to determine when most of the star formation occurred in the universe and how it evolved with time. They will analyse how the galaxies cluster in space, and how this distribution varies with time. Such observations will put important constraints on evolution models, in particular on the average density of matter in the Universe. Mapping the distant universe requires determining the distances of the enormous numbers of remote galaxies seen in deep pictures of the sky, adding depth - the third, indispensable dimension - to the photo. VIMOS offers this capability, and very efficiently. Multi-object spectroscopy is a technique by which many objects are observed simultaneously. VIMOS can observe the spectra of about 1000 galaxies in one exposure, from which redshifts, hence distances, can be measured [2]. The ability to observe two galaxies at once is equivalent to having a telescope twice the size of a VLT Unit Telescope. VIMOS thus effectively "increases" the size of the VLT hundreds of times. From these spectra, the stellar and gaseous content and internal velocities of galaxies can be inferred, forming the basis for detailed physical studies. At present the distances of only a few thousand galaxies and quasars have been measured in the distant universe.
VIMOS aims at observing 100 times more, over one hundred thousand of those remote objects. This will form a solid base for unprecedented and detailed statistical studies of the population of galaxies and quasars in the very early universe. The international VIRMOS Consortium VIMOS is one of two major astronomical instruments to be delivered by the VIRMOS Consortium of French and Italian institutes under a contract signed in the summer of 1997 between the European Southern Observatory (ESO) and the French Centre National de la Recherche Scientifique (CNRS). The participating institutes are: in France: * Laboratoire d'Astrophysique de Marseille (LAM), Observatoire Marseille-Provence (project responsible) * Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées * Observatoire de Haute-Provence (OHP) in Italy: * Istituto di Radioastronomia (IRA-CNR) (Bologna) * Istituto di Fisica Cosmica e Tecnologie Relative (IFCTR) (Milano) * Osservatorio Astronomico di Capodimonte (OAC) (Naples) * Osservatorio Astronomico di Bologna (OABo) * Osservatorio Astronomico di Brera (OABr) (Milano) VIMOS at the VLT: a unique and powerful combination ESO PR Photo 09c/02 ESO PR Photo 09c/02 [Preview - JPEG: 501 x 400 pix - 312k] [Normal - JPEG: 1002 x 800 pix - 840k] Caption : PR Photo 09c/02 shows the new VIMOS instrument on one of the Nasmyth platforms of the 8.2-m VLT MELIPAL telescope at Paranal. VIMOS is installed on the Nasmyth "Focus B" platform of the 8.2-m VLT MELIPAL telescope, cf. PR Photo 09c/02 . It may be compared to four multi-mode instruments of the FORS-type (cf. ESO PR 14/98 ), joined in one stiff structure. The construction of VIMOS has involved the production of large and complex optical elements and their integration in more than 30 remotely controlled, finely moving functions in the instrument. In the configuration employed for the "first light", VIMOS made use of two of its four channels. 
The two others will be put into operation in the next commissioning period during the coming months. However, VIMOS is already now the most efficient multi-object spectrograph in the world, with an equivalent (accumulated) slit length of up to 70 arcmin on the sky. VIMOS has a field-of-view as large as half of the full moon (14 x 16 arcmin² for the four quadrants), the largest sky field to be imaged so far by the VLT. It has excellent sensitivity in the blue region of the spectrum (about 60% more efficient than any other similar instruments in the ultraviolet band), and it is also very sensitive in all other visible spectral regions, all the way to the red limit. But the absolutely unique feature of VIMOS is its capability to take large numbers of spectra simultaneously, leading to exceedingly efficient use of the observing time. Up to about 1000 objects can be observed in a single exposure in multi-slit mode. And no less than 6400 spectra can be recorded with the Integral Field Unit, in which a closely packed fibre-optics bundle can simultaneously observe a continuous sky area measuring no less than 56 x 56 arcsec². A dedicated machine, the Mask Manufacturing Unit (MMU), cuts the slits for the entrance apertures of the spectrograph. The laser is capable of cutting 200 slits in less than 15 minutes. This facility was put into operation at Paranal by the VIRMOS Consortium already in August 2000 and has since been extensively used for observations with the FORS2 instrument; more details are available in ESO PR 19/99. Fast start-up of VIMOS at Paranal ESO PR Photo 09d/02 [Preview - JPEG: 473 x 400 pix - 280k] [Normal - JPEG: 946 x 1209 pix - 728k] ESO PR Photo 09e/02 [Preview - JPEG: 400 x 438 pix - 176k] [Normal - JPEG: 800 x 876 pix - 664k] Caption : PR Photo 09d/02 : The VIRMOS team in the MELIPAL control room, moments after "First Light" on February 26, 2002.
From left to right: Oreste Caputi, Marco Scodeggio, Giovanni Sciarretta, Olivier Le Fevre, Sylvie Brau-Nogue, Christian Lucuix, Bianca Garilli, Markus Kissler-Patig (in front), Xavier Reyes, Michel Saisse, Luc Arnold and Guido Mancini. PR Photo 09e/02 : The spiral galaxy NGC 5364 was the first object to be observed by VIMOS. This false-colour near-infrared, raw "First Light" photo shows the extensive spiral arms. Technical information about this photo is available below. VIMOS was shipped from Observatoire de Haute-Provence (France) at the end of 2001, and reassembled at Paranal during a first period in January 2002. From mid-February, the instrument was made ready for installation on the VLT MELIPAL telescope; this happened on February 24, 2002. VIMOS saw "First Light" just two days later, on February 26, 2002, cf. PR Photo 09e/02. During the same night, a number of excellent images were obtained of various objects, demonstrating the fine capabilities of the instrument in the "direct imaging" mode. The first spectra were successfully taken during the night of March 2 - 3, 2002. The slit masks that were used on this occasion were prepared with dedicated software that also optimizes the object selection, cf. PR Photo 09k/02, and were then cut with the laser machine. From the first try on, the masks have been well aligned on the sky objects. The first observations with large numbers of spectra were obtained shortly thereafter. First accomplishments Images of nearby galaxies, clusters of galaxies, and distant galaxy fields were among the first to be obtained, using the VIMOS imaging mode and demonstrating the excellent efficiency of the instrument; various examples are shown below. The first observations of multi-spectra were performed in a selected sky field in which many faint galaxies are present; it is known as the "VIRMOS-VLT Deep Survey Field at 1000+02".
Thanks to the excellent sensitivity of VIMOS, the spectra of galaxies as faint as (red) magnitude R = 23 (i.e. over 6 million times fainter than what can be perceived with the unaided eye) are visible on exposures lasting only 15 minutes. Some of the first observations with the Integral Field Unit were made of the core of the famous Antennae Galaxies (NGC 4038/39). They will form the basis for a detailed map of the strong emission produced by the current, dramatic collision of the two galaxies. First Images and Spectra from VIMOS - a Gallery The following photos are from a collection of the first images and spectra obtained with VIMOS. See also PR Photos 09a/02, 09b/02 and 09e/02, reproduced above. Technical information about all of them is available below. ESO PR Photo 09f/02 ESO PR Photo 09f/02 [Preview - JPEG: 400 x 469 pix - 224k] [Normal - JPEG: 800 x 937 pix - 544k] [HiRes - JPEG: 2001 x 2343 pix - 3.6M] Caption : PR Photo 09f/02 : The Crab Nebula (Messier 1), as observed by VIMOS. This well-known object is the remnant of a stellar explosion in the year 1054. ESO PR Photo 09g/02 ESO PR Photo 09g/02 [Preview - JPEG: 478 x 400 pix - 184k] [Normal - JPEG: 956 x 1209 pix - 416k] [HiRes - JPEG: 1801 x 1507 pix - 1.4M] Caption : PR Photo 09g/02 : VIMOS photo of NGC 2613, a spiral galaxy that resembles our own Milky Way. ESO PR Photo 09h/02 ESO PR Photo 09h/02 [Preview - JPEG: 400 x 469 pix - 152k] [Normal - JPEG: 800 x 938 pix - 440k] [HiRes - JPEG: 1800 x 2100 pix - 2.0M] Caption : PR Photo 09h/02 : Messier 100 is one of the largest and brightest spiral galaxies in the sky. ESO PR Photo 09i/02 ESO PR Photo 09i/02 [Preview - JPEG: 400 x 405 pix - 144k] [Normal - JPEG: 800 x 810 pix - 312k] Caption : PR Photo 09i/02 : The cluster of galaxies ACO 3341 is located at a distance of about 300 million light-years (redshift z = 0.037), i.e., comparatively nearby in cosmological terms.
It contains a large number of galaxies of different size and brightness that are bound together by gravity. ESO PR Photo 09j/02 ESO PR Photo 09j/02 [Preview - JPEG: 447 x 400 pix - 200k] [Normal - JPEG: 893 x 800 pix - 472k] [HiRes - JPEG: 1562 x 1399 pix - 1.1M] Caption : PR Photo 09j/02 : The distant cluster of galaxies MS 1008.1-1224 is some 3 billion light-years distant (redshift z = 0.301). The galaxies in this cluster - that we observe as they were 3 billion years ago - are different from galaxies in our neighborhood; their stellar populations, on the average, are younger. ESO PR Photo 09k/02 ESO PR Photo 09k/02 [Preview - JPEG: 400 x 455 pix - 280k] [Normal - JPEG: 800 x 909 pix - 696k] Caption : PR Photo 09k/02 : Design of a Mask for Multi-Object Spectroscopy (MOS) observations with VIMOS. The mask serves to block, as far as possible, unwanted background light from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere). During the set-up process for multi-object observations, the VIMOS software optimizes the position of the individual slits in the mask (one for each object for which a spectrum will be obtained) before these are cut. The photo shows an example of this fitting process, with the slit contours superposed on a short pre-exposure of the sky field to be observed. ESO PR Photo 09l/02 ESO PR Photo 09l/02 [Preview - JPEG: 470 x 400 pix - 200k] [Normal - JPEG: 939 x 800 pix - 464k] Caption : PR Photo 09l/02 : First Multi-Object Spectroscopy (MOS) observations with VIMOS; enlargement of a small part of the field shown in PR Photo 09b/02. The light from each galaxy passes through the dedicated slit in the mask (see PR Photo 09k/02 ) and produces a spectrum on the detector. Each vertical rectangle contains the spectrum of one galaxy that is located several billion light-years away. 
The horizontal lines are the strong emission from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere), while the vertical traces are the spectral signatures of the galaxies. The full field contains the spectra of over 220 galaxies that were observed simultaneously, illustrating the great efficiency of this technique. Later, about 1000 spectra will be obtained in one exposure. ESO PR Photo 09m/02 ESO PR Photo 09m/02 [Preview - JPEG: 470 x 400 pix - 264k] [Normal - JPEG: 939 x 800 pix - 720k] Caption : PR Photo 09m/02 : Obtained with the Integral Field Spectroscopy mode of VIMOS. In one single exposure, more than 3000 spectra were taken of the central area of the Antennae Galaxies (PR Photo 09a/02). ESO PR Photo 09n/02 ESO PR Photo 09n/02 [Preview - JPEG: 532 x 400 pix - 320k] [Normal - JPEG: 1063 x 800 pix - 864k] Caption : PR Photo 09n/02 : An enlargement of a small area in PR Photo 09m/02. This observation allows mapping of the distribution of elements like hydrogen (H) and sulphur (S II), for which the signatures are clearly identified in these spectra. The wavelength increases towards the top (arrow). Notes [1]: This is a joint Press Release of ESO, Centre National de la Recherche Scientifique (CNRS) in France, and Consiglio Nazionale delle Ricerche (CNR) and Istituto Nazionale di Astrofisica (INAF) in Italy. [2]: In astronomy, the redshift denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy gives a direct estimate of the apparent recession velocity as caused by the universal expansion. Since the recession velocity increases with distance, it is itself a function (the Hubble relation) of the distance to the object. Technical information about the photos PR Photo 09a/02 : Composite VRI image of NGC 4038/39, obtained on 26 February 2002, in a bright sky (full moon).
Individual exposures of 60 sec each; image quality 0.6 arcsec FWHM; the field measures 3.5 x 3.5 arcmin 2. North is up and East is left. PR Photo 09b/02 : MOS-spectra obtained with two quadrants totalling 221 slits + 6 reference objects (stars placed in square holes to ensure a correct alignment). Exposure time 15 min; LR(red) grism. This is the raw (unprocessed) image of the spectra. PR Photo 09e/02 : A 60 sec i exposure of NGC 5364 on February 26, 2002; image quality 0.6 arcsec FWHM; full moon; 3.5 x 3.5 arcmin 2 ; North is up and East is left. PR Photo 09f/02 : Composite VRI image of Messier 1, obtained on March 4, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09g/02 : Composite VRI image of NGC 2613, obtained on February 28, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09h/02 : Composite VRI image of Messier 100, obtained on March 3, 2002. The individual exposures lasted 180 sec, image quality 0.7 arcsec FWHM; field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09i/02 : R-band image of galaxy cluster ACO 3341, obtained on March 4, 2002. Exposure 300 sec, image quality 0.5 arcsec FWHM;. field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09j/02 : Composite VRI image of the distant cluster of galaxies MS 1008.1-1224. The individual exposures lasted 300 sec; image quality 0.8 arcsec FWHM; field 5 x 3 arcmin 2 ; North is to the right and East is up. PR Photo 09k/02 : Mask design made with the VMMPS tool, overlaying a pre-image. The selected objects are seen at the centre of the yellow squares, where a 1 arcsec slit is cut along the spatial X-axis. The rectangles in white represent the dispersion in wavelength of the spectra along the Y-axis. Masks are cut with the Mask Manufacturing Unit (MMU) built by the Virmos Consortium. 
PR Photo 09l/02 : Enlargement of a small area of PR Photo 09b/02. PR Photo 09m/02 : Spectra of the central area of NGC 4038/39, obtained with the Integral Field Unit on February 26, 2002. The exposure lasted 5 min and was made with the low-resolution red grating. PR Photo 09n/02 : Zoom-in on a small area of PR Photo 09m/02. The strong emission lines of hydrogen (H-alpha) and ionized sulphur (S II) are seen.
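The brightness comparisons quoted in this release (galaxies of magnitude R = 23 being "over 6 million times fainter" than what the unaided eye can see) follow from Pogson's magnitude relation. A quick check, assuming the conventional naked-eye limit of about magnitude 6:

```python
def brightness_ratio(m_faint, m_bright):
    """Flux ratio implied by a magnitude difference (Pogson's relation):
    each magnitude step is a factor of 100 ** (1 / 5) ~ 2.512 in brightness."""
    return 10 ** (0.4 * (m_faint - m_bright))

# R = 23 galaxies vs. a magnitude-6 naked-eye limit
print(round(brightness_ratio(23, 6) / 1e6, 1))  # prints 6.3 (million)
```

The same relation gives the "nearly 100 million times fainter" figure quoted for the magnitude-26 limit of deep surveys.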

  19. JPEG XS call for proposals subjective evaluations

    NASA Astrophysics Data System (ADS)

    McNally, David; Bruylants, Tim; Willème, Alexandre; Ebrahimi, Touradj; Schelkens, Peter; Macq, Benoit

    2017-09-01

    In March 2016 the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, issued a call for proposals soliciting compression technologies for a low-latency, lightweight and visually transparent video compression scheme. Within the JPEG family of standards, this scheme was designated JPEG XS. The subjective evaluation of visually lossless compressed video sequences at high resolutions and bit depths poses particular challenges. This paper describes the adopted procedures, the subjective evaluation setup and the evaluation process, and summarizes the results obtained in the context of the JPEG XS standardization process.

  20. The Capodimonte Deep Field

    NASA Astrophysics Data System (ADS)

    2001-04-01

    A Window towards the Distant Universe Summary The Osservatorio Astronomico Capodimonte Deep Field (OACDF) is a multi-colour imaging survey project that is opening a new window towards the distant universe. It is conducted with the ESO Wide Field Imager (WFI) , a 67-million pixel advanced camera attached to the MPG/ESO 2.2-m telescope at the La Silla Observatory (Chile). As a pilot project at the Osservatorio Astronomico di Capodimonte (OAC) [1], the OACDF aims at providing a large photometric database for deep extragalactic studies, with important by-products for galactic and planetary research. Moreover, it also serves to gather experience in the proper and efficient handling of very large data sets, preparing for the arrival of the VLT Survey Telescope (VST) with the 1 x 1 degree 2 OmegaCam facility. PR Photo 15a/01 : Colour composite of the OACDF2 field . PR Photo 15b/01 : Interacting galaxies in the OACDF2 field. PR Photo 15c/01 : Spiral galaxy and nebulous object in the OACDF2 field. PR Photo 15d/01 : A galaxy cluster in the OACDF2 field. PR Photo 15e/01 : Another galaxy cluster in the OACDF2 field. PR Photo 15f/01 : An elliptical galaxy in the OACDF2 field. The Capodimonte Deep Field ESO PR Photo 15a/01 ESO PR Photo 15a/01 [Preview - JPEG: 400 x 426 pix - 73k] [Normal - JPEG: 800 x 851 pix - 736k] [Hi-Res - JPEG: 3000 x 3190 pix - 7.3M] Caption : This three-colour image of about 1/4 of the Capodimonte Deep Field (OACDF) was obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the la Silla Observatory. It covers "OACDF Subfield no. 2 (OACDF2)" with an area of about 35 x 32 arcmin 2 (about the size of the full moon), and it is one of the "deepest" wide-field images ever obtained. Technical information about this photo is available below. With the comparatively few large telescopes available in the world, it is not possible to study the Universe to its outmost limits in all directions. 
Instead, astronomers try to obtain the most detailed information possible in selected viewing directions, assuming that what they find there is representative for the Universe as a whole. This is the philosophy behind the so-called "deep-field" projects that subject small areas of the sky to intensive observations with different telescopes and methods. The astronomers determine the properties of the objects seen, as well as their distances and are then able to obtain a map of the space within the corresponding cone-of-view (the "pencil beam"). Recent, successful examples of this technique are the "Hubble Deep Field" (cf. ESO PR Photo 26/98 ) and the "Chandra Deep Field" ( ESO PR 05/01 ). In this context, the Capodimonte Deep Field (OACDF) is a pilot research project, now underway at the Osservatorio Astronomico di Capodimonte (OAC) in Napoli (Italy). It is a multi-colour imaging survey performed with the Wide Field Imager (WFI) , a 67-million pixel (8k x 8k) digital camera that is installed at the 2.2-m MPG/ESO Telescope at ESO's La Silla Observatory in Chile. The scientific goal of the OACDF is to provide an important database for subsequent extragalactic, galactic and planetary studies. It will allow the astronomers at OAC - who are involved in the VLT Survey Telescope (VST) project - to gain insight into the processing (and use) of the large data flow from a camera similar to, but four times smaller than the OmegaCam wide-field camera that will be installed at the VST. The field selection for the OACDF was based on the following criteria: * There must be no stars brighter than about 9th magnitude in the field, in order to avoid saturation of the CCD detector and effects from straylight in the telescope and camera. 
No Solar System planets should be near the field during the observations; * It must be located far from the Milky Way plane (at high galactic latitude) in order to reduce the number of galactic stars seen in this direction; * It must be located in the southern sky in order to optimize observing conditions (in particular, the altitude of the field above the horizon), as seen from the La Silla and Paranal sites; * There should be little interstellar material in this direction that may obscure the view towards the distant Universe; * Observations in this field should have been made with the Hubble Space Telescope (HST) that may serve for comparison and calibration purposes. Based on these criteria, the astronomers selected a field measuring about 1 x 1 deg 2 in the southern constellation of Corvus (The Raven). This is now known as the Capodimonte Deep Field (OACDF). The above photo ( PR Photo 15a/01 ) covers one-quarter of the full field (Subfield No. 2 - OACDF2) - some of the objects seen in this area are shown below in more detail. More than 35,000 objects have been found in this area; the faintest are nearly 100 million times fainter than what can be perceived with the unaided eye in the dark sky. Selected objects in the Capodimonte Deep Field ESO PR Photo 15b/01 ESO PR Photo 15b/01 [Preview - JPEG: 400 x 435 pix - 60k] [Normal - JPEG: 800 x 870 pix - 738k] [Hi-Res - JPEG: 3000 x 3261 pix - 5.1M] Caption : Enlargement of the interacting galaxies that are seen in the upper left corner of the OACDF2 field shown in PR Photo 15a/01 . The enlargement covers 1250 x 1130 WFI pixels (1 pixel = 0.24 arcsec), or about 5.0 x 4.5 arcmin 2 in the sky. The lower spiral is itself an interacting double. ESO PR Photo 15c/01 ESO PR Photo 15c/01 [Preview - JPEG: 557 x 400 pix - 93k] [Normal - JPEG: 1113 x 800 pix - 937k] [Hi-Res - JPEG: 3000 x 2156 pix - 4.0M] Caption : Enlargement of a spiral galaxy and a nebulous object in this area.
The field shown covers 1250 x 750 pixels, or about 5 x 3 arcmin 2 in the sky. Note the very red objects next to the two bright stars in the lower-right corner. The colours of these objects are consistent with those of spheroidal galaxies at intermediate distances (redshifts). ESO PR Photo 15d/01 ESO PR Photo 15d/01 [Preview - JPEG: 400 x 530 pix - 68k] [Normal - JPEG: 800 x 1060 pix - 870k] [Hi-Res - JPEG: 2768 x 3668 pix - 6.2M] Caption : A further enlargement of a galaxy cluster of which most members are located in the north-east quadrant (upper left) and have a reddish colour. The nebulous object to the upper left is a dwarf galaxy of spheroidal shape. The red object, located near the centre of the field and resembling a double star, is very likely a gravitational lens [2]. Some of the very red, point-like objects in the field may be distant quasars, very-low mass stars or, possibly, relatively nearby brown dwarf stars. The field shown covers 1380 x 1630 pixels, or 5.5 x 6.5 arcmin 2. ESO PR Photo 15e/01 ESO PR Photo 15e/01 [Preview - JPEG: 400 x 418 pix - 56k] [Normal - JPEG: 800 x 835 pix - 700k] [Hi-Res - JPEG: 3000 x 3131 pix - 5.0M] Caption : Enlargement of a moderately distant galaxy cluster in the south-east quadrant (lower left) of the OACDF2 field. The field measures 1380 x 1260 pixels, or about 5.5 x 5.0 arcmin 2 in the sky. ESO PR Photo 15f/01 ESO PR Photo 15f/01 [Preview - JPEG: 449 x 400 pix - 68k] [Normal - JPEG: 897 x 800 pix - 799k] [Hi-Res - JPEG: 3000 x 2675 pix - 5.6M] Caption : Enlargement of the elliptical galaxy that is located to the west (right) in the OACDF2 field. The numerous tiny objects surrounding the galaxy may be globular clusters. The fuzzy object on the right edge of the field may be a dwarf spheroidal galaxy. The size of the field is about 6 x 5 arcmin 2. 
Technical Information about the OACDF Survey The observations for the OACDF project were performed in three different ESO periods (18-22 April 1999, 7-12 March 2000 and 26-30 April 2000). Some 100 Gbyte of raw data were collected during each of the three observing runs. The first OACDF run was done just after the commissioning of the ESO-WFI. The observational strategy was to perform a 1 x 1 deg 2 short-exposure ("shallow") survey and then a 0.5 x 1 deg 2 "deep" survey. The shallow survey was performed in the B, V, R and I broad-band filters. Four adjacent 30 x 30 arcmin 2 fields, together covering a 1 x 1 deg 2 field in the sky, were observed for the shallow survey. Two of these fields were chosen for the 0.5 x 1 deg 2 deep survey; OACDF2 shown above is one of these. The deep survey was performed in the B, V, R broad-bands and in other intermediate-band filters. The OACDF data are fully reduced and the catalogue extraction has started. A two-processor (500 Mhz each) DS20 machine with 100 Gbyte of hard disk, specifically acquired at the OAC for WFI data reduction, was used. The detailed guidelines of the data reduction, as well as the catalogue extraction, are reported in a research paper that will appear in the European research journal Astronomy & Astrophysics . Notes [1]: The team members are: Massimo Capaccioli, Juan M. Alcala', Roberto Silvotti, Magda Arnaboldi, Vincenzo Ripepi, Emanuella Puddu, Massimo Dall'Ora, Giuseppe Longo and Roberto Scaramella . [2]: This is a preliminary result by Juan Alcala', Massimo Capaccioli, Giuseppe Longo, Mikhail Sazhin, Roberto Silvotti and Vincenzo Testa , based on recent observations with the Telescopio Nazionale Galileo (TNG) which show that the spectra of the two objects are identical. Technical information about the photos PR Photo 15a/01 has been obtained by the combination of the B, V, and R stacked images of the OACDF2 field. 
The total exposure times in the three bands are 2 hours in B and V (12 ditherings of 10 min each were stacked to produce the B and V images) and 3 hours in R (13 ditherings of 15 min each). The mosaic images in the B and V bands were aligned relative to the R-band image and adjusted to a logarithmic intensity scale prior to the combination. The typical seeing was of the order of 1 arcsec in each of the three bands. Preliminary estimates of the three-sigma limiting magnitudes in B, V and R indicate 25.5, 25.0 and 25.0, respectively. More than 35,000 objects are detected above the three-sigma level. PR Photos 15b-f/01 display selected areas of the field shown in PR Photo 15a/01 at the original WFI scale, thereby also demonstrating the enormous amount of information contained in these wide-field images. In all photos, North is up and East is left.

  1. The Pixon Method for Data Compression, Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  2. Digital image forensics for photographic copying

    NASA Astrophysics Data System (ADS)

    Yin, Jing; Fang, Yanmei

    2012-03-01

    Image display technology has developed greatly over the past few decades, making it possible to recapture high-quality images from a display medium such as a liquid crystal display (LCD) screen or a printed paper. Recaptured images are not treated as a separate image class in current digital image forensics research, yet the content of a recaptured image may have been tampered with. In this paper, two sets of features, based on noise and on the traces of double JPEG compression, are proposed to identify such recaptured images. Experimental results show that the proposed features perform well for detecting photographic copying.
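The double-JPEG trace exploited here comes from double quantization of DCT coefficients: re-quantizing values that were already quantized with a different step leaves periodic gaps in the coefficient histogram. A minimal pure-Python illustration of that artifact (the Gaussian coefficient model and the steps q1 = 7, q2 = 4 are hypothetical, not taken from the paper):

```python
import random
from collections import Counter

def jpeg_quantize(values, q):
    """Quantize then dequantize with step q, as JPEG does to each DCT coefficient."""
    return [round(v / q) * q for v in values]

# Hypothetical AC-coefficient model: zero-mean Gaussian values
random.seed(0)
coeffs = [random.gauss(0.0, 40.0) for _ in range(50000)]
q1, q2 = 7, 4  # first (coarser) and second (finer) quantization steps

single = Counter(round(v / q2) for v in coeffs)                     # compressed once
double = Counter(round(v / q2) for v in jpeg_quantize(coeffs, q1))  # recompressed

# Double quantization leaves periodic empty bins in the histogram -- the
# signature that double-JPEG detectors look for.
empty_single = sum(1 for b in range(-10, 11) if single[b] == 0)
empty_double = sum(1 for b in range(-10, 11) if double[b] == 0)
print(empty_single, empty_double)  # the doubly quantized histogram has gaps
```

A real detector would compute such histograms per DCT frequency on the actual image and feed gap statistics to a classifier.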

  3. Hierarchical storage of large volumes of multidetector CT data using distributed servers

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners can generate large numbers of high-resolution images, resulting in very large datasets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images, such as 3D renderings and oblique or curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing, without the need for long-term storage in a PACS archive. With the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called "Bonjour". This architecture offers seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  4. Blocking reduction of Landsat Thematic Mapper JPEG browse images using optimal PSNR estimated spectra adaptive postfiltering

    NASA Technical Reports Server (NTRS)

    Linares, Irving; Mersereau, Russell M.; Smith, Mark J. T.

    1994-01-01

    Two representative sample images of Band 4 of the Landsat Thematic Mapper are compressed with the JPEG algorithm at 8:1, 16:1 and 24:1 compression ratios for experimental browsing purposes. We then apply the Optimal PSNR Estimated Spectra Adaptive Postfiltering (ESAP) algorithm to reduce the DCT blocking distortion. ESAP reduces the blocking distortion while preserving most of the image's edge information by adaptively postfiltering the decoded image using each block's spectral information, which is already obtainable from its DCT coefficients. The algorithm iteratively applies a one-dimensional log-sigmoid weighting function to the separable interpolated local block estimated spectra of the decoded image until it converges to the optimal PSNR with respect to the original, using a 2-D steepest-ascent search. Convergence is obtained in a few iterations for integer parameters. The optimal log-sigmoid parameters are transmitted to the decoder as a negligible amount of overhead data. A unique maximum is guaranteed by the 2-D asymptotic exponential overshoot shape of the surface generated by the algorithm. ESAP is based on a DFT analysis of the DCT basis functions. It is implemented with pixel-by-pixel, spatially adaptive, separable FIR postfilters. Objective PSNR improvements between 0.4 and 0.8 dB are shown, together with the corresponding optimal PSNR adaptive postfiltered images.
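The two ingredients named in the abstract, PSNR as the optimisation objective and a log-sigmoid frequency weighting, can be sketched in a few lines (the cutoff and steepness parameters below are illustrative, not the paper's fitted values):

```python
import math

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def logsig_weight(f, cutoff, steepness):
    """Log-sigmoid low-pass weight for a normalised frequency f in [0, 1]:
    close to 1 below the cutoff, rolling off smoothly above it."""
    return 1.0 / (1.0 + math.exp(steepness * (f - cutoff)))

print(round(psnr([0] * 4, [10] * 4), 2))  # prints 28.13
```

In ESAP's setting, the search adjusts the log-sigmoid parameters and keeps the setting that maximises `psnr` against the original image.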

  5. Integration of radiographic images with an electronic medical record.

    PubMed Central

    Overhage, J. M.; Aisen, A.; Barnes, M.; Tucker, M.; McDonald, C. J.

    2001-01-01

    Radiographic images are important and expensive diagnostic tests. However, the provider caring for the patient often does not review the images directly due to time constraints. Institutions can use picture archiving and communications systems to make images more available to the provider, but this may not be the best solution. We integrated radiographic image review into the Regenstrief Medical Record System in order to address this problem. To achieve adequate performance, we store JPEG compressed images directly in the RMRS. Currently, physicians review about 5% of all radiographic studies using the RMRS image review function. PMID:11825241

  6. [Intranet-based integrated information system of radiotherapy-related images and diagnostic reports].

    PubMed

    Nakamura, R; Sasaki, M; Oikawa, H; Harada, S; Tamakawa, Y

    2000-03-01

    Our aim was to use intranet techniques to develop an information system that simultaneously supports both diagnostic reports and radiotherapy planning images. Using a file server as the gateway, a radiation oncology LAN was connected to an already operative RIS LAN. Dose-distribution images were saved in tagged-image-file format by way of a screen dump to the file server. X-ray simulator images and portal images were saved in encapsulated PostScript format on the file server and automatically converted to portable document format. The files on the file server were automatically registered to the Web server by the search engine and were available for searching and browsing with a Web browser. It took less than a minute to register planning images; for clients, searching and browsing a file took less than 3 seconds. Over 150,000 reports and 4,000 images from a six-month period were accessible. Because intranet techniques were used, construction and maintenance were completed without specialist expertise. This system provides prompt access to essential information about radiotherapy and promotes shared access to radiotherapy planning, which may improve the quality of treatment.

  7. DICOM-compliant PACS with CD-based image archival

    NASA Astrophysics Data System (ADS)

    Cox, Robert D.; Henri, Christopher J.; Rubin, Richard K.; Bret, Patrice M.

    1998-07-01

    This paper describes the design and implementation of a low-cost PACS conforming to the DICOM 3.0 standard. The goal was to provide an efficient image archival and management solution on a heterogeneous hospital network as a basis for filmless radiology. The system follows a distributed, client/server model and was implemented at a fraction of the cost of a commercial PACS. It provides reliable archiving on recordable CD and allows access to digital images throughout the hospital and on the Internet. Dedicated servers have been designed for short-term storage, CD-based archival, data retrieval and remote data access or teleradiology. The short-term storage devices provide DICOM storage and query/retrieve services to scanners and workstations and approximately twelve weeks of 'on-line' image data. The CD-based archival and data retrieval processes are fully automated with the exception of CD loading and unloading. The system employs lossless compression on both short- and long-term storage devices. All servers communicate via the DICOM protocol in conjunction with both local and 'master' SQL patient databases. Records are transferred from the local to the master database independently, ensuring that storage devices will still function if the master database server cannot be reached. The system features rules-based work-flow management and WWW servers to provide multi-platform remote data access. The WWW server system is distributed on the storage, retrieval and teleradiology servers, allowing locally stored image data to be viewed directly in a WWW browser without the need for data transfer to a central WWW server. An independent system monitors disk usage, processes, network and CPU load on each server and reports errors to the image management team via email. The PACS was implemented using a combination of off-the-shelf hardware, freely available software and applications developed in-house.
The system has enabled filmless operation in CT, MR and ultrasound within the radiology department and throughout the hospital. The use of WWW technology has enabled the development of an intuitive web-based teleradiology and image management solution that provides complete access to image data.

  8. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. The QLFW database represents an initial attempt to provide a set of labeled face images spanning a wide range of quality, from no perceived impairment to strong perceived impairment, for face detection and face recognition applications. The types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortion, and to assess human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortion. This will in turn enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of such distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
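Two of the listed distortion types are simple enough to sketch without an imaging library. A toy version operating on a flat list of 8-bit pixel values (the mid-grey pivot of 128 in the contrast model is an assumption for illustration, not necessarily how QLFW was generated):

```python
import random

def add_white_noise(pixels, sigma, seed=0):
    """Additive white Gaussian noise with standard deviation sigma,
    clipped to the 8-bit range [0, 255]."""
    rng = random.Random(seed)  # seeded, so the distortion is reproducible
    return [min(255, max(0, round(p + rng.gauss(0.0, sigma)))) for p in pixels]

def change_contrast(pixels, factor):
    """Global contrast scaling about mid-grey; factor < 1 flattens, > 1 stretches."""
    return [min(255, max(0, round(128 + factor * (p - 128)))) for p in pixels]

print(change_contrast([28, 128, 228], 0.5))  # prints [78, 128, 178]
```

Applying such operators at several severity levels to a clean image set is the usual way a quality-labeled database of this kind is constructed.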

  9. Improved compression technique for multipass color printers

    NASA Astrophysics Data System (ADS)

    Honsinger, Chris

    1998-01-01

    A multipass color printer prints a color image by printing one color plane at a time in a prescribed order; in a four-color system, for example, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data for each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows a color plane's memory to be released while still taking advantage of the correlation between the color planes. The scheme is based on a block-adaptive technique for decorrelating the color planes, followed by lossy spatial compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as the block-adaptive decorrelation operations can be performed efficiently in the DCT domain. The results of the technique are compared to those of applying JPEG to RGB data without a decorrelating transform. In general, the technique improves compression performance over a practical range of compression ratios by at least 30 percent in all test images, and by up to 45 percent in some.
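The idea of block-adaptive inter-plane decorrelation can be sketched with a per-block scalar least-squares predictor: the already-printed plane predicts the current one, and only the low-energy residual would then be JPEG-coded. This scalar form is an illustration only; the paper's actual operations run in the DCT domain:

```python
def decorrelate_block(ref_block, cur_block):
    """Fit cur ~ a * ref over one block (least squares, no intercept)
    and return the predictor gain a and the prediction residual."""
    den = sum(r * r for r in ref_block)
    a = sum(r * c for r, c in zip(ref_block, cur_block)) / den if den else 0.0
    residual = [c - a * r for r, c in zip(ref_block, cur_block)]
    return a, residual

def energy(xs):
    return sum(x * x for x in xs)

# Correlated toy planes: a hypothetical magenta block tracking the cyan block
cyan = [10, 40, 80, 120, 160, 200, 90, 30]
magenta = [0.8 * v + 5 for v in cyan]

a, res = decorrelate_block(cyan, magenta)
print(energy(res) < energy(magenta))  # prints True: residual is far cheaper to code
```

Because `a` adapts per block, the predictor tracks local inter-plane correlation, which is what makes the residual compress better than the raw plane.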

  10. Multi-Class Classification for Identifying JPEG Steganography Embedding Methods

    DTIC Science & Technology

    2008-09-01

    B.H. (2000). STEGANOGRAPHY: Hidden Images, A New Challenge in the Fight Against Child Porn . UPDATE, Volume 13, Number 2, pp. 1-4, Retrieved June 3...Other crimes involving the use of steganography include child pornography where the stego files are used to hide a predator’s location when posting

  11. Another Look at an Enigmatic New World

    NASA Astrophysics Data System (ADS)

    2005-02-01

    VLT NACO Performs Outstanding Observations of Titan's Atmosphere and Surface On January 14, 2005, the ESA Huygens probe arrived at Saturn's largest satellite, Titan. After a faultless descent through the dense atmosphere, it touched down on the icy surface of this strange world from where it continued to transmit precious data back to the Earth. Several of the world's large ground-based telescopes were also active during this exciting event, observing Titan before and near the Huygens encounter, within the framework of a dedicated campaign coordinated by the members of the Huygens Project Scientist Team. Indeed, large astronomical telescopes with state-of-the-art adaptive optics systems allow scientists to image Titan's disc in quite some detail. Moreover, ground-based observations are not restricted to the limited period of the fly-by of Cassini and landing of Huygens. They thus ideally complement the data gathered by this NASA/ESA mission, further optimising the overall scientific return. A group of astronomers [1] observed Titan with ESO's Very Large Telescope (VLT) at the Paranal Observatory (Chile) during the nights from 14 to 16 January, by means of the adaptive optics NAOS/CONICA instrument mounted on the 8.2-m Yepun telescope [2]. The observations were carried out in several modes, resulting in a series of fine images and detailed spectra of this mysterious moon. They complement earlier VLT observations of Titan, cf. ESO Press Photos 08/04 and ESO Press Release 09/04. The highest contrast images ESO PR Photo 04a/05 ESO PR Photo 04a/05 Titan's surface (NACO/VLT) [Preview - JPEG: 400 x 712 pix - 64k] [Normal - JPEG: 800 x 1424 pix - 524k] ESO PR Photo 04b/05 ESO PR Photo 04b/05 Map of Titan's Surface (NACO/VLT) [Preview - JPEG: 400 x 651 pix - 41k] [Normal - JPEG: 800 x 1301 pix - 432k] Caption: ESO PR Photo 04a/05 shows Titan's trailing hemisphere [3] with the Huygens landing site marked as an "X". 
The left image was taken with NACO and a narrow-band filter centred at 2 microns. On the right is the NACO/SDI image of the same location showing Titan's surface through the 1.6 micron methane window. A spherical projection with coordinates on Titan is overplotted. ESO PR Photo 04b/05 is a map of Titan taken with NACO at 1.28 micron (a methane window allowing it to probe down to the surface). On the leading side of Titan, the bright equatorial feature ("Xanadu") is dominating. On the trailing side, the landing site of the Huygens probe is indicated. ESO PR Photo 04c/05 ESO PR Photo 04c/05 Titan, the Enigmatic Moon, and Huygens Landing Site (NACO-SDI/VLT and Cassini/ISS) [Preview - JPEG: 400 x 589 pix - 40k] [Normal - JPEG: 800 x 1178 pix - 290k] Caption: ESO PR Photo 04c/05 is a comparison between the NACO/SDI image and an image taken by Cassini/ISS while approaching Titan. The Cassini image shows the Huygens landing site map wrapped around Titan, rotated to the same position as the January NACO SDI observations. The yellow "X" marks the landing site of the ESA Huygens probe. The Cassini/ISS image is courtesy of NASA, JPL, Space Science Institute (see http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=36222). The coloured lines delineate the regions that were imaged by Cassini at differing resolutions. The lower-resolution imaging sequences are outlined in blue. Other areas have been specifically targeted for moderate and high resolution mosaicking of surface features. These include the site where the European Space Agency's Huygens probe has touched down in mid-January (marked with the yellow X), part of the bright region named Xanadu (easternmost extent of the area covered), and a boundary between dark and bright regions. 
ESO PR Photo 04d/05 ESO PR Photo 04d/05 Evolution of the Atmosphere of Titan (NACO/VLT) [Preview - JPEG: 400 x 902 pix - 40k] [Normal - JPEG: 800 x 1804 pix - 320k] Caption: ESO PR Photo 04d/05 is an image of Titan's atmosphere at 2.12 microns as observed with NACO on the VLT at three different epochs from 2002 until now. Titan's atmosphere exhibits seasonal and meteorological changes which can clearly be seen here: the North-South asymmetry - indicative of changes in the chemical composition in one pole or the other, depending on the season - is now clearly in favour of the North pole. Indeed, the situation has reversed with respect to a few years ago when the South pole was brighter. Also visible in these images is a bright feature in the South pole, found to be presently dimming after having appeared very bright from 2000 to 2003. The differences in size are due to the variation in the distance to Earth of Saturn and its planetary system. The new images show Titan's atmosphere and surface at various near-infrared spectral bands. The surface of Titan's trailing side is visible in images taken through narrow-band filters at wavelengths 1.28, 1.6 and 2.0 microns. They correspond to the so-called "methane windows" which allow one to peer all the way through the lower Titan atmosphere to the surface. On the other hand, Titan's atmosphere is visible through filters centred in the wings of these methane bands, e.g. at 2.12 and 2.17 microns. Eric Gendron of the Paris Observatory in France, leader of the team, is extremely pleased: "We believe that some of these images are the highest-contrast images of Titan ever taken with any ground-based or earth-orbiting telescope." The excellent images of Titan's surface show the location of the Huygens landing site in much detail. In particular, those centred at wavelength 1.6 micron and obtained with the Simultaneous Differential Imager (SDI) on NACO [4] provide the highest contrast and best views. 
This is firstly because the filters match the 1.6 micron methane window most accurately. Secondly, it is possible to get an even clearer view of the surface by subtracting accurately the simultaneously recorded images of the atmospheric haze, taken at wavelength 1.625 micron. The images show the great complexity of Titan's trailing side, which was earlier thought to be very dark. However, it is now obvious that bright and dark regions cover the field of these images. The best resolution achieved on the surface features is about 0.039 arcsec, corresponding to 200 km on Titan. ESO PR Photo 04c/05 illustrates the striking agreement between the NACO/SDI image taken with the VLT from the ground and the ISS/Cassini map. The images of Titan's atmosphere at 2.12 microns show a still-bright south pole with an additional atmospheric bright feature, which may be clouds or some other meteorological phenomena. The astronomers have followed it since 2002 with NACO and notice that it seems to be fading with time. At 2.17 microns, this feature is not visible and the north-south asymmetry - also known as "Titan's smile" - is clearly in favour of the north. The two filters probe different altitude levels and the images thus provide information about the extent and evolution of the north-south asymmetry. Probing the composition of the surface ESO PR Photo 04e/05 ESO PR Photo 04e/05 Spectrum of Two Regions on Titan (NACO/VLT) [Preview - JPEG: 400 x 623 pix - 44k] [Normal - JPEG: 800 x 1246 pix - 283k] Caption: ESO PR Photo 04e/05 represents two of the many spectra obtained on January 16, 2005 with NACO and covering the 2.02 to 2.53 micron range. The blue spectrum corresponds to the brightest region on Titan's surface within the slit, while the red spectrum corresponds to the dark area around the Huygens landing site. 
In the methane band, the two spectra are equal, indicating a similar atmospheric content; in the methane window centred at 2.0 microns, the spectra show differences in brightness, but are in phase. This suggests that there is no real variation in the composition beyond different atmospheric mixings. ESO PR Photo 04f/05 ESO PR Photo 04f/05 Imaging Titan with a Tunable Filter (NACO Fabry-Perot/VLT) [Preview - JPEG: 400 x 718 pix - 44k] [Normal - JPEG: 800 x 1435 pix - 326k] Caption: ESO PR Photo 04f/05 presents a series of images of Titan taken around the 2.0 micron methane window probing different layers of the atmosphere and the surface. The images are currently under thorough processing and analysis so as to reveal any subtle variations in wavelength that could be indicative of the spectral response of the various surface components, thus allowing the astronomers to identify them. Because the astronomers have also obtained spectroscopic data at different wavelengths, they will be able to recover useful information on the surface composition. The Cassini/VIMS instrument explores Titan's surface in the infrared range and, being so close to this moon, it obtains spectra with a much better spatial resolution than what is possible with Earth-based telescopes. However, with NACO at the VLT, the astronomers have the advantage of observing Titan with considerably higher spectral resolution, and thus to gain more detailed spectral information about the composition, etc. The observations therefore complement each other. Once the composition of the surface at the location of the Huygens landing is known from the detailed analysis of the in-situ measurements, it should become possible to learn the nature of the surface features elsewhere on Titan by combining the Huygens results with more extended cartography from Cassini as well as from VLT observations to come. 
More information Results on Titan obtained with data from NACO/VLT are in press in the journal Icarus ("Maps of Titan's surface from 1 to 2.5 micron" by A. Coustenis et al.). Previous images of Titan obtained with NACO and with NACO/SDI are accessible as ESO PR Photos 08/04 and ESO PR Photos 11/04. See also these Press Releases for additional scientific references.
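As a sanity check on the quoted surface resolution, the small-angle relation converts 0.039 arcsec into a physical scale on Titan. The Earth-Titan distance used below (1.2 billion km, roughly 8 AU) is an assumed round number for the January 2005 epoch, not a figure from the release:

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)  # about 4.848e-6 rad per arcsec

def scale_km(resolution_arcsec, distance_km):
    """Physical size subtended by a small angle at a given distance."""
    return resolution_arcsec * ARCSEC_TO_RAD * distance_km

distance_km = 1.2e9  # assumed Earth-Titan distance near the Huygens encounter
size = scale_km(0.039, distance_km)
```

The result comes out near the "about 200 km" stated in the text, with the exact value depending on the true Earth-Saturn distance at the time.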

  12. New procedures to evaluate visually lossless compression for display systems

    NASA Astrophysics Data System (ADS)

    Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim

    2017-09-01

    Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires new evaluation techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for the evaluation of visually lossless coding, and reports new work by JPEG to extend the procedure in two important ways: for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images, i.e., panning, and image sequences. These requirements are the basis for new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
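At scoring time, a forced-choice procedure reduces to tallying whether observers can pick out the coded image above chance. A minimal sketch, assuming a two-alternative forced-choice (2AFC) design with the conventional 75%-correct point taken as one just noticeable difference (the exact criterion and trial structure in ISO/IEC 29170-2 may differ):

```python
def evaluate_2afc(trials, jnd_threshold=0.75):
    """Score forced-choice trials. Each entry is 1 if the observer correctly
    identified the coded image, else 0. Chance performance is 0.5; the 0.75
    point (midway between chance and perfect) is a common 2AFC JND criterion.
    A codec passes as visually lossless if observers stay below it."""
    p_correct = sum(trials) / len(trials)
    return {"p_correct": p_correct, "visually_lossless": p_correct < jnd_threshold}

# Hypothetical observer responses for one image/bit-rate condition
responses = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
result = evaluate_2afc(responses)
```

A real experiment would also apply a statistical test against chance before declaring a condition lossless, since small trial counts make the raw proportion noisy.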

  13. Performance enhancement of a web-based picture archiving and communication system using commercial off-the-shelf server clusters.

    PubMed

    Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) has thoroughly changed the way medical information is communicated and managed. However, as the scale of a hospital's operations increases, the large volume of digital images transferred over the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mixed scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained system availability and image integrity. The server cluster can improve transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.
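The reported percentages are relative ATR improvements of the clustered over the noncluster server. A small sketch with hypothetical baseline rates (the record gives no absolute MB/s figures) reproduces the arithmetic:

```python
def atr_gain_percent(clustered_rate, single_rate):
    """Relative improvement in average transmission rate (ATR)."""
    return 100.0 * (clustered_rate - single_rate) / single_rate

# Hypothetical single-server baselines (MB/s) and clustered rates chosen to
# match the download-scenario gains quoted above (+44.3%, +56.6%, +100.9%).
baseline = {"CR": 10.0, "CT": 8.0, "MR": 5.5}
clustered = {"CR": 14.43, "CT": 12.528, "MR": 11.05}
gains = {k: atr_gain_percent(clustered[k], baseline[k]) for k in baseline}
```

Note the MR gain is the largest in relative terms even though its absolute rate may stay below CR, which is why per-modality baselines matter when comparing cluster configurations.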

  14. Performance Enhancement of a Web-Based Picture Archiving and Communication System Using Commercial Off-the-Shelf Server Clusters

    PubMed Central

    Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) has thoroughly changed the way medical information is communicated and managed. However, as the scale of a hospital's operations increases, the large volume of digital images transferred over the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mixed scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained system availability and image integrity. The server cluster can improve transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment. PMID:24701580

  15. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, examples of the high frequency recovery, and the statistical properties of the filter are given.
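Structurally, such a post-processing filter maps a small neighborhood around each degraded pixel through a feedforward network to an estimate of the original pixel. The sketch below shows only the forward pass with random, untrained weights; the patch size, layer width, and tanh activation are illustrative choices, not the paper's trained topology:

```python
import numpy as np

def extract_patches(img, k=3):
    """Flattened k x k patches around every interior pixel."""
    h, w = img.shape
    r = k // 2
    rows = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            rows.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
    return np.array(rows)

def mlp_filter(patches, W1, b1, W2, b2):
    """One hidden layer with tanh, linear output: patch -> pixel estimate."""
    hidden = np.tanh(patches @ W1 + b1)
    return hidden @ W2 + b2

rng = np.random.default_rng(0)
img = rng.random((16, 16))          # stand-in for a JPEG-decoded image
X = extract_patches(img)            # (196, 9) for 3x3 patches on a 16x16 image
W1 = rng.normal(scale=0.1, size=(9, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)
y = mlp_filter(X, W1, b1, W2, b2)   # one filtered value per interior pixel
```

Training would fit the weights to minimize mean-squared error between `y` and the corresponding uncompressed pixels, which is how the network learns to restore the high frequencies lost to quantization.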

  16. Sharper and Deeper Views with MACAO-VLTI

    NASA Astrophysics Data System (ADS)

    2003-05-01

    "First Light" with Powerful Adaptive Optics System for the VLT Interferometer Summary On April 18, 2003, a team of engineers from ESO celebrated the successful accomplishment of "First Light" for the MACAO-VLTI Adaptive Optics facility on the Very Large Telescope (VLT) at the Paranal Observatory (Chile). This is the second Adaptive Optics (AO) system put into operation at this observatory, following the NACO facility ( ESO PR 25/01 ). The achievable image sharpness of a ground-based telescope is normally limited by the effect of atmospheric turbulence. However, with Adaptive Optics (AO) techniques, this major drawback can be overcome so that the telescope produces images that are as sharp as theoretically possible, i.e., as if they were taken from space. The acronym "MACAO" stands for "Multi Application Curvature Adaptive Optics" which refers to the particular way optical corrections are made which "eliminate" the blurring effect of atmospheric turbulence. The MACAO-VLTI facility was developed at ESO. It is a highly complex system of which four, one for each 8.2-m VLT Unit Telescope, will be installed below the telescopes (in the Coudé rooms). These systems correct the distortions of the light beams from the large telescopes (induced by the atmospheric turbulence) before they are directed towards the common focus at the VLT Interferometer (VLTI). The installation of the four MACAO-VLTI units of which the first one is now in place, will amount to nothing less than a revolution in VLT interferometry . An enormous gain in efficiency will result, because of the associated 100-fold gain in sensitivity of the VLTI. Put in simple words, with MACAO-VLTI it will become possible to observe celestial objects 100 times fainter than now . Soon the astronomers will be thus able to obtain interference fringes with the VLTI ( ESO PR 23/01 ) of a large number of objects hitherto out of reach with this powerful observing technique, e.g. external galaxies. 
The ensuing high-resolution images and spectra will open entirely new perspectives in extragalactic research and also in the studies of many faint objects in our own galaxy, the Milky Way. During the present period, the first of the four MACAO-VLTI facilities was installed, integrated and tested by means of a series of observations. For these tests, an infrared camera was specially developed which allowed a detailed evaluation of the performance. It also provided some first, spectacular views of various celestial objects, some of which are shown here. PR Photo 12a/03 : View of the first MACAO-VLTI facility at Paranal PR Photo 12b/03 : The star HIC 59206 (uncorrected image). PR Photo 12c/03 : HIC 59206 (AO corrected image) PR Photo 12e/03 : HIC 69495 (AO corrected image) PR Photo 12f/03 : 3-D plot of HIC 69495 images (without and with AO correction) PR Photo 12g/03 : 3-D plot of the artificially dimmed star HIC 74324 (without and with AO correction) PR Photo 12d/03 : The MACAO-VLTI commissioning team at "First Light" PR Photo 12h/03 : K-band image of the Galactic Center PR Photo 12i/03 : K-band image of the unstable star Eta Carinae PR Photo 12j/03 : K-band image of the peculiar star Frosty Leo MACAO - the Multi Application Curvature Adaptive Optics facility ESO PR Photo 12a/03 ESO PR Photo 12a/03 [Preview - JPEG: 408 x 400 pix - 56k [Normal - JPEG: 815 x 800 pix - 720k] Captions : PR Photo 12a/03 is a front view of the first MACAO-VLTI unit, now installed at the 8.2-m VLT KUEYEN telescope. Adaptive Optics (AO) systems work by means of a computer-controlled deformable mirror (DM) that counteracts the image distortion induced by atmospheric turbulence. It is based on real-time optical corrections computed from image data obtained by a "wavefront sensor" (a special camera) at very high speed, many hundreds of times each second. 
The ESO Multi Application Curvature Adaptive Optics (MACAO) system uses a 60-element bimorph deformable mirror (DM) and a 60-element curvature wavefront sensor, with a "heartbeat" of 350 Hz (times per second). With this high spatial and temporal correcting power, MACAO is able to nearly restore the theoretically possible ("diffraction-limited") image quality of an 8.2-m VLT Unit Telescope in the near-infrared region of the spectrum, at a wavelength of about 2 µm. The resulting image resolution (sharpness) of the order of 60 milli-arcsec is an improvement by more than a factor of 10 as compared to standard seeing-limited observations. Without the benefit of the AO technique, such image sharpness could only be obtained if the telescope were placed above the Earth's atmosphere. The technical development of MACAO-VLTI in its present form was begun in 1999 and with project reviews at 6 months' intervals, the project quickly reached cruising speed. The effective design is the result of a very fruitful collaboration between the AO department at ESO and European industry which contributed with the diligent fabrication of numerous high-tech components, including the bimorph DM with 60 actuators, a fast-reaction tip-tilt mount and many others. The assembly, tests and performance-tuning of this complex real-time system was assumed by ESO-Garching staff. Installation at Paranal The first crates of the 60+ cubic-meter shipment with MACAO components arrived at the Paranal Observatory on March 12, 2003. Shortly thereafter, ESO engineers and technicians began the painstaking assembly of this complex instrument, below the VLT 8.2-m KUEYEN telescope (formerly UT2). They followed a carefully planned scheme, involving installation of the electronics, water cooling systems, mechanical and optical components. At the end, they performed the demanding optical alignment, delivering a fully assembled instrument one week before the planned first test observations. 
This extra week provided a very welcome and useful opportunity to perform a multitude of tests and calibrations in preparation for the actual observations. AO to the service of Interferometry The VLT Interferometer (VLTI) combines starlight captured by two or more 8.2-m VLT Unit Telescopes (later also from four moveable 1.8-m Auxiliary Telescopes) and vastly increases the image resolution. The light beams from the telescopes are brought together "in phase" (coherently). Starting out at the primary mirrors, they undergo numerous reflections along their different paths over total distances of several hundred meters before they reach the interferometric Laboratory where they are combined to within a fraction of a wavelength, i.e., within nanometers! The gain by the interferometric technique is enormous - combining the light beams from two telescopes separated by 100 metres allows observation of details which could otherwise only be resolved by a single telescope with a diameter of 100 metres. Sophisticated data reduction is necessary to interpret interferometric measurements and to deduce important physical parameters of the observed objects like the diameters of stars, etc., cf. ESO PR 22/02 . The VLTI measures the degree of coherence of the combined beams as expressed by the contrast of the observed interferometric fringe pattern. The higher the degree of coherence between the individual beams, the stronger is the measured signal. By removing wavefront aberrations introduced by atmospheric turbulence, the MACAO-VLTI systems enormously increase the efficiency of combining the individual telescope beams. In the interferometric measurement process, the starlight must be injected into optical fibers which are extremely small in order to accomplish their function: only 6 µm (0.006 mm) in diameter. 
Without the "refocussing" action of MACAO, only a tiny fraction of the starlight captured by the telescopes can be injected into the fibers and the VLTI would not be working at the peak of efficiency for which it has been designed. MACAO-VLTI will now allow a gain of a factor 100 in the injected light flux - this will be tested in detail when two VLT Unit Telescopes, both equipped with MACAO-VLTI's, work together. However, the very good performance actually achieved with the first system makes the engineers very confident that a gain of this order will indeed be reached. This ultimate test will be performed as soon as the second MACAO-VLTI system has been installed later this year. MACAO-VLTI First Light After one month of installation work and following tests by means of an artificial light source installed in the Nasmyth focus of KUEYEN, MACAO-VLTI had "First Light" on April 18 when it received "real" light from several astronomical objects. During the preceding performance tests to measure the image improvement (sharpness, light energy concentration) in near-infrared spectral bands at 1.2, 1.6 and 2.2 µm, MACAO-VLTI was checked by means of a custom-made Infrared Test Camera developed for this purpose by ESO. This intermediate test was required to ensure the proper functioning of MACAO before it is used to feed a corrected beam of light into the VLTI. After only a few nights of testing and optimizing of the various functions and operational parameters, MACAO-VLTI was ready to be used for astronomical observations. The images below were taken under average seeing conditions and illustrate the improvement of the image quality when using MACAO-VLTI. MACAO-VLTI - First Images Here are some of the first images obtained with the test camera at the first MACAO-VLTI system, now installed at the 8.2-m VLT KUEYEN telescope. 
ESO PR Photo 12b/03 ESO PR Photo 12b/03 [Preview - JPEG: 400 x 468 pix - 25k [Normal - JPEG: 800 x 938 pix - 291k] ESO PR Photo 12c/03 ESO PR Photo 12c/03 [Preview - JPEG: 400 x 469 pix - 14k [Normal - JPEG: 800 x 938 pix - 135k] Captions : PR Photos 12b-c/03 show the first image, obtained by the first MACAO-VLTI system at the 8.2-m VLT KUEYEN telescope in the infrared K-band (wavelength 2.2 µm). It displays images of the star HIC 59206 (visual magnitude 10) obtained before (left; Photo 12b/03 ) and after (right; Photo 12c/03 ) the adaptive optics system was switched on. The binary is separated by 0.120 arcsec and the image was taken under medium seeing conditions (0.75 arcsec). The dramatic improvement in image quality is obvious. ESO PR Photo 12d/03 ESO PR Photo 12d/03 [Preview - JPEG: 400 x 427 pix - 18k [Normal - JPEG: 800 x 854 pix - 205k] ESO PR Photo 12e/03 ESO PR Photo 12e/03 [Preview - JPEG: 483 x 400 pix - 17k [Normal - JPEG: 966 x 800 pix - 169k] Captions : PR Photo 12d/03 shows one of the best images obtained with MACAO-VLTI (logarithmic intensity scale). The seeing was 0.8 arcsec at the time of the observations and three diffraction rings can clearly be seen around the star HIC 69495 of visual magnitude 9.9. This pattern is only well visible when the image resolution is very close to the theoretical limit. The exposure of the point-like source lasted 100 seconds through a narrow K-band filter. It has a Strehl ratio (a measure of light concentration) of about 55% and a Full-Width-Half-Maximum (FWHM) of 0.060 arcsec. The 3-D plot ( PR Photo 12e/03 ) demonstrates the tremendous gain in peak intensity of the AO image (right) as compared to the "open-loop" image (the "noise" to the left) obtained without the benefit of AO. 
ESO PR Photo 12f/03 ESO PR Photo 12f/03 [Preview - JPEG: 494 x 400 pix - 20k [Normal - JPEG: 988 x 800 pix - 204k] Caption : PR Photo 12f/03 demonstrates the correction performance of MACAO-VLTI when using a faint guide star. The observed star ( HIC 74324 (stellar spectral type G0 and visual magnitude 9.4) was artificially dimmed by a neutral optical filter to visual magnitude 16.5. The observation was carried out in 0.55 arcsec seeing and with a rather short atmospheric correlation time of 3 milliseconds at visible wavelengths. The Strehl ratio in the 25-second K-band exposure is about 10% and the FWHM is 0.14 arcseconds. The uncorrected image is shown to the left for comparison. The improvement is again impressive, even for a star as faint as this, indicating that guide stars of this magnitude are feasible during future observations. ESO PR Photo 12g/03 ESO PR Photo 12g/03 [Preview - JPEG: 528 x 400 pix - 48k [Normal - JPEG: 1055 x 800 pix - 542k] Captions : PR Photo 12g/03 shows some of the MACAO-VLTI commissioning team members in the VLT Control Room at the moment of "First Light" during the night between April 18-19, 2003. Sitting: Markus Kasper, Enrico Fedrigo - Standing: Robin Arsenault, Sebastien Tordo, Christophe Dupuy, Toomas Erm, Jason Spyromilio, Rob Donaldson (all from ESO). PR Photos 12b-c/03 show the first image in the infrared K-band (wavelength 2.2 µm) of a star (visual magnitude 10) obtained without and with image corrections by means of adaptive optics. PR Photo 12d/03 displays one of the best images obtained with MACAO-VLTI during the early tests. It shows a Strehl ratio (measure of light concentration) that fulfills the specifications according to which MACAO-VLTI was built. This enormous improvement when using AO techniques is clearly demonstrated in PR Photo 12e/03 , with the uncorrected image profile (left) hardly visible when compared to the corrected profile (right). 
PR Photo 12f/03 demonstrates the correction capabilities of MACAO-VLTI when using a faint guide star. Tests using different spectral types showed that the limiting visual magnitude varies between 16 for early-type B-stars and about 18 for late-type M-stars. Astronomical Objects seen at the Diffraction Limit The following examples of MACAO-VLTI observations of two well-known astronomical objects were obtained in order to provisionally evaluate the research opportunities now opening with MACAO-VLTI. They may well be compared with space-based images. The Galactic Center ESO PR Photo 12h/03 ESO PR Photo 12h/03 [Preview - JPEG: 693 x 400 pix - 46k [Normal - JPEG: 1386 x 800 pix - 403k] Caption : PR Photo 12h/03 shows a 90-second K-band exposure of the central 6 x 13 arcsec² around the Galactic Center obtained by MACAO-VLTI under average atmospheric conditions (0.8 arcsec seeing). Although the 14.6 magnitude guide star is located roughly 20 arcsec from the field center - leading to isoplanatic degradation of image sharpness - the present image is nearly diffraction limited and has a point-source FWHM of about 0.115 arcsec. The center of our own galaxy is located in the Sagittarius constellation at a distance of approximately 30,000 light-years. PR Photo 12h/03 shows a short-exposure infrared view of this region, obtained by MACAO-VLTI during the early test phase. Recent AO observations using the NACO facility at the VLT provide compelling evidence that a supermassive black hole with 2.6 million solar masses is located at the very center, cf. ESO PR 17/02 . This result, based on astrometric observations of a star orbiting the black hole and approaching it to within a distance of only 17 light-hours, would not have been possible without images of diffraction limited resolution. 
Eta Carinae ESO PR Photo 12i/03 ESO PR Photo 12i/03 [Preview - JPEG: 400 x 482 pix - 25k [Normal - JPEG: 800 x 963 pix - 313k] Caption : PR Photo 12i/03 displays an infrared narrow K-band image of the massive star Eta Carinae. The image quality is difficult to estimate because the central star saturated the detector, but the clear structure of the diffraction spikes and the size of the smallest features visible in the photo indicate a near-diffraction limited performance. The field measures about 6.5 x 6.5 arcsec². Eta Carinae is one of the heaviest stars known, with a mass that probably exceeds 100 solar masses. It is about 4 million times brighter than the Sun, making it one of the most luminous stars known. Such a massive star has a comparatively short lifetime of about 1 million years only and - measured in the cosmic timescale - Eta Carinae must have formed quite recently. This star is highly unstable and prone to violent outbursts. They are caused by the very high radiation pressure at the star's upper layers, which blows significant portions of the matter at the "surface" into space during violent eruptions that may last several years. The last of these outbursts occurred between 1835 and 1855 and peaked in 1843. Despite its comparatively large distance - some 7,500 to 10,000 light-years - Eta Carinae briefly became the second brightest star in the sky at that time (with an apparent magnitude -1), only surpassed by Sirius. Frosty Leo ESO PR Photo 12j/03 ESO PR Photo 12j/03 [Preview - JPEG: 411 x 400 pix - 22k [Normal - JPEG: 821 x 800 pix - 344k] Caption : PR Photo 12j/03 shows a 5 x 5 arcsec² K-band image of the peculiar star known as "Frosty Leo" obtained in 0.7 arcsec seeing. Although the object is comparatively bright (visual magnitude 11), it is a difficult AO target because of its extension of about 3 arcsec at visible wavelengths. The corrected image quality is about FWHM 0.1 arcsec. 
Frosty Leo is a magnitude 11 post-AGB star surrounded by an envelope of gas, dust, and large amounts of ice (hence the name). The associated nebula has a "butterfly" shape (bipolar morphology), and it is one of the best-known examples of the brief transitional phase between two late evolutionary stages, the asymptotic giant branch (AGB) and the subsequent planetary nebula (PN) stage. For a three-solar-mass object like this one, this phase is believed to last only a few thousand years, the wink of an eye in the life of the star. Hence, objects like this one are very rare, and Frosty Leo is one of the nearest and brightest among them.

  17. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
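The standardization step described above can be illustrated with a minimal sketch (hypothetical names; the paper's actual procedure also involves thermal-visual image registration): z-score normalizing a server's thermal patch removes the offset and gain changes introduced by a different camera position or direction, assuming those changes are approximately affine in intensity.

```python
import numpy as np

def standardize_thermal_patch(patch):
    """Standardize a server's thermal patch to zero mean and unit variance.

    A camera that views the rack from a different position or angle shifts
    the absolute intensity and contrast of the patch; removing mean and
    scale makes downstream features comparable across viewpoints.
    """
    patch = np.asarray(patch, dtype=float)
    std = patch.std()
    if std == 0:                      # flat patch: nothing to scale
        return patch - patch.mean()
    return (patch - patch.mean()) / std

# Two "views" of the same server: the second simulates a camera with a
# different gain (x1.5) and offset (+7 K).
view_a = np.array([[300., 305.], [310., 315.]])
view_b = 1.5 * view_a + 7.0
a, b = standardize_thermal_patch(view_a), standardize_thermal_patch(view_b)
print(np.allclose(a, b))  # prints True: the standardized patches coincide
```

This invariance only holds for affine intensity changes; perspective distortion is what the registration step in the paper handles.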

  18. An implementation of wireless medical image transmission system on mobile devices.

    PubMed

    Lee, SangBock; Lee, Taesoo; Jin, Gyehwan; Hong, Juhyun

    2008-12-01

The advanced technology of computing systems was followed by rapid improvement in medical instrumentation and patient record management systems. Typical examples are the hospital information system (HIS) and the picture archiving and communication system (PACS), which computerized the management of medical records and images in hospitals. Because these systems are built and used within hospitals, doctors outside the hospital cannot access them immediately in emergency cases. To solve this problem, this paper addresses the realization of a system that transmits the images acquired by medical imaging systems in the hospital to remote doctors' handheld PDAs over a CDMA cellular phone network. The system consists of a server and a PDA application. The server was developed to manage the accounts of doctors and patients and to allocate patient images to each doctor. The PDA application was developed to display patient images through a remote server connection. To authenticate individual users, the remote data access (RDA) method was used when the PDA accesses the server database, and the file transfer protocol (FTP) was used to download patient images from the remote server. In laboratory experiments, it took ninety seconds to transmit thirty images with 832 x 488 resolution, 24-bit depth, and 0.37 Mb size. This result showed that the developed system poses no problems for remote doctors receiving and reviewing patient images immediately in emergency cases.
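The laboratory figures above imply an effective link throughput that can be checked with a back-of-envelope calculation. The sketch below assumes the reported "0.37 Mb" per image means megabytes (832 x 488 x 24-bit raw data is roughly 1.2 MB per image, so 0.37 MB is a plausible compressed size); under that assumption the effective rate is about 1 Mbit/s.

```python
# Effective throughput implied by the reported laboratory figures:
# 30 images of 0.37 MB each, transmitted in 90 seconds.
n_images = 30
size_mb = 0.37          # per-image size in megabytes (assumption, see lead-in)
seconds = 90.0

total_megabits = n_images * size_mb * 8      # 88.8 Mbit in total
throughput_mbps = total_megabits / seconds
print(round(throughput_mbps, 2))             # prints 0.99 (Mbit/s)
```

If "Mb" instead meant megabits, the implied rate would be eight times lower, about 0.12 Mbit/s; either figure is within reach of CDMA-era cellular links.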

  19. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and decompression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST's. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  20. Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada

    2009-08-01

Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both spectral and spatial information (useful for incorporating contextual information into the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial-spectral endmember extraction (SSEE) techniques. Experiments are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada, with detailed ground-truth information available from the U.S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
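The linear mixture model underlying the abstract above treats each pixel spectrum as a weighted combination of endmember spectra. A toy sketch (synthetic numbers, not the paper's data): given known endmembers, the fractional abundances of a mixed pixel can be recovered by least squares. Note this simplifies the actual problem, where AMEE/SSEE must first extract the endmembers themselves from the (possibly compressed) image.

```python
import numpy as np

# Columns of E are two hypothetical endmember spectra over 3 bands.
E = np.array([[0.9, 0.1],
              [0.5, 0.4],
              [0.1, 0.8]])
true_abundances = np.array([0.3, 0.7])   # sum-to-one fractional abundances
pixel = E @ true_abundances              # linear mixture model: p = E a

# Recover the abundances by least squares (exact in the noise-free case).
est, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.allclose(est, true_abundances))  # prints True
```

Lossy compression perturbs both `pixel` and the extracted endmembers, which is exactly the error path the paper quantifies.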

  1. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG international standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 international space project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.
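The core trick behind shift-and-add DCT schemes of this kind can be illustrated in isolation (this is an illustration of the general technique, not the paper's actual algorithm): multiplication by an irrational DCT constant such as cos(pi/4) ≈ 0.7071 is replaced by a dyadic approximation, here 181/256, evaluated with shifts and additions only.

```python
import math

def times_cos_pi_4(x):
    """Approximate x * cos(pi/4) for nonnegative integer x using only
    shifts and additions: 181/256, with 181 = 128 + 32 + 16 + 4 + 1."""
    return ((x << 7) + (x << 5) + (x << 4) + (x << 2) + x) >> 8

x = 1000
approx = times_cos_pi_4(x)
exact = x * math.cos(math.pi / 4)
print(approx, round(exact))  # prints 707 707: error under one LSB here
```

Five additions and four shifts replace one floating-point multiply; fixed-point schemes like the paper's fold such constants into the transform structure to reach their quoted ~8 additions per pixel.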

  2. Artifacts in slab average-intensity-projection images reformatted from JPEG 2000 compressed thin-section abdominal CT data sets.

    PubMed

    Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon

    2008-06-01

The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with the peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. The visually lossless threshold for compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
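The attenuation of thin-section artifacts in AIP images has a simple statistical core: averaging N sections reduces the variance of independent errors by a factor of N, which raises PSNR. A toy sketch (synthetic data; Gaussian noise stands in for compression artifacts, which in reality are partly correlated between sections, so the real gain is smaller):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
truth = rng.uniform(0, 255, size=(8, 64, 64))        # 8 thin sections (synthetic)
noisy = truth + rng.normal(0, 5, size=truth.shape)   # per-section "artifacts"

psnr_thin = psnr(truth[0], noisy[0])                       # one thin section
psnr_aip = psnr(truth.mean(axis=0), noisy.mean(axis=0))    # 8-section AIP
print(psnr_aip > psnr_thin)  # prints True: averaging attenuates the errors
```

For 8 sections with independent noise the expected PSNR gain is 10·log10(8) ≈ 9 dB, which is the mechanism behind the study's conclusion.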

  3. Study on an agricultural environment monitoring server system using Wireless Sensor Networks.

    PubMed

    Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun

    2010-01-01

This paper proposes an agricultural environment monitoring server system for monitoring an outdoor agricultural production environment using Wireless Sensor Network (WSN) technology. The proposed system collects outdoor environmental and soil information through WSN-based environmental and soil sensors, image information through CCTVs, and location information through GPS modules. The collected information is stored in a database by the agricultural environment monitoring server, which consists of a sensor manager that handles information collected from the WSN sensors, an image information manager that handles image information collected from the CCTVs, and a GPS manager that processes the location information of the system, and is then provided to producers. In addition, a solar cell-based power supply was implemented so that the server system can be used in agricultural environments with insufficient power infrastructure. The system can monitor outdoor environmental information remotely, and its use can be expected to contribute to increasing crop yields and improving quality in the agricultural field by supporting the decision making of crop producers through analysis of the collected information.
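The manager layout described above (sensor, image, and GPS managers feeding a database) amounts to routing incoming readings by kind to a per-kind store. A minimal sketch with hypothetical names, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    kind: str        # "sensor" | "image" | "gps" (illustrative categories)
    payload: object

class MonitoringServer:
    """Toy stand-in for the monitoring server: each manager is modeled
    as a table keyed by reading kind."""
    def __init__(self):
        self.tables = {"sensor": [], "image": [], "gps": []}

    def ingest(self, reading: Reading):
        # Route the reading to the manager responsible for its kind.
        self.tables[reading.kind].append(reading.payload)

server = MonitoringServer()
server.ingest(Reading("sensor", {"soil_moisture": 0.31}))
server.ingest(Reading("gps", (34.9, 127.5)))
print(len(server.tables["sensor"]), len(server.tables["gps"]))  # prints 1 1
```

In the real system each "table" would be a database table and each manager a long-running service, but the dispatch structure is the same.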

  4. Recent Evolution of the CDS Services - SIMBAD, VizieR and Aladin

    NASA Astrophysics Data System (ADS)

    Genova, F.; Allen, M. G.; Bienayme, O.; Boch, T.; Bonnarel, F.; Cambresy, L.; Derriere, S.; Dubois, P.; Fernique, P.; Lesteven, S.; Loup, C.; Ochsenbein, F.; Schaaff, A.; Vollmer, B.; Wenger, M.; Louys, M.; Jasniewicz, G.; Davoust, E.

    2005-12-01

The Centre de Donnees astronomiques de Strasbourg (CDS) maintains several widely used databases and services. Among significant recent evolutions: - a new version of SIMBAD (SIMBAD 4), based on the PostgreSQL database system, has been developed to replace the current version, which has been operational since 1990. It allows new query and sampling possibilities. For accessing SIMBAD from other applications, a full Web Service will be made available in addition to the client-server program which is presently used as a name resolver by many services. - VizieR, which gives access to major surveys, observation logs and tables published in journals, is continuously updated in collaboration with journals and ground- and space-based observatories. The diversity of information in VizieR makes it an excellent test-bed for the Virtual Observatory, in particular for the definition of astronomy semantics and of a query language, and for the implementation of registries. - a major update of Aladin (Aladin V3 Multiview) was released in April 2005. It integrates in particular a multiview display, image resampling, blinking, access to real pixel values (not only 8 bits), compatibility with common image formats such as GIF, JPEG and PNG, scaling functions for better pixel contrasts, a 'Region of Interest Generator' which automatically builds small views around catalog objects, a cross-match function, the possibility to compute new catalog columns via algebraic expressions, extended script commands for batch-mode use, and access to additional data such as SDSS. Aladin is routinely used as a portal to the Virtual Observatory. Many of the new functions have been prototyped in the frame of the European Astrophysical Virtual Observatory project, and others are being tested for the VO-TECH project.

  5. Volcanoes of the Wrangell Mountains and Cook Inlet region, Alaska: selected photographs

    USGS Publications Warehouse

    Neal, Christina A.; McGimsey, Robert G.; Diggles, Michael F.

    2001-01-01

    Alaska is home to more than 40 active volcanoes, many of which have erupted violently and repeatedly in the last 200 years. This CD-ROM contains 97 digitized color 35-mm images which represent a small fraction of thousands of photographs taken by Alaska Volcano Observatory scientists, other researchers, and private citizens. The photographs were selected to portray Alaska's volcanoes, to document recent eruptive activity, and to illustrate the range of volcanic phenomena observed in Alaska. These images are for use by the interested public, multimedia producers, desktop publishers, and the high-end printing industry. The digital images are stored in the 'images' folder and can be read across Macintosh, Windows, DOS, OS/2, SGI, and UNIX platforms with applications that can read JPG (JPEG - Joint Photographic Experts Group format) or PCD (Kodak's PhotoCD (YCC) format) files. Throughout this publication, the image numbers match among the file names, figure captions, thumbnail labels, and other references. Also included on this CD-ROM are Windows and Macintosh viewers and engines for keyword searches (Adobe Acrobat Reader with Search). At the time of this publication, Kodak's policy on the distribution of color-management files is still unresolved, and so none is included on this CD-ROM. However, using the Universal Ektachrome or Universal Kodachrome transforms found in your software will provide excellent color. In addition to PhotoCD (PCD) files, this CD-ROM contains large (14.2'x19.5') and small (4'x6') screen-resolution (72 dots per inch; dpi) images in JPEG format. These undergo downsizing and compression relative to the PhotoCD images.

  6. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
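The decomposition that the encoder above operates on can be sketched in a few lines: matching pursuit greedily selects, at each step, the dictionary atom most correlated with the current residual, yielding the (position, coefficient) pairs whose joint statistics the paper exploits for coding. This is a sketch of plain matching pursuit over a random toy dictionary, not of the proposed position/coefficient coder itself.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: atoms are unit-norm columns of `dictionary`."""
    residual = signal.astype(float).copy()
    picks = []                                # (atom index "position", coefficient)
    for _ in range(n_atoms):
        corr = dictionary.T @ residual        # correlation with every atom
        idx = int(np.argmax(np.abs(corr)))
        coef = corr[idx]
        picks.append((idx, coef))
        residual -= coef * dictionary[:, idx]  # subtract the chosen component
    return picks, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(16, 40))
D /= np.linalg.norm(D, axis=0)                 # normalize atoms
x = 2.0 * D[:, 3] - 1.0 * D[:, 17]             # sparse combination of two atoms
picks, r = matching_pursuit(x, D, 5)
print(np.linalg.norm(r) < np.linalg.norm(x))   # prints True: energy decreases
```

The encoder then has to transmit the `picks` list compactly, which is where the correlation between atom positions and coefficients becomes relevant.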

  7. TerraLook: Providing easy, no-cost access to satellite images for busy people and the technologically disinclined

    USGS Publications Warehouse

    Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas

    2008-01-01

Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.

  8. TerraLook: Providing easy, no-cost access to satellite images for busy people and the technologically disinclined

    USGS Publications Warehouse

    Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas

    2007-01-01

Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.

  9. About a method for compressing x-ray computed microtomography data

    NASA Astrophysics Data System (ADS)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

The management of scientific data is of high importance, especially for experimental techniques that produce large data volumes. One such technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy); the study covers images acquired from various types of samples. This study covers parallel-beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
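The kind of file-size comparison such a study performs can be sketched with standard-library tools. JPEG-XR itself is not available in the Python standard library, so lossless zlib is used here purely as a stand-in codec; the synthetic smooth "projection" is also an assumption, not beamline data.

```python
import zlib
import numpy as np

# Synthetic smooth projection frame, 16-bit as typical for detector data.
y, x = np.mgrid[0:256, 0:256]
projection = (1000 + 50 * np.sin(x / 20.0)
                   + 30 * np.cos(y / 15.0)).astype(np.uint16)

raw = projection.tobytes()                   # uncompressed size: 256*256*2 bytes
compressed = zlib.compress(raw, level=9)     # stand-in for a real image codec
ratio = len(raw) / len(compressed)
print(ratio > 1.0)  # prints True: smooth projection data compresses well
```

A study like the one above replaces the stand-in codec with JPEG-XR, measures the ratio on real projections, and additionally checks the effect of (lossy) settings on the reconstructed slices.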

  10. The National Institutes of Health Clinical Center Digital Imaging Network, Picture Archival and Communication System, and Radiology Information System.

    PubMed

    Goldszal, A F; Brown, G K; McDonald, H J; Vucich, J J; Staab, E V

    2001-06-01

In this work, we describe the digital imaging network (DIN), picture archival and communication system (PACS), and radiology information system (RIS) currently being implemented at the Clinical Center, National Institutes of Health (NIH). These systems are presently in clinical operation. The DIN is a redundant meshed network designed to address gigabit density and expected high bandwidth requirements for image transfer and server aggregation. The PACS projected workload is 5.0 TB of new imaging data per year. Its architecture consists of a central, high-throughput Digital Imaging and Communications in Medicine (DICOM) data repository and distributed redundant array of inexpensive disks (RAID) servers employing fiber-channel technology for immediate delivery of imaging data. On-demand distribution of images and reports to clinicians and researchers is accomplished via a clustered web server. The RIS follows a client-server model and provides tools to order exams, schedule resources, retrieve and review results, and generate management reports. The RIS-hospital information system (HIS) interfaces include admissions, discharges, and transfers (ADT)/demographics, orders, appointment notifications, doctor updates, and results.

  11. A Powerful Twin Arrives

    NASA Astrophysics Data System (ADS)

    1999-11-01

First Images from FORS2 at VLT KUEYEN on Paranal The first major astronomical instrument to be installed at the ESO Very Large Telescope (VLT) was FORS1 (FOcal Reducer and Spectrograph) in September 1998. Immediately after being attached to the Cassegrain focus of the first 8.2-m Unit Telescope, ANTU, it produced a series of spectacular images, cf. ESO PR 14/98. Many important observations have since been made with this outstanding facility. Now FORS2, its powerful twin, has been installed at the second VLT Unit Telescope, KUEYEN. It is the fourth major instrument at the VLT, after FORS1, ISAAC and UVES. The FORS2 Commissioning Team that is busy installing and testing this large and complex instrument reports that "First Light" was achieved on October 29, 1999, only two days after FORS2 was first mounted at the Cassegrain focus. Since then, various observation modes have been carefully tested, including normal and high-resolution imaging, echelle and multi-object spectroscopy, as well as fast photometry with millisecond time resolution. A number of fine images were obtained during this work, some of which are made available with the present Press Release. The FORS instruments ESO PR Photo 40a/99 Caption to PR Photo 40a/99: This digital photo shows the twin instruments, FORS2 at KUEYEN (in the foreground) and FORS1 at ANTU, seen in the background through the open ventilation doors in the two telescope enclosures. Although they look alike, the two instruments have specific functions, as described in the text. FORS1 and FORS2 are the products of one of the most thorough and advanced technological studies ever made of a ground-based astronomical instrument. They have been specifically designed to investigate the faintest and most remote objects in the universe.
They are "multi-mode instruments" that may be used in several different observation modes. FORS2 is largely identical to FORS1, but there are a number of important differences. For example, it contains a Mask Exchange Unit (MXU) for laser-cut star-plates [1] that may be inserted at the focus, allowing a large number of spectra of different objects, in practice up to about 70, to be taken simultaneously. Highly sophisticated software assigns slits to individual objects in an optimal way, ensuring a high degree of observing efficiency. Instead of the polarimetry optics found in FORS1, FORS2 has new grisms that allow the use of higher spectral resolutions. The FORS project was carried out under ESO contract by a consortium of three German astronomical institutes, the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. The participating institutes have invested a total of about 180 man-years of work in this unique programme. The photos below demonstrate some of the impressive possibilities with this new instrument. They are based on observations with the FORS2 standard resolution collimator (field size 6.8 x 6.8 arcmin = 2048 x 2048 pixels; 1 pixel = 0.20 arcsec). In addition, observations of the Crab pulsar demonstrate a new observing mode, high-speed photometry. Protostar HH-34 in Orion ESO PR Photo 40b/99: The Area around HH-34 in Orion. ESO PR Photo 40c/99: The HH-34 Superjet in Orion (centre). PR Photo 40b/99 shows a three-colour composite of the young object Herbig-Haro 34 (HH-34), now in the protostar stage of evolution. It is based on CCD frames obtained with the FORS2 instrument in imaging mode, on November 2 and 6, 1999.
This object has a remarkable, very complicated appearance that includes two opposite jets that ram into the surrounding interstellar matter. This structure is produced by a machine-gun-like blast of "bullets" of dense gas ejected from the star at high velocities (approaching 250 km/sec). This seems to indicate that the star experiences episodic "outbursts" when large chunks of material fall onto it from a surrounding disk. HH-34 is located at a distance of approx. 1,500 light-years, near the famous Orion Nebula, one of the most productive star birth regions. Note also the enigmatic "waterfall" to the upper left, a feature that is still unexplained. PR Photo 40c/99 is an enlargement of a smaller area around the central object. Technical information: Photo 40b/99 is based on a composite of three images taken through three different filters: B (wavelength 429 nm; Full-Width-Half-Maximum (FWHM) 88 nm; exposure time 10 min; here rendered as blue), H-alpha (centred on the hydrogen emission line at wavelength 656 nm; FWHM 6 nm; 30 min; green) and S II (centred on the emission lines of ionized sulphur at wavelength 673 nm; FWHM 6 nm; 30 min; red) during a period of 0.8 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. N 70 Nebula in the Large Magellanic Cloud ESO PR Photo 40d/99: The N 70 Nebula in the LMC. ESO PR Photo 40e/99: The N 70 Nebula in the LMC (detail). PR Photo 40d/99 shows a three-colour composite of the N 70 nebula.
It is a "Super Bubble" in the Large Magellanic Cloud (LMC), a satellite galaxy of the Milky Way system, located in the southern sky at a distance of about 160,000 light-years. This photo is based on CCD frames obtained with the FORS2 instrument in imaging mode in the morning of November 5, 1999. N 70 is a luminous bubble of interstellar gas, measuring about 300 light-years in diameter. It was created by winds from hot, massive stars and supernova explosions, and the interior is filled with tenuous, hot expanding gas. An object like N 70 provides astronomers with an excellent opportunity to explore the connection between the lifecycles of stars and the evolution of galaxies. Very massive stars profoundly affect their environment. They stir and mix the interstellar clouds of gas and dust, and they leave their mark in the compositions and locations of future generations of stars and star systems. PR Photo 40e/99 is an enlargement of a smaller area of this nebula. Technical information: Photo 40d/99 is based on a composite of three images taken through three different filters: B (429 nm; FWHM 88 nm; 3 min; here rendered as blue), V (554 nm; FWHM 111 nm; 3 min; green) and H-alpha (656 nm; FWHM 6 nm; 3 min; red) during a period of 1.0 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left.
The Crab Nebula in Taurus ESO PR Photo 40f/99: The Crab Nebula in Taurus. ESO PR Photo 40g/99: The Crab Nebula in Taurus (detail). PR Photo 40f/99 shows a three-colour composite of the well-known Crab Nebula (also known as "Messier 1"), as observed with the FORS2 instrument in imaging mode in the morning of November 10, 1999. It is the remnant of a supernova explosion at a distance of about 6,000 light-years, observed almost 1000 years ago, in the year 1054. It contains a neutron star near its center that spins 30 times per second around its axis (see below). PR Photo 40g/99 is an enlargement of a smaller area. More information on the Crab Nebula and its pulsar is available on the web, e.g. at a dedicated website for Messier objects. In this picture, the green light is predominantly produced by hydrogen emission from material ejected by the star that exploded. The blue light is predominantly emitted by very high-energy ("relativistic") electrons that spiral in a large-scale magnetic field (so-called synchrotron emission). It is believed that these electrons are continuously accelerated and ejected by the rapidly spinning neutron star at the centre of the nebula, which is the remnant core of the exploded star. This pulsar has been identified with the lower/right of the two close stars near the geometric center of the nebula, immediately left of the small arc-like feature, best seen in PR Photo 40g/99.
Technical information: Photo 40f/99 is based on a composite of three images taken through three different optical filters: B (429 nm; FWHM 88 nm; 5 min; here rendered as blue), R (657 nm; FWHM 150 nm; 1 min; green) and S II (673 nm; FWHM 6 nm; 5 min; red) during periods of 0.65 arcsec (R, S II) and 0.80 arcsec (B) seeing, respectively. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. The High Time Resolution mode (HIT) of FORS2 ESO PR Photo 40h/99: Time Sequence of the Pulsar in the Crab Nebula. ESO PR Photo 40i/99: Lightcurve of the Pulsar in the Crab Nebula. In combination with the large light collecting power of the VLT Unit Telescopes, the high time resolution (25 nsec = 0.000000025 sec) of the ESO-developed FIERA CCD-detector controller opens a new observing window on celestial objects that undergo light intensity variations on very short time scales. A first implementation of this type of observing mode was tested with FORS2 during the first commissioning phase, by means of one of the most fascinating astronomical objects, the rapidly spinning neutron star in the Crab Nebula. It is also known as the Crab pulsar and is an exceedingly dense object that represents an extreme state of matter - it weighs as much as the Sun, but measures only about 30 km across. The result presented here was obtained in the so-called trailing mode, during which one of the rectangular openings of the Multi-Object Spectroscopy (MOS) assembly within FORS2 is placed in front of the lower end of the field. In this way, the entire surface of the CCD is covered, except the opening in which the object under investigation is positioned.
By rotating this opening, some neighbouring objects (e.g. stars for alignment) may be observed simultaneously. As soon as the shutter is opened, the charges on the chip are progressively shifted upwards, one pixel at a time, until those first collected in the bottom row behind the opening have reached the top row. Then the entire CCD is read out and the digital data with the full image are stored in the computer. In this way, successive images (or spectra) of the object are recorded in the same frame, displaying the intensity variation with time during the exposure. For this observation, the total exposure lasted 2.5 seconds. During this time interval the image of the pulsar (and those of some neighbouring stars) was shifted 2048 times over the 2048 rows of the CCD. Each individual exposure therefore lasted about 1.2 msec (0.0012 sec), corresponding to a nominal time resolution of 2.4 msec (2 pixels). Faster or slower time resolutions are possible by increasing or decreasing the shift and read-out rate [2]. In ESO PR Photo 40h/99 , the continuous lines in the top and bottom half are produced by normal stars of constant brightness, while the series of dots represents the individual pulses of the Crab pulsar, one every 33 milliseconds (i.e. the neutron star rotates around its axis 30 times per second). It is also obvious that these dots are alternately brighter and fainter: they mirror the double-peaked profile of the light pulses, as shown in ESO PR Photo 40i/99 . In this diagram, the time increases along the abscissa axis (1 pixel = 1.2 msec) and the momentary intensity (uncalibrated) is along the ordinate axis. One full revolution of the neutron star corresponds to the distance from one high peak to the next, and the diagram therefore covers six consecutive revolutions (about 200 milliseconds).
Following thorough testing, this new observing mode will make it possible to investigate the brightness variations of this and many other objects in great detail, in order to gain new and fundamental insights into the physical mechanisms that produce the radiation pulses. In addition, high time resolution spectroscopy of rapidly varying phenomena is foreseen. Pushing it to the limits with an 8.2-m telescope like KUEYEN will be a real challenge for the observers, one that will most certainly lead to great and exciting research projects in various fields of modern astrophysics. Technical information: The frame shown in Photo 40h/99 was obtained during a total exposure time of 2.5 sec without any optical filtre. During this time, the charges on the CCD were shifted over 2048 rows; each row was therefore exposed during 1.2 msec. The bright continuous line comes from the star next to the pulsar; the orientation was such that the "observation slit" was placed over two neighbouring stars. Preliminary data reduction: 11 pixels were added across the pulsar image to increase the signal-to-noise ratio, and the background light from the Crab Nebula was subtracted for the same reason. Division by a brighter star (also background-subtracted, but not shown in the image) helped to reduce the influence of the Earth's atmosphere. Notes [1] The masks are produced by the Mask Manufacturing Unit (MMU) built by the VIRMOS Consortium for the VIMOS and NIRMOS instruments that will be installed at the VLT MELIPAL and YEPUN telescopes, respectively. [2] The time resolution achieved during the present test was limited by the maximum charge transfer rate of this particular CCD chip; in the future, FORS2 may be equipped with a new chip with a rate that is up to 20 times faster. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
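The trailing-mode timing quoted above (2.5 s total exposure, 2048 CCD rows, one pulse every 33 ms) can be checked with a short calculation. This is our own sketch, not ESO software; the helper names are invented for illustration.

```python
# Trailing-mode timing arithmetic for the FORS2 HIT observation described
# above (figures from the press release; code is illustrative only).

def row_exposure_ms(total_exposure_s, n_rows):
    """Exposure time per CCD row in milliseconds."""
    return total_exposure_s / n_rows * 1000.0

def pulses_recorded(total_exposure_s, pulse_period_ms):
    """Number of complete pulsar pulses falling within the exposure."""
    return int(total_exposure_s * 1000.0 / pulse_period_ms)

per_row = row_exposure_ms(2.5, 2048)        # ~1.22 ms per row
nominal_resolution = 2 * per_row            # 2 pixels -> ~2.44 ms
n_pulses = pulses_recorded(2.5, 33.0)       # ~75 pulses in one frame
```

The result shows why the release rounds to "1.2 msec": 2.5 s spread over 2048 rows is 1.22 ms per row, and six revolutions span roughly 200 ms, as stated.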

  12. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information in these images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images in the database and highlighting the visual information they have in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
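The bag-of-visual-words comparison used on such a server can be sketched in a few lines. This is a generic illustration with random data and made-up dimensions, not the authors' implementation (which adds inverted files and other state-of-the-art extensions): local descriptors are quantized against a codebook and images are compared by the cosine similarity of their visual-word histograms.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """L2-normalized histogram of nearest-visual-word assignments."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                 # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / (np.linalg.norm(hist) or 1.0)

def similarity(h1, h2):
    """Cosine similarity of two normalized histograms."""
    return float(h1 @ h2)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))           # 8 visual words, 16-D descriptors
query = rng.normal(size=(50, 16))
h_q = bow_histogram(query, codebook)
# A near-duplicate image (tiny descriptor noise) maps to the same words:
h_same = bow_histogram(query + 0.01 * rng.normal(size=query.shape), codebook)
```

Real systems use codebooks with tens of thousands of words and tf-idf weighting, but the histogram-and-cosine core is the same.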

  13. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
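The LLR-weighting step can be sketched as follows. The weighting function here is a made-up placeholder for the SNR-dependent factor the authors design empirically, so treat this as a shape illustration only, not the paper's method.

```python
import numpy as np

def weight_for_snr(snr_db, w_max=4.0, w_min=1.5):
    """Hypothetical weighting factor: larger at low SNR, as the text requires."""
    return max(w_min, w_max - 0.5 * snr_db)

def update_llrs(llrs, known_correct, known_error, snr_db):
    """Scale LLRs of bits the JPEG2000 decoder flagged via error-resilience modes."""
    w = weight_for_snr(snr_db)
    out = llrs.copy()
    out[known_correct] *= w   # reinforce bits known to be correct
    out[known_error] /= w     # damp bits known to be in error
    return out

llrs = np.array([2.0, -1.0, 0.5, -3.0])
new = update_llrs(llrs, known_correct=[0], known_error=[2], snr_db=1.0)
```

The key point mirrored from the abstract is the direction of adaptation: a lower channel SNR yields a larger factor, so source-side knowledge carries more weight exactly when the channel evidence is least reliable.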

  14. New Paranal Views

    NASA Astrophysics Data System (ADS)

    2001-01-01

    Last year saw very good progress at ESO's Paranal Observatory , the site of the Very Large Telescope (VLT). The third and fourth 8.2-m Unit Telescopes, MELIPAL and YEPUN, had "First Light" (cf. PR 01/00 and PR 18/00 ), while the first two, ANTU and KUEYEN, were busy collecting first-class data for hundreds of astronomers. Meanwhile, work continued towards the next phase of the VLT project, the combination of the telescopes into the VLT Interferometer. The test instrument, VINCI (cf. PR 22/00 ), is now being installed in the VLTI Laboratory at the centre of the observing platform on the top of Paranal. Below is a new collection of video sequences and photos that illustrate the latest developments at the Paranal Observatory. They were obtained by the EPR Video Team in December 2000. The photos are available in different formats, including "high-resolution" that is suitable for reproduction purposes. A related ESO Video News Reel for professional broadcasters will soon become available and will be announced via the usual channels. Overview Paranal Observatory (Dec. 2000) Video Clip 02a/01 [MPEG - 4.5Mb] ESO PR Video Clip 02a/01 "Paranal Observatory (December 2000)" (4875 frames/3:15 min) [MPEG Video+Audio; 160x120 pix; 4.5Mb] [MPEG Video+Audio; 320x240 pix; 13.5 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02a/01 shows some of the construction activities at the Paranal Observatory in December 2000, beginning with a general view of the site. Then follow views of the Residencia , a building that has been designed by Architects Auer and Weber in Munich - it integrates very well into the desert, creating a welcome recreational site for staff and visitors in this harsh environment. The next scenes focus on the "stations" for the auxiliary telescopes for the VLTI and the installation of two delay lines in the 140-m long underground tunnel.
    The following part of the video clip shows the start-up of the excavation work for the 2.6-m VLT Survey Telescope (VST) as well as the location known as the "NTT Peak", now under consideration for the installation of the 4-m VISTA telescope. The last images are from the second 8.2-m Unit Telescope, KUEYEN, which has been in full use by the astronomers with the UVES and FORS2 instruments since April 2000. ESO PR Photo 04a/01 ESO PR Photo 04a/01 [Preview - JPEG: 466 x 400 pix - 58k] [Normal - JPEG: 931 x 800 pix - 688k] [Hires - JPEG: 3000 x 2577 pix - 7.6M] Caption : PR Photo 04a/01 shows an afternoon view from the Paranal summit towards East, with the Base Camp and the new Residencia on the slope to the right, above the valley in the shadow of the mountain. ESO PR Photo 04b/01 ESO PR Photo 04b/01 [Preview - JPEG: 791 x 400 pix - 89k] [Normal - JPEG: 1582 x 800 pix - 1.1M] [Hires - JPEG: 3000 x 1517 pix - 3.6M] PR Photo 04b/01 shows the ramp leading to the main entrance to the partly subterranean Residencia , with the steel skeleton for the dome over the central area in place. ESO PR Photo 04c/01 ESO PR Photo 04c/01 [Preview - JPEG: 498 x 400 pix - 65k] [Normal - JPEG: 995 x 800 pix - 640k] [Hires - JPEG: 3000 x 2411 pix - 6.6M] PR Photo 04c/01 is an indoor view of the reception hall under the dome, looking towards the main entrance. ESO PR Photo 04d/01 ESO PR Photo 04d/01 [Preview - JPEG: 472 x 400 pix - 61k] [Normal - JPEG: 944 x 800 pix - 632k] [Hires - JPEG: 3000 x 2543 pix - 5.8M] PR Photo 04d/01 shows the ramps from the reception area towards the rooms. The VLT Interferometer The Delay Lines constitute a most important element of the VLT Interferometer , cf. PR Photos 26a-e/00. At this moment, two Delay Lines are operational on site. A third system will be integrated early this year. The VLTI Delay Line is located in an underground tunnel that is 168 metres long and 8 metres wide.
    This configuration has been designed to accommodate up to eight Delay Lines, including their transfer optics, in an ideal environment: stable temperature, high degree of cleanliness, low levels of straylight, low air turbulence. The positions of the Delay Line carriages are computed to adjust the Optical Path Lengths requested for the fringe pattern observation. The positions are controlled in real time by a laser metrology system, specially developed for this purpose. The position precision is about 20 nm (1 nm = 10^-9 m, or 1 millionth of a millimetre) over a distance of 120 metres. The maximum velocity is 0.50 m/s in positioning mode and 0.05 m/s during operation. The system is designed for 25 years of operation and to survive earthquakes up to magnitude 8.6 on the Richter scale. The VLTI Delay Line is a three-year project, carried out by ESO in collaboration with Dutch Space Holdings (formerly Fokker Space) and TPD-TNO . VLTI Delay Lines (December 2000) - ESO PR Video Clip 02b/01 [MPEG - 3.6Mb] ESO PR Video Clip 02b/01 "VLTI Delay Lines (December 2000)" (2000 frames/1:20 min) [MPEG Video+Audio; 160x120 pix; 3.6Mb] [MPEG Video+Audio; 320x240 pix; 13.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02b/01 shows the Delay Lines of the VLT Interferometer facility at Paranal during tests. One of the carriages is moving on 66-metre long rectified rails, driven by a linear motor. The carriage is equipped with three wheels in order to preserve high guidance accuracy. Another important element is the Cat's Eye that reflects the light from the telescope to the VLT instrumentation. This optical system is made of aluminium (including the mirrors) to avoid thermo-mechanical problems. ESO PR Photo 04e/01 ESO PR Photo 04e/01 [Preview - JPEG: 400 x 402 pix - 62k] [Normal - JPEG: 800 x 804 pix - 544k] [Hires - JPEG: 3000 x 3016 pix - 6.2M] Caption : PR Photo 04e/01 shows one of the 30 "stations" for the movable 1.8-m Auxiliary Telescopes.
    When one of these telescopes is positioned ("parked") on top of it, the light will be guided through the hole towards the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04f/01 ESO PR Photo 04f/01 [Preview - JPEG: 568 x 400 pix - 96k] [Normal - JPEG: 1136 x 800 pix - 840k] [Hires - JPEG: 3000 x 2112 pix - 4.6M] PR Photo 04f/01 shows a general view of the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04g/01 ESO PR Photo 04g/01 [Preview - JPEG: 406 x 400 pix - 62k] [Normal - JPEG: 812 x 800 pix - 448k] [Hires - JPEG: 3000 x 2956 pix - 5.5M] PR Photo 04g/01 shows one of the Delay Line carriages in parking position. The "NTT Peak" The "NTT Peak" is a mountain top located about 2 km to the north of Paranal. It received this name when ESO considered moving the 3.58-m New Technology Telescope from La Silla to this peak. The possibility of installing the 4-m VISTA telescope (cf. PR 03/00 ) on this peak is now being discussed. ESO PR Photo 04h/01 ESO PR Photo 04h/01 [Preview - JPEG: 630 x 400 pix - 89k] [Normal - JPEG: 1259 x 800 pix - 1.1M] [Hires - JPEG: 3000 x 1907 pix - 5.2M] PR Photo 04h/01 shows the view from the "NTT Peak" towards south, with the Paranal mountain and the VLT enclosures in the background. ESO PR Photo 04i/01 ESO PR Photo 04i/01 [Preview - JPEG: 516 x 400 pix - 50k] [Normal - JPEG: 1031 x 800 pix - 664k] [Hires - JPEG: 3000 x 2328 pix - 6.0M] PR Photo 04i/01 is a view towards the "NTT Peak" from the top of the Paranal mountain. The access road and the concrete pillar that was used to support a site testing telescope at the top of this peak are seen. This is the caption to ESO PR Photos 04a-i/01 and PR Video Clips 02a-b/01 . They may be reproduced, if credit is given to the European Southern Observatory. The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory.
The most recent clip was: ESO PR Video Clip 01/01 about the Physics On Stage Festival (11 January 2001) . Information is also available on the web about other ESO videos.

  15. Effects of Digitization and JPEG Compression on Land Cover Classification Using Astronaut-Acquired Orbital Photographs

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Webb, Edward L.; Evangelista, Arlene

    2000-01-01

    Studies that utilize astronaut-acquired orbital photographs for visual or digital classification require high-quality data to ensure accuracy. The majority of images available must be digitized from film and electronically transferred to scientific users. This study examined the effect of scanning spatial resolution (1200, 2400 pixels per inch [21.2 and 10.6 microns/pixel]), scanning density range option (Auto, Full) and compression ratio (non-lossy [TIFF] and lossy JPEG 10:1, 46:1, 83:1) on digital classification results of an orbital photograph from the NASA - Johnson Space Center archive. Qualitative results suggested that 1200 ppi was acceptable for visual interpretive uses for major land cover types. Moreover, the Auto scanning density range was superior to the Full density range. Quantitative assessment of the processing steps indicated that, while 2400 ppi scanning spatial resolution resulted in more classified polygons as well as a substantially greater proportion of polygons < 0.2 ha, overall agreement between 1200 ppi and 2400 ppi was quite high. JPEG compression up to approximately 46:1 also did not appear to have a major impact on quantitative classification characteristics. We conclude that both 1200 and 2400 ppi scanning resolutions are acceptable options for this level of land cover classification, as is a compression ratio at or below approximately 46:1. The Auto density range should always be used during scanning because it acquires more of the information from the film. The particular combination of scanning spatial resolution and compression level will require a case-by-case decision and will depend upon memory capabilities, analytical objectives and the spatial properties of the objects in the image.
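The bracketed pixel-pitch figures follow directly from the scanning resolution; a one-line conversion (our own, for illustration) reproduces them.

```python
# Pixel pitch on film from scanning resolution: 25400 microns per inch
# divided by pixels per inch (ppi). Reproduces the 21.2 / 10.6 micron
# figures quoted in the abstract.

def pixel_pitch_microns(ppi):
    """Size of one scanned pixel on the film, in microns."""
    return 25400.0 / ppi

pitch_1200 = round(pixel_pitch_microns(1200), 1)  # 21.2
pitch_2400 = round(pixel_pitch_microns(2400), 1)  # 10.6
```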

  16. Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands

    USGS Publications Warehouse

    Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.

    2000-01-01

    The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution images in JPEG format (IMAGES). The folder LOCATIONS contains TIFF images showing the exact positions of easily identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.

  17. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high-performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
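EBCOT's post-compression rate-distortion optimization, which FBCOT retains, can be illustrated with a toy Lagrangian search. The block data and function names here are invented, and real codecs restrict candidates to the convex hull of each block's rate-distortion curve; this sketch just shows the principle of picking per-block truncation points to meet a global rate budget.

```python
# Toy post-compression RD optimization: each code-block offers truncation
# points (cumulative rate, distortion); for a slope lam, each block picks
# the point minimizing D + lam * R, and lam is bisected until the total
# rate fits the budget.

def pick_point(points, lam):
    """Index of the truncation point minimizing D + lam * R."""
    return min(range(len(points)), key=lambda i: points[i][1] + lam * points[i][0])

def allocate(blocks, budget, lo=0.0, hi=1e6, iters=60):
    """Bisect the Lagrangian slope so the summed rate meets the budget."""
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        rate = sum(b[pick_point(b, lam)][0] for b in blocks)
        if rate > budget:
            lo = lam          # too many bits: demand a steeper slope
        else:
            hi = lam
    return [pick_point(b, hi) for b in blocks]

blocks = [
    [(0, 100.0), (10, 40.0), (20, 25.0), (30, 20.0)],   # (rate, distortion)
    [(0, 80.0), (8, 50.0), (16, 20.0), (24, 15.0)],
]
choice = allocate(blocks, budget=30)
```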

  18. Research-oriented image registry for multimodal image integration.

    PubMed

    Tanaka, M; Sadato, N; Ishimori, Y; Yonekura, Y; Yamashita, Y; Komuro, H; Hayahsi, N; Ishii, Y

    1998-01-01

    To provide multimodal biomedical images automatically, we constructed a research-oriented image registry, the Data Delivery System (DDS). DDS was built on the campus local area network. Machines which generate images (imagers: DSA, ultrasound, PET, MRI, SPECT and CT) were connected to the campus LAN. Once a patient is registered, all of that patient's images are automatically picked up by DDS as they are generated, transferred through the gateway server to the intermediate server, and copied into the directory of the user who registered the patient. DDS informs the user through e-mail that new data have been generated and transferred. The data format is automatically converted into the one chosen by the user. Data inactive for a certain period in the intermediate server are automatically archived into the final and permanent data server based on compact disk. As a soft link is automatically generated through this step, a user has access to all (old or new) image data of the patient of interest. As DDS runs with minimal maintenance, the cost and time for data transfer are significantly reduced. By making the complex process of data transfer and conversion invisible, DDS has made it easy for researchers with little computer experience to concentrate on their biomedical interests.
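The delivery step described above, copying a newly generated image into each subscribing user's directory and producing a notification, might look like this minimal sketch. All paths, file names and helper names are hypothetical; the real DDS also performs format conversion and e-mail delivery, which are omitted here.

```python
import pathlib
import shutil
import tempfile

def deliver(image_path, subscribers, outbox):
    """Copy a new image into each subscriber's directory; return notice texts."""
    notices = []
    src = pathlib.Path(image_path)
    for user in subscribers:
        dest = pathlib.Path(outbox) / user / src.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(src, dest)
        notices.append(f"to {user}: new data {src.name}")
    return notices

# Demonstration with a throwaway file standing in for an imager's output.
root = tempfile.mkdtemp()
img = pathlib.Path(root) / "scan001.img"
img.write_text("pixels")
notes = deliver(img, ["alice", "bob"], pathlib.Path(root) / "users")
```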

  19. Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.

    PubMed

    Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella

    2010-07-01

    Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them depends only on their number and not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates, and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation, and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method compares favorably with state-of-the-art MDC techniques.
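The blending principle can be illustrated with a toy example of our own: chunks coded at two rates are interleaved so each description carries the high-rate version of alternate chunks, and losing one description degrades every chunk mildly instead of destroying specific regions.

```python
# Toy multirate blending for two descriptions: description i carries the
# high-rate payload of chunk j when j % n_desc == i, the low-rate payload
# otherwise. Payloads are placeholder strings.

def make_descriptions(high, low, n_desc=2):
    """high/low: per-chunk payloads at two rates; returns n_desc descriptions."""
    return [
        [high[j] if j % n_desc == i else low[j] for j in range(len(high))]
        for i in range(n_desc)
    ]

high = ["H0", "H1", "H2", "H3"]
low = ["L0", "L1", "L2", "L3"]
d0, d1 = make_descriptions(high, low)
# d0 -> ['H0', 'L1', 'H2', 'L3'], d1 -> ['L0', 'H1', 'L2', 'H3']
```

Receiving both descriptions recovers the high-rate version of every chunk; receiving either one alone yields half high-rate and half low-rate chunks, so quality depends only on how many descriptions arrive.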

  20. First Digit Law and Its Application to Digital Forensics

    NASA Astrophysics Data System (ADS)

    Shi, Yun Q.

    Digital data forensics, which gathers evidence of data composition, origin, and history, is crucial in our digital world. Although this new research field is still in its infancy, it has started to attract increasing attention from the multimedia-security research community. This lecture addresses the first digit law and its applications to digital forensics. First, the Benford and generalized Benford laws, referred to as the first digit law, are introduced. Then, the application of the first digit law to detecting the JPEG compression history of a given BMP image and to detecting double JPEG compression is presented. Finally, applying the first digit law to the detection of double MPEG video compression is discussed. It is expected that the first digit law may play an active role in other tasks of digital forensics. The lesson learned is that statistical models play an important role in digital forensics, and for a specific forensic task different models may provide different performance.
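The first digit law itself is easy to state and compute: the probability that the leading digit is d is P(d) = log10(1 + 1/d) for d = 1..9. This snippet (ours) evaluates the law; the forensic detectors in the lecture compare the observed first-digit distribution of JPEG DCT coefficients against it.

```python
import math

def benford(d):
    """Benford probability of leading digit d (d = 1..9)."""
    return math.log10(1.0 + 1.0 / d)

probs = [benford(d) for d in range(1, 10)]
# P(1) is about 0.301 and P(9) about 0.046; the nine probabilities sum to 1,
# since the product (2/1)(3/2)...(10/9) telescopes to 10.
```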

  1. Technology insertion of a COTS RAID server as an image buffer in the image chain of the Defense Mapping Agency's Digital Production System

    NASA Astrophysics Data System (ADS)

    Mehring, James W.; Thomas, Scott D.

    1995-11-01

    The Data Services Segment of the Defense Mapping Agency's Digital Production System provides a digital archive of imagery source data for use by DMA's cartographic users. This system was developed in the mid-1980s and is currently undergoing modernization. This paper addresses the modernization of the imagery buffer function, which was performed by custom hardware in the baseline system and is being replaced by a RAID Server based on commercial off-the-shelf (COTS) hardware. The paper briefly describes the baseline DMA image system and the modernization program that is currently under way. Throughput benchmark measurements were made to support design configuration decisions for a COTS RAID Server to perform as the system image buffer. The test program began with performance measurements of the RAID read and write operations between the RAID arrays and the server CPU for RAID levels 0, 5 and 0+1. Interface throughput measurements were made for the HiPPI interface between the RAID Server and the image archive and processing system, as well as for the client-side interface between a custom interface board that bridges the internal bus of the RAID Server and the Input-Output Processor (IOP) external wideband network currently in place in the DMA system to service client workstations. End-to-end measurements were taken from the HiPPI interface through the RAID write and read operations to the IOP output interface.

  2. 3D Reconstruction with a Collaborative Approach Based on Smartphones and a Cloud-Based Server

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Poiesi, F.; Locher, A.; Tefera, Y. T.; Remondino, F.; Chippendale, P.; Van Gool, L.

    2017-11-01

    The paper presents a collaborative image-based 3D reconstruction pipeline that performs image acquisition with a smartphone and geometric 3D reconstruction on a server during concurrent or disjoint acquisition sessions. Images are selected from the video feed of the smartphone's camera based on their quality and novelty. The smartphone's app provides on-the-fly reconstruction feedback to users co-involved in the acquisitions. The server runs an incremental SfM algorithm that processes the received images by seamlessly merging them into a single sparse point cloud using bundle adjustment. A dense image matching algorithm can be launched to derive denser point clouds. The reconstruction details, experiments and performance evaluation are presented and discussed.
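Frame selection by quality can be approximated with a blur measure such as the variance of a Laplacian response: sharp frames produce a high-variance response, blurred or flat frames a low one. This numpy sketch is our illustration of the idea, not the authors' selection criterion.

```python
import numpy as np

# 3x3 Laplacian kernel used as a simple sharpness probe.
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def sharpness(frame):
    """Variance of the 3x3 Laplacian response of a grayscale frame."""
    h, w = frame.shape
    resp = np.zeros((h - 2, w - 2))
    for dy in range(3):                      # valid-mode 3x3 convolution
        for dx in range(3):
            resp += LAPLACIAN[dy, dx] * frame[dy:dy + h - 2, dx:dx + w - 2]
    return float(resp.var())

rng = np.random.default_rng(1)
textured = rng.random((32, 32))              # high-frequency content
flat = np.ones((32, 32)) * 0.5               # no detail at all
```

A quality gate would keep frames whose score exceeds a tuned threshold; the flat frame scores exactly zero here.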

  3. A new efficient method for color image compression based on visual attention mechanism

    NASA Astrophysics Data System (ADS)

    Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang

    2010-11-01

    One of the key procedures in color image compression is to extract the regions of interest (ROIs) and apply different compression ratios to them. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Experimental results, together with quantitative and qualitative analysis, show excellent performance in comparison with traditional color image compression approaches.
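The non-uniform encoding step can be sketched as a per-block quality map derived from a saliency map: blocks inside the ROI get a high JPEG-style quality factor, the rest a low one. Block size, threshold and quality values here are illustrative assumptions of ours, not the paper's parameters.

```python
import numpy as np

def block_quality(saliency, block=8, q_roi=90, q_bg=30, thresh=0.5):
    """Per-block quality map from a saliency map with values in [0, 1]."""
    h, w = saliency.shape
    qmap = np.full((h // block, w // block), q_bg)
    for by in range(h // block):
        for bx in range(w // block):
            tile = saliency[by * block:(by + 1) * block,
                            bx * block:(bx + 1) * block]
            if tile.mean() > thresh:         # salient block -> spend more bits
                qmap[by, bx] = q_roi
    return qmap

sal = np.zeros((16, 16))
sal[:8, :8] = 1.0                            # one salient quadrant
q = block_quality(sal)
# q -> [[90, 30], [30, 30]]
```

An encoder would then quantize each 8x8 block with the table scaled by its entry in the map, concentrating bits where the attention model predicts viewers will look.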

  4. Fourth Light at Paranal!

    NASA Astrophysics Data System (ADS)

    2000-09-01

    VLT YEPUN Joins ANTU, KUEYEN and MELIPAL It was a historic moment last night (September 3 - 4, 2000) in the VLT Control Room at the Paranal Observatory , after nearly 15 years of hard work. Finally, four teams of astronomers and engineers were sitting at the terminals - each team with access to an 8.2-m telescope! From now on, the powerful "Paranal Quartet" will be observing night after night, with a combined mirror surface of more than 210 m^2. And beginning next year, some of them will be linked to form part of the unique VLT Interferometer with unparalleled sensitivity and image sharpness. YEPUN "First Light" Early in the evening, the fourth 8.2-m Unit Telescope, YEPUN , was pointed to the sky for the first time and successfully achieved "First Light". Following a few technical exposures, a series of "first light" photos was made of several astronomical objects with the VLT Test Camera. This instrument was also used for the three previous "First Light" events for ANTU ( May 1998 ), KUEYEN ( March 1999 ) and MELIPAL ( January 2000 ). These images served to provisionally evaluate the performance of the new telescope, mainly in terms of mechanical and optical quality. The ESO staff were very pleased with the results and pronounced YEPUN fit for the subsequent commissioning phase. When the name YEPUN was first given to the fourth VLT Unit Telescope, it was supposed to mean "Sirius" in the Mapuche language. However, doubts have since arisen about this translation and a detailed investigation now indicates that the correct meaning is "Venus" (as the Evening Star). For a detailed explanation, please consult the essay On the Meaning of "YEPUN" , now available at the ESO website. The first images At 21:39 hrs local time (01:39 UT), YEPUN was turned to point in the direction of a dense Milky Way field, near the border between the constellations Sagitta (The Arrow) and Aquila (The Eagle).
    A guide star was acquired and the active optics system quickly optimized the mirror system. At 21:44 hrs (01:44 UT), the Test Camera at the Cassegrain focus within the M1 mirror cell was opened for 30 seconds, with the planetary nebula Hen 2-428 in the field. The resulting "First Light" image was immediately read out and appeared on the computer screen at 21:45:53 hrs (01:45:53 UT). "Not bad!" - "Very nice!" were the first, "business-as-usual"-like comments in the room. The zenith distance during this observation was 44° and the image quality was measured as 0.9 arcsec, exactly the same as that registered by the Seeing Monitoring Telescope outside the telescope building. There was some wind. ESO PR Photo 22a/00 ESO PR Photo 22a/00 [Preview - JPEG: 374 x 400 pix - 128k] [Normal - JPEG: 978 x 1046 pix - 728k] Caption : ESO PR Photo 22a/00 shows a colour composite of some of the first astronomical exposures obtained by YEPUN . The object is the planetary nebula Hen 2-428, which is located at a distance of 6,000-8,000 light-years and seen in a dense sky field, only 2° from the main plane of the Milky Way. Like other planetary nebulae, it is caused by a dying star (the bluish object at the centre) that sheds its outer layers. The image is based on exposures through three optical filtres: B(lue) (10 min exposure, seeing 0.9 arcsec; here rendered as blue), V(isual) (5 min; 0.9 arcsec; green) and R(ed) (3 min; 0.9 arcsec; red). The field measures 88 x 78 arcsec^2 (1 pixel = 0.09 arcsec). North is to the lower right and East is to the lower left. The 5-day old Moon was about 90° away in the sky, which was accordingly bright. The zenith angle was 44°. The ESO staff then proceeded to take a series of three photos with longer exposures through three different optical filtres. They have been combined to produce the image shown in ESO PR Photo 22a/00 .
    More astronomical images were obtained in sequence, first of the dwarf galaxy NGC 6822 in the Local Group (see PR Photo 22f/00 below) and then of the spiral galaxy NGC 7793 . All 8.2-m telescopes now in operation at Paranal The ESO Director General, Catherine Cesarsky , who was present on Paranal during this event, congratulated the ESO staff on the great achievement, thereby bringing a major phase of the VLT project to a successful end. She was particularly impressed by the excellent optical quality that was achieved at this early moment of the commissioning tests. A measurement showed that already now, 80% of the light is concentrated within 0.22 arcsec. The manager of the VLT project, Massimo Tarenghi , was very happy to reach this crucial project milestone, after nearly fifteen years of hard work. He also remarked that with the M2 mirror already now "in the active optics loop", the telescope was correctly compensating for the somewhat mediocre atmospheric conditions on this night. The next major step will be the "first light" for the VLT Interferometer (VLTI) , when the light from two Unit Telescopes is combined. This event is expected in the middle of next year. Impressions from the YEPUN "First Light" event First Light for YEPUN - ESO PR VC 06/00 ESO PR Video Clip 06/00 "First Light for YEPUN" (5650 frames/3:46 min) [MPEG Video+Audio; 160x120 pix; 7.7Mb] [MPEG Video+Audio; 320x240 pix; 25.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 06/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV-camera in the evening of September 3 at about 23:00 hrs local time (03:00 UT), i.e., soon after the moment of "First Light" for YEPUN . The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the clip.
It begins at the moment a guide star is acquired to perform an automatic "active optics" correction of the mirrors; the associated explanation is given by Massimo Tarenghi (VLT Project Manager). The first astronomical observation is performed and the first image of the planetary nebula Hen 2-428 is discussed by the ESO Director General, Catherine Cesarsky. The next image, of the nearby dwarf galaxy NGC 6822, arrives and is shown and commented on by the ESO Director General. Finally, Massimo Tarenghi talks about the next major step of the VLT Project: the combination of the light beams from two 8.2-m Unit Telescopes, planned for the summer of 2001, which will mark the beginning of the VLT Interferometer. ESO Press Photo 22b/00: The enclosure for the fourth VLT 8.2-m Unit Telescope, YEPUN, photographed at sunset on September 3, 2000, immediately before "First Light" was successfully achieved. The upper part of the mostly subterranean Interferometric Laboratory for the VLTI is seen in front. (Digital Photo). ESO Press Photo 22c/00: The initial tuning of the YEPUN optical system took place in the early evening of September 3, 2000, from the "observing hut" on the floor of the telescope enclosure. From left to right: Krister Wirenstrand, who is responsible for the VLT Control Software; Jason Spyromilio, Head of the Commissioning Team; and Massimo Tarenghi, VLT Manager. (Digital Photo). ESO Press Photo 22d/00: "Mission Accomplished" - The ESO Director General, Catherine Cesarsky, and the Paranal Director, Roberto Gilmozzi, face the VLT Manager, Massimo Tarenghi, at the YEPUN Control Station, right after successful "First Light" for this telescope. (Digital Photo).
An aerial image of YEPUN in its enclosure is available as ESO PR Photo 43a/99. The mechanical structure of YEPUN was first pre-assembled at the Ansaldo factory in Milan (Italy), where it served for tests while the other telescopes were erected at Paranal. An early photo (ESO PR Photo 37/95) is available that was obtained during the visit of the ESO Council to Milan in December 1995, cf. ESO PR 18/95. ESO Press Photo 22e/00 (Paranal at sunset): Wide-angle view of the Paranal Observatory at sunset. The last rays of the sun illuminate the telescope enclosures at the top of the mountain and some of the buildings at the Base Camp. The new "residencia" that will provide living space for the Paranal staff and visitors from next year is being constructed to the left. The "First Light" observations with YEPUN began soon after sunset. This photo was obtained in March 2000. Additional photos (September 6, 2000): Caption: ESO PR Photo 22f/00 shows a colour composite of three exposures of a field in the dwarf galaxy NGC 6822, a member of the Local Group of Galaxies at a distance of about 2 million light-years. They were obtained by YEPUN and the VLT Test Camera at about 23:00 hrs local time on September 3 (03:00 UT on September 4), 2000. The image is based on exposures through three optical filters: B(lue) (10 min exposure; here rendered as blue), V(isual) (5 min; green) and R(ed) (5 min; red); the seeing was 0.9 - 1.0 arcsec. Individual stars of many different colours (temperatures) are seen. The field measures about 1.5 x 1.5 arcmin². Another image of this galaxy was obtained earlier with ANTU and FORS1, cf. PR Photo 10b/99.
ESO Press Photo 22g/00: Most of the crew that put together YEPUN, photographed after the installation of the M1 mirror cell at the bottom of the mechanical structure (on July 30, 2000). Back row (left to right): Erich Bugueno (Mechanical Supervisor), Erito Flores (Maintenance Technician); front row (left to right): Peter Gray (Mechanical Engineer), German Ehrenfeld (Mechanical Engineer), Mario Tapia (Mechanical Engineer), Christian Juica (kneeling - Mechanical Technician), Nelson Montano (Maintenance Engineer), Hansel Sepulveda (Mechanical Technician) and Roberto Tamai (Mechanical Engineer). (Digital Photo). ESO PR Photos may be reproduced if credit is given to the European Southern Observatory. The ESO PR Video Clips service provides visitors to the ESO website with "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was ESO PR Video Clip 05/00, "Portugal to Accede to ESO" (27 June 2000). Information about other ESO videos is also available on the web.

  5. Filmless PACS in a multiple facility environment

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Glicksman, Robert A.; Prior, Fred W.; Siu, Kai-Yeung; Goldburgh, Mitchell M.

    1996-05-01

A Picture Archiving and Communication System centered on a shared image file server can support a filmless hospital. Systems based on this architecture have proven themselves in over four years of clinical operation. Changes in healthcare delivery are causing radiology groups to support multiple facilities for remote clinic support and consolidation of services. There will be a corresponding need for communicating over a standardized wide area network (WAN). Interactive workflow, a natural extension to the single-facility case, requires a means to work effectively and seamlessly across moderate- to low-speed communication networks. Several schemes for supporting a consortium of medical treatment facilities over a WAN are explored. Both centralized and distributed database approaches are evaluated against several WAN scenarios. Likewise, several architectures for distributing image file servers or buffers over a WAN are explored, along with the caching and distribution strategies that support them. An open system implementation is critical to the success of a wide area system. The role of the Digital Imaging and Communications in Medicine (DICOM) standard in supporting multi-facility and multi-vendor open systems is also addressed. An open system can be achieved by using a DICOM server to provide a view of the system-wide distributed database. The DICOM server interface to a local version of the global database lets a local workstation treat the multiple, distributed data servers as though they were one local server for purposes of examination queries. The query will recover information about the examination that will permit retrieval over the network from the server on which the examination resides. For efficiency reasons, the ability to build cross-facility radiologist worklists and clinician-oriented patient folders is essential. The technologies of the World Wide Web can be used to generate worklists and patient folders across facilities.
A reliable broadcast protocol may be a convenient way to notify many different users and many image servers about new activities in the network of image servers. In addition to ensuring reliability of message delivery and global serialization of each broadcast message in the network, the broadcast protocol should not introduce significant communication overhead.
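The abstract's core idea - a local DICOM-server view that makes multiple distributed image servers look like one - can be sketched as a toy model. This is illustrative only, not real DICOM; `ImageServer`, `GlobalIndex`, and `fetch` are invented names.

```python
# Illustrative sketch (not real DICOM): a local "global index" lets a
# workstation query several distributed image servers as if they were one.

class ImageServer:
    def __init__(self, name):
        self.name = name
        self.exams = {}                 # exam_id -> image data

    def store(self, exam_id, data):
        self.exams[exam_id] = data

    def retrieve(self, exam_id):
        return self.exams[exam_id]

class GlobalIndex:
    """Local view of the system-wide database: maps each exam to the
    facility server that actually holds it."""
    def __init__(self):
        self.location = {}              # exam_id -> ImageServer

    def register(self, exam_id, server):
        self.location[exam_id] = server

    def find_exam(self, exam_id):
        # Query step: the answer says where the exam resides ...
        return self.location[exam_id]

def fetch(index, exam_id):
    # ... retrieval step: pull the exam from that server over the "network".
    return index.find_exam(exam_id).retrieve(exam_id)

# Two facilities behind one query interface.
a, b = ImageServer("facility-A"), ImageServer("facility-B")
a.store("CT-001", b"...pixels A...")
b.store("MR-002", b"...pixels B...")
idx = GlobalIndex()
idx.register("CT-001", a)
idx.register("MR-002", b)
print(fetch(idx, "MR-002"))             # served transparently from facility-B
```

The workstation never needs to know which facility holds an exam; only the index does.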

  6. Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data

    PubMed Central

    Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.

    2005-01-01

    The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
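The client-server split described above can be caricatured in a few lines: the thin client only ships view parameters, while the graphics server owns the data and does the rendering. All names here are hypothetical, and a real server would return an actual rendered image rather than a tag string.

```python
# Toy sketch of the thin-client / graphics-server split (names invented).

class GraphicsServer:
    def __init__(self, volume):
        self.volume = volume            # stands in for a loaded image volume

    def render(self, azimuth, elevation, zoom):
        # A real server would rasterize the 3-D scene; here we return a
        # placeholder "rendering" tagged with the requested view parameters.
        return f"render({self.volume}, az={azimuth}, el={elevation}, zoom={zoom})"

class ThinClient:
    def __init__(self, server):
        self.server = server

    def view(self, azimuth=30, elevation=10, zoom=1.0):
        # The client does no heavy computation: it only requests a view.
        return self.server.render(azimuth, elevation, zoom)

client = ThinClient(GraphicsServer("brain_T1"))
print(client.view(azimuth=45))
```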

  7. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark is then embedded into the components of the transformed original image. The scheme achieves both embedding and blind extraction of the watermark. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering, and image-enhancement attacks than the traditional QIM algorithm.
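The QIM step the abstract builds on can be sketched independently of the quaternion transform: each bit selects one of two interleaved quantization lattices, and blind extraction picks the lattice nearest to the received coefficient. The step size `DELTA` below is an illustrative choice, not the paper's.

```python
# Basic quantization index modulation (QIM) on scalar coefficients.
DELTA = 8.0  # quantization step (illustrative value)

def qim_embed(x, bit):
    """Embed one bit into coefficient x by quantizing onto the lattice
    associated with that bit (bit 0 -> multiples of DELTA, bit 1 -> shifted
    by DELTA/2)."""
    d = bit * DELTA / 2.0
    return DELTA * round((x - d) / DELTA) + d

def qim_extract(y):
    """Blind extraction: pick the bit whose lattice point lies closest."""
    dists = []
    for bit in (0, 1):
        d = bit * DELTA / 2.0
        rec = DELTA * round((y - d) / DELTA) + d
        dists.append((abs(y - rec), bit))
    return min(dists)[1]

coeffs = [12.3, -5.1, 40.7, 3.2]
bits = [1, 0, 1, 1]
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
recovered = [qim_extract(y) for y in marked]
print(recovered)  # [1, 0, 1, 1]
```

Because extraction only needs `DELTA`, not the original image, the scheme is blind; robustness comes from each lattice point having a DELTA/4 "safety margin" against noise.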

  8. 36 CFR 1194.22 - Web-based intranet and internet information and applications.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... active region of a server-side image map. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape. (g) Row...) Frames shall be titled with text that facilitates frame identification and navigation. (j) Pages shall be...

  9. 36 CFR 1194.22 - Web-based intranet and internet information and applications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... active region of a server-side image map. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape. (g) Row...) Frames shall be titled with text that facilitates frame identification and navigation. (j) Pages shall be...

  10. 36 CFR § 1194.22 - Web-based intranet and internet information and applications.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... active region of a server-side image map. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape. (g) Row...) Frames shall be titled with text that facilitates frame identification and navigation. (j) Pages shall be...

  11. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
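The threshold-scaling and nonlinear pooling steps described above can be sketched as follows. The 2x2 block, quantization matrix, thresholds, and function names (`quantize`, `dequantize`, `perceptual_error`) are all illustrative stand-ins (a real JPEG block is 8x8, and the model's thresholds come from the display parameters): quantization errors are divided by per-frequency visual thresholds and pooled with a Minkowski exponent.

```python
# Sketch: scale DCT quantization errors by visual thresholds, pool nonlinearly.

def quantize(coeffs, q):
    return [[round(c / s) for c, s in zip(row, srow)]
            for row, srow in zip(coeffs, q)]

def dequantize(levels, q):
    return [[l * s for l, s in zip(row, srow)]
            for row, srow in zip(levels, q)]

def perceptual_error(coeffs, q, thresholds, beta=4.0):
    """Each quantization error, divided by its visual threshold, becomes a
    'just-noticeable-difference' unit; Minkowski pooling (exponent beta)
    combines them, weighting the largest errors most heavily."""
    rec = dequantize(quantize(coeffs, q), q)
    total = 0.0
    for row_c, row_r, row_t in zip(coeffs, rec, thresholds):
        for c, r, t in zip(row_c, row_r, row_t):
            total += (abs(c - r) / t) ** beta
    return total ** (1.0 / beta)

coeffs = [[100.0, 20.0], [15.0, 5.0]]       # toy DCT coefficients
q = [[16, 11], [12, 14]]                    # toy quantization matrix
thresholds = [[2.0, 1.0], [1.0, 0.5]]       # toy visual thresholds
print(round(perceptual_error(coeffs, q, thresholds), 3))
```

Raising `beta` pushes the pooled value toward the single worst perceptual error, which is the usual motivation for Minkowski pooling in such models.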

  12. Titanic Weather Forecasting

    NASA Astrophysics Data System (ADS)

    2004-04-01

New Detailed VLT Images of Saturn's Largest Moon; Optimizing Space Missions. Titan, the largest moon of Saturn, was discovered by the Dutch astronomer Christiaan Huygens in 1655 and certainly deserves its name. With a diameter of no less than 5,150 km, it is larger than Mercury and twice as large as Pluto. It is unique in having a hazy atmosphere of nitrogen, methane and oily hydrocarbons. Although it was explored in some detail by the NASA Voyager missions, many aspects of the atmosphere and surface still remain unknown. Thus, the existence of seasonal or diurnal phenomena, the presence of clouds, and the surface composition and topography are still under debate. There have even been speculations that some kind of primitive life (now possibly extinct) may be found on Titan. Titan is the main target of the NASA/ESA Cassini/Huygens mission, launched in 1997 and scheduled to arrive at Saturn on July 1, 2004. The ESA Huygens probe is designed to enter the atmosphere of Titan, and to descend by parachute to the surface. Ground-based observations are essential to optimize the return of this space mission, because they will complement the information gained from space and add confidence to the interpretation of the data. Hence, the advent of the adaptive optics system NAOS-CONICA (NACO) [1] in combination with ESO's Very Large Telescope (VLT) at the Paranal Observatory in Chile now offers a unique opportunity to study the resolved disc of Titan with high sensitivity and increased spatial resolution. Adaptive Optics (AO) systems work by means of a computer-controlled deformable mirror that counteracts the image distortion induced by atmospheric turbulence. They are based on real-time optical corrections computed from image data obtained by a special camera at very high speed, many hundreds of times each second (see e.g.
ESO Press Release 25/01, ESO PR Photos 04a-c/02, ESO PR Photos 19a-c/02, ESO PR Photos 21a-c/02, ESO Press Release 17/02, and ESO Press Release 26/03 for earlier NACO images, and ESO Press Release 11/03 for MACAO-VLTI results.) The southern smile. ESO PR Photo 08a/04: Images of Titan on November 20, 25 and 26, 2002 Through Five Filters (VLT YEPUN + NACO). Caption: ESO PR Photo 08a/04 shows Titan (apparent visual magnitude 8.05, apparent diameter 0.87 arcsec) as observed with the NAOS/CONICA instrument at VLT Yepun (Paranal Observatory, Chile) on November 20, 25 and 26, 2002, between 6.00 UT and 9.00 UT. The median seeing values were 1.1 arcsec and 1.5 arcsec, respectively, for the 20th and 25th. Deconvolved ("sharpened") images of Titan are shown through five different narrow-band filters; they make it possible to probe in some detail structures at different altitudes and on the surface. Depending on the filter, the integration time varies from 10 to 100 seconds. While Titan shows its leading hemisphere (i.e. the one observed when Titan moves towards us) on Nov. 20, the trailing side (i.e. the one we see when Titan moves away from us in its course around Saturn) - which displays less bright surface features - is observed on the last two dates. ESO PR Photo 08b/04: Titan Observed Through Nine Different Filters on November 26, 2002. Caption: Images of Titan taken on November 26, 2002 through nine different filters to probe different altitudes, ranging from the stratosphere to the surface. On this night, a stable "seeing" (image quality before adaptive optics correction) of 0.9 arcsec allowed the astronomers to attain the diffraction limit of the telescope (0.032 arcsec resolution).
Due to these good observing conditions, Titan's trailing hemisphere was observed with contrasts of about 40%, allowing the detection of several bright features on this surface region, once thought to be quite dark and featureless. ESO PR Photo 08c/04 (Titan Surface Projections). Caption: Titan images obtained with NACO on November 26, 2002. Left: Titan's surface projection on the trailing hemisphere as observed at 1.3 μm, revealing a complex brightness structure thanks to the high image contrast of about 40%. Right: a new, possibly meteorological, phenomenon observed at 2.12 μm in Titan's atmosphere, in the form of a bright feature revolving around the South Pole. A team of French astronomers [2] has recently used the NACO state-of-the-art adaptive optics system on the fourth 8.2-m VLT unit telescope, Yepun, to map the surface of Titan by means of near-infrared images and to search for changes in the dense atmosphere. These extraordinary images have a nominal resolution of 1/30th arcsec and show details of the order of 200 km on the surface of Titan. To provide the best possible views, the raw data from the instrument were subjected to deconvolution (image sharpening). Images of Titan were obtained through nine narrow-band filters, sampling near-infrared wavelengths with large variations in methane opacity. This permits sounding of different altitudes, ranging from the stratosphere to the surface. Titan harbours at 1.24 and 2.12 μm a "southern smile", that is, a north-south asymmetry, while the opposite situation is observed with filters probing higher altitudes, such as 1.64, 1.75 and 2.17 μm. A high-contrast bright feature is observed at the South Pole and is apparently caused by a phenomenon in the atmosphere, at an altitude below 140 km or so.
This feature was found to change its location on the images from one side of the south polar axis to the other during the week of observations. Outlook An additional series of NACO observations of Titan is foreseen later this month (April 2004). These will be a great asset in helping optimize the return of the Cassini/Huygens mission. Several of the instruments aboard the spacecraft depend on such ground-based data to better infer the properties of Titan's surface and lower atmosphere. Although the astronomers have yet to model and interpret the physical and geophysical phenomena now observed and to produce a full cartography of the surface, this first analysis provides a clear demonstration of the marvellous capabilities of the NACO imaging system. More examples of the exciting science possible with this facility will be found in a series of five papers published today in the European research journal Astronomy & Astrophysics (Vol. 47, L1 to L24).

  13. Cardio-PACS: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.
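As a back-of-the-envelope check on the performance specification above, and assuming (hypothetically) 512x512 frames at 8 bits/pixel (cath-lab matrix sizes and bit depths vary), the uncompressed acquisition and replay data rates work out as:

```python
# Hypothetical worked example: uncompressed data rates for the spec's
# 30 frames/sec acquisition and 15 frames/sec replay, assuming 512x512
# frames at 1 byte/pixel (an assumption, not from the paper).

def rate_mb_per_s(width, height, bytes_per_pixel, fps):
    return width * height * bytes_per_pixel * fps / 1e6

acquire = rate_mb_per_s(512, 512, 1, 30)   # acquisition at 30 frames/sec
replay = rate_mb_per_s(512, 512, 1, 15)    # replay at 15 frames/sec
print(f"acquire ~{acquire:.1f} MB/s, replay ~{replay:.1f} MB/s")
```

Rates of this magnitude are why lossless compression and careful network sizing figure so prominently in the requirements list.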

  14. Privacy-preserving photo sharing based on a public key infrastructure

    NASA Astrophysics Data System (ADS)

    Yuan, Lin; McNally, David; Küpçü, Alptekin; Ebrahimi, Touradj

    2015-09-01

A significant number of pictures are posted to social media sites or exchanged through instant messaging and cloud-based sharing services. Most social media services offer a range of access control mechanisms to protect users' privacy. However, as it is not in the best interest of many such services for their users to restrict access to their shared pictures, most services keep users' photos unprotected, which makes them available to all insiders. This paper presents an architecture for privacy-preserving photo sharing based on an image scrambling scheme and a public key infrastructure. Secure JPEG scrambling is applied to protect regional visual information in photos. Protected images remain compatible with JPEG coding and can therefore be viewed by anyone on any device; however, only those who are granted the secret keys are able to descramble the photos and view their original versions. The proposed architecture applies attribute-based encryption along with conventional public key cryptography to achieve secure transmission of secret keys and fine-grained control over who may view shared photos. In addition, we demonstrate the practical feasibility of the proposed photo sharing architecture with a prototype mobile application, ProShare, built on the iOS platform.
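The key-controlled scrambling idea can be illustrated with a toy block permutation. This is not the paper's secure-JPEG scheme, which operates inside the JPEG codestream; here the "blocks" are just placeholder strings, and the key seeds a PRNG so that only holders of the same key can invert the permutation.

```python
import random

# Toy sketch of key-driven region scrambling: permute the 8x8-block order of
# a protected region with a PRNG seeded by a secret key, and invert it with
# the same key.

def scramble(blocks, key):
    perm = list(range(len(blocks)))
    random.Random(key).shuffle(perm)          # key-dependent permutation
    return [blocks[i] for i in perm]

def descramble(blocks, key):
    perm = list(range(len(blocks)))
    random.Random(key).shuffle(perm)          # regenerate the same permutation
    out = [None] * len(blocks)
    for dst, src in enumerate(perm):          # undo: scrambled[dst] came from src
        out[src] = blocks[dst]
    return out

region = [f"block{i}" for i in range(8)]      # stands in for 8x8 JPEG blocks
protected = scramble(region, key=1234)
print(descramble(protected, key=1234))        # original block order restored
```

Because the scrambled data is still a valid sequence of blocks, the format-compliance property the abstract mentions is preserved in spirit: any viewer can decode the scrambled image, just not recognize the protected region.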

  15. Storage, retrieval, and edit of digital video using Motion JPEG

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Lee, D. H.

    1994-04-01

In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480, 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, the system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video on a storage medium, an IBM bus-master SCSI adapter with cache is utilized. The efficiency of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension for Micro Channel bus masters. Experimental results show that the overall system can sustain compressed data rates of about 1.5 MBytes/second, with sporadic peaks to about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats, which in turn permit the creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward, and slow-motion playback. The proposed method can be extended to the design of a video compression subsystem for a variety of personal computing systems.
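A hypothetical sketch of what such a "special file format" might look like: each compressed frame is written contiguously, followed by an offset/length index and a small trailer, so a player can seek to any frame for forward, backward, or slow-motion playback without parsing the whole stream. The layout below is invented for illustration, not the paper's actual format.

```python
import io
import struct

# Invented frame-indexed container: [frame data ...][index][trailer].
# The index holds (offset, length) pairs; the 8-byte trailer holds the
# index position and frame count, enabling O(1) random access to any frame.

def write_container(frames):
    buf = io.BytesIO()
    offsets = []
    for frame in frames:
        offsets.append(buf.tell())
        buf.write(frame)
    index_pos = buf.tell()
    for off, frame in zip(offsets, frames):
        buf.write(struct.pack("<II", off, len(frame)))   # index entry
    buf.write(struct.pack("<II", index_pos, len(frames)))  # trailer
    return buf.getvalue()

def read_frame(data, n):
    index_pos, count = struct.unpack("<II", data[-8:])
    entry = data[index_pos + 8 * n : index_pos + 8 * n + 8]
    off, length = struct.unpack("<II", entry)
    return data[off : off + length]

# Two dummy "JPEG" frames (SOI/EOI markers around placeholder payloads).
frames = [b"\xff\xd8frame0\xff\xd9", b"\xff\xd8frame-1\xff\xd9"]
blob = write_container(frames)
print(read_frame(blob, 1))   # random access straight to the second frame
```

Playing backwards is then just iterating the index in reverse; slow motion is re-displaying each indexed frame for longer.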

  16. Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting

    2010-12-01

We propose a novel steganographic method for JPEG images with high performance. First, we propose an improved adaptive LSB steganography that can achieve high capacity while preserving the first-order statistics. Second, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on a chaotic map whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving the characteristics of the histogram and providing high capacity.
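The two ingredients of the method - a chaos-driven shuffle of the message's bit order and LSB embedding - can be sketched as follows. The logistic-map parameters are illustrative stand-ins for the paper's GA-selected values, and the cover samples are plain integers rather than JPEG coefficients.

```python
# Sketch: shuffle message bits with a logistic-map chaotic sequence, then
# embed them into cover-sample LSBs. Parameters x0/mu are illustrative.

def logistic_perm(n, x0=0.7, mu=3.99):
    """Key-dependent permutation: rank indices by a logistic-map trajectory."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])

def embed(cover, bits, perm):
    stego = list(cover)
    for slot, src in enumerate(perm):             # shuffled bit order
        stego[slot] = (stego[slot] & ~1) | bits[src]
    return stego

def extract(stego, perm):
    bits = [0] * len(perm)
    for slot, src in enumerate(perm):             # invert the shuffle
        bits[src] = stego[slot] & 1
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
cover = [120, 133, 90, 77, 201, 64, 58, 99]
perm = logistic_perm(len(message))
stego = embed(cover, message, perm)
print(extract(stego, perm))  # [1, 0, 1, 1, 0, 0, 1, 0]
```

Anyone without `x0` and `mu` (the effective key) recovers only a shuffled bit stream, which is the security benefit the shuffle adds on top of plain LSB embedding.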

  17. The Orthanc Ecosystem for Medical Imaging.

    PubMed

    Jodogne, Sébastien

    2018-05-03

    This paper reviews the components of Orthanc, a free and open-source, highly versatile ecosystem for medical imaging. At the core of the Orthanc ecosystem, the Orthanc server is a lightweight vendor neutral archive that provides PACS managers with a powerful environment to automate and optimize the imaging flows that are very specific to each hospital. The Orthanc server can be extended with plugins that provide solutions for teleradiology, digital pathology, or enterprise-ready databases. It is shown how software developers and research engineers can easily develop external software or Web portals dealing with medical images, with minimal knowledge of the DICOM standard, thanks to the advanced programming interface of the Orthanc server. The paper concludes by introducing the Stone of Orthanc, an innovative toolkit for the cross-platform rendering of medical images.

  18. Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1992-01-01

    Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.

  19. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2010-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories and users will have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
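A minimal sketch of why JPEG 2000's multi-resolution structure suits this use case: every wavelet decomposition level halves the image dimensions, so a client can request just the level that covers its viewport instead of the full image. The function and parameters below are illustrative, not the JHelioviewer API.

```python
# Illustrative: pick the coarsest JPEG 2000 resolution level whose width
# still covers the client's viewport (each level halves the dimensions).

def level_for_viewport(full_px, viewport_px, max_levels):
    for level in range(max_levels, -1, -1):     # try coarsest levels first
        if full_px / 2 ** level >= viewport_px:
            return level
    return 0

# A 4096-px-wide solar image browsed in a 512-px viewport:
lvl = level_for_viewport(4096, 512, max_levels=5)
print(lvl, 4096 // 2 ** lvl)  # level 3 -> a 512-px rendition
```

Requesting level 3 here transfers roughly 1/64 of the full-resolution pixel count, which is the bandwidth saving that makes remote browsing of petabyte archives practical.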

  20. JPEG2000 vs. full frame wavelet packet compression for smart card medical records.

    PubMed

    Leehan, Joaquín Azpirox; Lerallut, Jean-Francois

    2006-01-01

This paper describes a comparison among different compression methods to be used in the context of electronic health records on the newer generation of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high compression ratios (33:1 and 50:1). Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.
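To see why such ratios matter for smart cards, a worked example assuming a (hypothetical) 512x512, 8-bit medical image; neither the image size nor the card budget is from the paper.

```python
# Hypothetical arithmetic: compressed size of a 512x512, 8-bit image at the
# paper's 33:1 and 50:1 ratios.

def compressed_kb(width, height, bytes_per_pixel, ratio):
    return width * height * bytes_per_pixel / ratio / 1024

for ratio in (33, 50):
    print(f"{ratio}:1 -> {compressed_kb(512, 512, 1, ratio):.1f} KB")
```

At 50:1 the 256 KB original shrinks to about 5 KB, small enough to coexist with other record data in the few tens of kilobytes a typical smart card offers.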

  1. Fault-tolerant back-up archive using an ASP model for disaster recovery

    NASA Astrophysics Data System (ADS)

    Liu, Brent J.; Huang, H. K.; Cao, Fei; Documet, Luis; Sarti, Dennis A.

    2002-05-01

A single point of failure in PACS during a disaster scenario is the main archive storage and server: when a major disaster occurs, it is possible to lose an entire hospital's PACS data. Few current PACS archives feature disaster recovery, and where they do, the design is limited at best. Drawbacks include the frequency with which the back-up is physically removed to an offsite facility, the operational costs associated with maintaining the back-up, the ease of performing the back-up consistently and efficiently, and the ease of performing the PACS image data recovery. This paper describes a novel approach towards a fault-tolerant solution for disaster recovery of short-term PACS image data using an Application Service Provider (ASP) model for service. The ASP back-up archive provides instantaneous, automatic backup of acquired PACS image data and instantaneous recovery of stored PACS image data, all at a low operational cost. A back-up archive server and RAID storage device are implemented offsite from the main PACS archive location. In the example of this particular hospital, it was determined that at least two months' worth of PACS image exams were needed for back-up. Clinical data from the hospital PACS is sent to the ASP storage server in parallel with the exams being archived in the main server. Initially, connectivity between the main archive and the ASP storage server is established via a T-1 connection; in the future, more cost-effective means of connectivity such as Internet2 will be researched. A disaster scenario was simulated and the PACS exams were sent from the offsite ASP storage server back to the hospital PACS. The disaster recovery process using the ASP back-up archive server was successful in repopulating the clinical PACS within a short period of time: the ASP back-up archive was able to recover two months of PACS image data for comparison studies with no complex operational procedures. Furthermore, no image data loss was encountered during the recovery.

  2. Review on the Celestial Sphere Positioning of FITS Format Image Based on WCS and Research on General Visualization

    NASA Astrophysics Data System (ADS)

    Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.

    2017-11-01

Calculating the coordinate parameters recorded as key/value pairs in the FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system. It is therefore of great significance to research a general process for calculating these coordinate parameters. By combining the CCD-related parameters of the astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), an astronomical image recognition algorithm, and WCS (World Coordinate System) theory, the parameters can be calculated effectively. The CCD parameters determine the scope of the star catalogue, so they can be used to build a reference star catalogue for the celestial region corresponding to the astronomical image. Star pattern recognition completes the matching between the astronomical image and the reference catalogue, and yields a comparison table linking a number of stars' CCD plane coordinates to their celestial coordinates. According to the different projections from the sphere to the plane, WCS can build transfer functions between these two coordinate systems, and the astronomical position of image pixels can then be determined from the table prepared before. As a mainstream data format, FITS images are used for scientific data transmission and analysis, but they can only be viewed, edited, and analyzed in professional astronomy software. This limits their usefulness for popular science education in astronomy, so the realization of a general image visualization method is significant. First, the FITS file is converted to a PNG or JPEG image. The coordinate parameters in the FITS header are converted to metadata in the form of AVM (Astronomy Visualization Metadata), and the metadata is then added to the PNG or JPEG header. This method can meet amateur astronomers' general needs of viewing and analyzing astronomical images on non-astronomical software platforms. The overall design flow is realized through a Java program and tested with SExtractor, WorldWide Telescope, picture viewers, and other software.
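The FITS-header-to-AVM conversion described in this record can be sketched in a few lines. The mapping below is a minimal illustration, assuming a plain dict of FITS key/value pairs; the AVM tag names follow the AVM spatial vocabulary and are not taken from the authors' implementation:

```python
def fits_wcs_to_avm(header):
    """Map core FITS/WCS keywords to AVM spatial tags.

    `header` is a dict of FITS key/value pairs; the tag names are
    illustrative and may need adjustment for a given toolchain.
    """
    # The projection code is the last 3 characters of CTYPE1,
    # e.g. 'RA---TAN' -> 'TAN'
    projection = str(header["CTYPE1"])[-3:]
    return {
        "Spatial.CoordinateFrame": "ICRS",
        "Spatial.ReferenceValue": [header["CRVAL1"], header["CRVAL2"]],
        "Spatial.ReferencePixel": [header["CRPIX1"], header["CRPIX2"]],
        "Spatial.Scale": [header["CDELT1"], header["CDELT2"]],
        "Spatial.ReferenceDimension": [header["NAXIS1"], header["NAXIS2"]],
        "Spatial.CoordsystemProjection": projection,
    }
```

The resulting dict would then be serialized into the PNG or JPEG header (AVM embeds it as XMP metadata).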

  3. Home teleradiology system

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Garra, Brian S.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

The Home Teleradiology Server system has been developed and installed at the Department of Radiology, Georgetown University Medical Center. The main purpose of the system is to provide a service for on-call physicians to view patients' medical images at home during off-hours. This service reduces the overhead time required by on-call physicians to travel to the hospital, thereby increasing the efficiency of patient care and improving the overall quality of health care. Typically, when a new case is conducted, the medical images generated by CT, US, and/or MRI modalities are transferred to a central server at the hospital via DICOM messages over an existing hospital network. The server has a DICOM network agent that listens to DICOM messages sent by CT, US, and MRI modalities and stores them into separate DICOM files for sending purposes. The server also has general-purpose, flexible scheduling software that can be configured to send image files to specific users at certain times on any days of the week. The server then distributes the medical images to on-call physicians' homes via a high-speed modem. All file transmissions occur in the background without human interaction after the scheduling software is pre-configured accordingly. At the receiving end, the physicians' computers are high-end workstations that have high-speed modems to receive the medical images sent by the central server from the hospital, and DICOM-compatible viewer software to view the transmitted medical images in DICOM format. A technician from the hospital will notify the physician(s) after all the image files have been completely sent. The physician(s) will then examine the medical images and decide whether it is necessary to travel to the hospital for further examination of the patients. 
Overall, the Home Teleradiology system provides the on-call physicians with a cost-effective and convenient environment for viewing patients' medical images at home.

  4. An effective and efficient compression algorithm for ECG signals with irregular periods.

    PubMed

    Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son

    2006-06-01

This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus either the irregular ECG signals or the QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
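The cut-sort-equalize preprocessing can be sketched as follows. This is a minimal illustration assuming QRS locations are already detected; the paper's own alignment and length-equalization details differ:

```python
import numpy as np

def ecg_to_image(signal, qrs_indices):
    """Rearrange a 1-D ECG into a 2-D array with one heartbeat per row.

    Illustrative sketch of the cut-sort-equalize idea: cut the signal at
    QRS locations, sort beats by period length so adjacent rows are
    similar, and edge-pad each beat to a common width.
    """
    # Cut the signal into beats between consecutive QRS indices
    beats = [signal[a:b] for a, b in zip(qrs_indices[:-1], qrs_indices[1:])]
    beats.sort(key=len)                      # period sorting -> smoother image
    width = max(len(b) for b in beats)
    # Length equalization: repeat the last sample to pad each row
    rows = [np.pad(b, (0, width - len(b)), mode="edge") for b in beats]
    return np.vstack(rows)                   # ready for a 2-D image codec
```

The returned 2-D array would then be fed to a codec such as JPEG2000.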

  5. Digital storage and analysis of color Doppler echocardiograms

    NASA Technical Reports Server (NTRS)

    Chandra, S.; Thomas, J. D.

    1997-01-01

Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as drawbacks of trying to retrieve velocities from video tape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of information (mostly redundant); this is critical for digital storage because of the enormous amount of data generated (a 10-minute study could require 18 gigabytes of storage capacity). The impact of lossy JPEG compression on quantitative analysis has been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing the proximal convergence zones, mitral inflow, and some areas of future development are also discussed in the article.

  6. Multiple-image hiding using super resolution reconstruction in high-frequency domains

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Wei; Zhao, Wu-Xiang; Wang, Jun; Wang, Qiong-Hua

    2017-12-01

In this paper, a robust multiple-image hiding method using computer-generated integral imaging and a modified super-resolution reconstruction algorithm is proposed. In our work, the host image is first transformed into frequency domains by cellular automata (CA); to assure the quality of the stego-image, the secret images are embedded into the CA high-frequency domains. The proposed method has the following advantages: (1) robustness to geometric attacks, because of the memory-distributed property of elemental images; (2) increased quality of the reconstructed secret images, as the scheme utilizes the modified super-resolution reconstruction algorithm. The simulation results show that the proposed multiple-image hiding method outperforms other similar hiding methods and is robust to several common attacks, e.g., Gaussian noise and JPEG compression attacks.

  7. Docker Container Manager: A Simple Toolkit for Isolated Work with Shared Computational, Storage, and Network Resources

    NASA Astrophysics Data System (ADS)

    Polyakov, S. P.; Kryukov, A. P.; Demichev, A. P.

    2018-01-01

We present a simple set of command-line interface tools called Docker Container Manager (DCM) that allow users to create and manage Docker containers with preconfigured SSH access, while keeping the users isolated from each other and restricting their access to the Docker features that could potentially disrupt the work of the server. Users access the DCM server via SSH and are automatically redirected to the DCM interface tool. From there, they can create new containers; stop, restart, pause, unpause, and remove containers; and view the status of existing containers. By default, the containers are also accessible via SSH using the same private key(s) but through different server ports. Additional publicly available ports can be mapped to the respective ports of a container, allowing some network services to be run within it. The containers are started from read-only filesystem images. Some initial images must be provided by the DCM server administrators, and after containers are configured to meet one's needs, the changes can be saved as new images. Users can see the available images and remove their own images. DCM server administrators are provided with commands to create and delete users. All commands are implemented as Python scripts. The tools make it possible to deploy and debug medium-sized distributed systems for simulation in different fields on one or several local computers.
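A DCM-style wrapper ultimately reduces to building `docker` command lines on behalf of the user. The sketch below shows one hypothetical way such a wrapper might assemble a `docker run` invocation with an SSH port mapping; the flag choices are illustrative assumptions, not DCM's actual code:

```python
def docker_run_args(image, name, ssh_port, extra_ports=()):
    """Build a `docker run` argument list for an SSH-accessible container.

    Hypothetical sketch of what a DCM-like wrapper could pass to
    subprocess.run(): detach the container, name it after the user,
    and map a per-user host port to the container's SSH port 22.
    """
    args = ["docker", "run", "-d", "--name", name, "-p", f"{ssh_port}:22"]
    # Optional extra (host, container) port pairs for user-run services
    for host, cont in extra_ports:
        args += ["-p", f"{host}:{cont}"]
    return args + [image]
```

A wrapper would hand this list to `subprocess.run(...)` after validating the user's request, which is where the isolation and feature-restriction policy would be enforced.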

  8. An ECG storage and retrieval system embedded in client server HIS utilizing object-oriented DB.

    PubMed

    Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S

    1996-02-01

In the University of Tokyo Hospital, the improved client-server HIS has been applied to clinical practice: physicians can order prescriptions, laboratory examinations, ECG examinations, radiographic examinations, etc., directly by themselves and read the results of these examinations, except medical signal waves, schemata, and images, on UNIX workstations. Recently, we designed and developed an ECG storage and retrieval system embedded in the client-server HIS utilizing an object-oriented database, as a first step toward handling digitized signal, schema, and image data and showing waves, graphics, and images directly to physicians through the client-server HIS. The system was developed based on object-oriented analysis and design, and implemented with an object-oriented database management system (OODBMS) and the C++ programming language. In this paper, we describe the ECG data model, the functions of the storage and retrieval system, the features of the user interface, and the results of its implementation in the HIS.

  9. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1 TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user, multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all code, including open and closed-source third-party libraries.
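The fan-out pattern such a server-side script relies on can be sketched with the standard library alone; the pipeline step below is a placeholder, not a FARSIGHT module:

```python
import concurrent.futures
import logging

logging.basicConfig(level=logging.INFO)

def process_tile(tile_id):
    """Placeholder for one pipeline step (mosaic, pre-process, segment).

    A real driver would invoke the C++ modules here; we log the step
    and return a stand-in result so the pattern is testable.
    """
    logging.info("processing tile %d", tile_id)
    return tile_id * tile_id

def run_pipeline(tile_ids, workers=4):
    """Fan a list of tiles out over a thread pool, preserving input order."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, tile_ids))
```

Threads suit this sketch because the heavy lifting in the real system happens in native C++ libraries that release the GIL; logging each step mirrors the script's audit trail.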

  10. No-reference quality assessment based on visual perception

    NASA Astrophysics Data System (ADS)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

The visual quality assessment of images and videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA: full-reference (FR) and no-reference (NR) models. In FR models, IQA algorithms interpret image quality as fidelity or similarity to a perfect reference image in some perceptual space. However, the reference image is not available in many practical applications, and an NR IQA approach is then desired. Considering that natural vision has been optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparse coding and supervised machine learning, reflecting two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experienced knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with this model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of the least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor. 
We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains the following distortion types: 227 JPEG2000 images, 233 JPEG images, 174 White Noise images, 174 Gaussian Blur images, and 174 Fast Fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach can not only assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
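With a linear kernel, an LS-SVM regressor reduces to solving a regularized least-squares system, which can stand in for the feature-to-score mapping described above. A minimal sketch, assuming features are already extracted (the paper uses sparse codes) and using a plain linear model rather than the paper's kernel formulation:

```python
import numpy as np

def lssvm_linear_fit(X, y, gamma=1.0):
    """Fit f(x) = w.x + b by regularized least squares.

    A linear-kernel LS-SVM amounts to a ridge-like linear system; gamma
    plays the role of the regularization trade-off (larger = less
    regularization). Stand-in for the sparse-code -> DMOS regressor.
    """
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])          # absorb bias term b
    A = Xb.T @ Xb + np.eye(d + 1) / gamma         # regularized normal equations
    w = np.linalg.solve(A, Xb.T @ y)
    return w[:-1], w[-1]

def predict(X, w, b):
    """Predicted quality scores for a feature matrix X."""
    return X @ w + b
```

In the paper's pipeline, `X` would hold sparse-code features per image and `y` the DMOS values; the trained regressor then scores unseen images.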

  11. Data grid: a distributed solution to PACS

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyan; Zhang, Jianguo

    2004-04-01

In a hospital, various kinds of medical images acquired from different modalities are used and stored in different departments, and each modality usually has several attached workstations to display or process images. For better diagnosis, radiologists or physicians often need to retrieve other kinds of images for reference. The traditional image storage solution is to build up a large-scale PACS archive server. However, the disadvantages of purely centralized management of a PACS archive server are obvious: besides high costs, any failure of the PACS archive server would cripple the entire PACS operation. Here we present a new approach to developing a storage grid in PACS, which can provide more reliable image storage and more efficient query/retrieval for hospital-wide applications. In this paper, we also give a performance evaluation comparing three popular technologies: mirroring, clustering, and grid.

  12. Content-based image retrieval on mobile devices

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Abdullah, Shafaq; Kiranyaz, Serkan; Gabbouj, Moncef

    2005-03-01

The content-based image retrieval area possesses tremendous potential for exploration and utilization, for researchers and industry alike, due to its promising results. Expeditious retrieval of desired images requires indexing of the content in large-scale databases along with extraction of low-level features based on the content of these images. With the recent advances in wireless communication technology and the availability of multimedia-capable phones, it has become vital to enable query operations on image databases and retrieve results based on image content. In this paper we present a content-based image retrieval system for mobile platforms, providing the capability of content-based query to any mobile device that supports the Java platform. The system consists of a light-weight client application running on a Java-enabled device and a server containing a servlet running inside a Java-enabled web server. The server responds to an image query using efficient native code over the selected image database. The client application, running on a mobile phone, is able to initiate a query request, which is handled by the servlet on the server to find the closest match to the queried image. The retrieved results are transmitted over the mobile network and the images are displayed on the mobile phone. We conclude that such a system serves as a basis for content-based information retrieval on wireless devices, and must cope with factors such as the constraints of hand-held devices and the reduced network bandwidth available in mobile environments.
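The query-matching step on the server can be sketched with a toy low-level feature. The normalized intensity histogram and L1 distance below are illustrative stand-ins for the system's actual features and native matching code:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Toy low-level feature: normalized intensity histogram over [0, 256)."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def closest_match(query, database):
    """Index of the database image with the smallest L1 feature distance.

    Mirrors the server-side step: extract the query feature once, compare
    against every indexed image, and return the best match.
    """
    qf = color_histogram(query)
    dists = [np.abs(qf - color_histogram(d)).sum() for d in database]
    return int(np.argmin(dists))
```

In the real system the database features would be pre-indexed rather than recomputed per query, and only the ranked result identifiers would be sent back over the mobile network.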

  13. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    PubMed

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image-processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image-processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image-processing networks. Image data transfer using shared memory added <10 ms of processing time, while the other IPC methods cost 1-5 s in our experiments. The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image-processing software developers to cooperate while focusing on different interests.
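The shared-memory image transfer behind the <10 ms figure can be illustrated with Python's `multiprocessing.shared_memory`; the original OsiriX/MeVisLab bridge is native code, so this is only a sketch of the pattern:

```python
import numpy as np
from multiprocessing import shared_memory

def share_image(img):
    """Publish an image into a named shared-memory block (sender side)."""
    shm = shared_memory.SharedMemory(create=True, size=img.nbytes)
    view = np.ndarray(img.shape, dtype=img.dtype, buffer=shm.buf)
    view[:] = img          # one copy into shared memory
    return shm             # name + shape/dtype travel over the control channel

def attach_image(name, shape, dtype):
    """Map the same block in the peer process and copy the pixels out."""
    shm = shared_memory.SharedMemory(name=name)
    return shm, np.ndarray(shape, dtype=dtype, buffer=shm.buf).copy()
```

Only the small descriptor (block name, shape, dtype) crosses the message channel; the pixel data never leaves RAM, which is why this path is orders of magnitude faster than serializing images over sockets or files.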

  14. Digitizing the KSO white light images

    NASA Astrophysics Data System (ADS)

    Pötzi, W.

From 1989 up to 2007 the Sun was observed at the Kanzelhöhe Observatory in white light on photographic film material. The images are on transparent sheet films and are currently not available to the scientific community. With a photo scanner for transparent film material, the films are now being scanned and then prepared for scientific use. The programs for post-processing are already finished, and FITS and JPEG files are produced as output. The scanning should be finished by the end of 2011, and the data will then be available via our homepage.

  15. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
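The PSNR that the model predicts is defined in the usual way; for reference, a minimal implementation:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image.

    PSNR = 10 * log10(peak^2 / MSE); the metric the distortion model
    predicts within a 2-dB maximum error.
    """
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Averaging such per-image PSNR values over a large image set is what the paper's statistical model estimates without decoding each image.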

  16. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. 
Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  17. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. 
Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  18. Robust image obfuscation for privacy protection in Web 2.0 applications

    NASA Astrophysics Data System (ADS)

    Poller, Andreas; Steinebach, Martin; Liu, Huajian

    2012-03-01

    We present two approaches to robust image obfuscation based on permutation of image regions and channel intensity modulation. The proposed concept of robust image obfuscation is a step towards end-to-end security in Web 2.0 applications. It helps to protect the privacy of the users against threats caused by internet bots and web applications that extract biometric and other features from images for data-linkage purposes. The approaches described in this paper consider that images uploaded to Web 2.0 applications pass several transformations, such as scaling and JPEG compression, until the receiver downloads them. In contrast to existing approaches, our focus is on usability, therefore the primary goal is not a maximum of security but an acceptable trade-off between security and resulting image quality.
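The region-permutation idea can be sketched as a keyed, invertible shuffle of image blocks. This toy version omits the paper's channel intensity modulation and its robustness measures against scaling and JPEG recompression:

```python
import numpy as np

def permute_blocks(img, key, block=8, inverse=False):
    """Keyed, invertible permutation of non-overlapping image blocks.

    The key seeds a PRNG that fixes the block order; applying the same
    key with inverse=True restores the original image.
    """
    h, w = img.shape[:2]
    gh, gw = h // block, w // block
    order = np.random.default_rng(key).permutation(gh * gw)
    if inverse:
        order = np.argsort(order)             # invert the permutation
    out = np.empty_like(img)
    for dst, src in enumerate(order):
        sy, sx = divmod(int(src), gw)
        dy, dx = divmod(dst, gw)
        out[dy*block:(dy+1)*block, dx*block:(dx+1)*block] = \
            img[sy*block:(sy+1)*block, sx*block:(sx+1)*block]
    return out
```

Only holders of the key can undo the shuffle, which is what defeats feature extraction by bots while keeping the obfuscated file a valid image.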

  19. Space Images for NASA JPL Android Version

    NASA Technical Reports Server (NTRS)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. The system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user ratings. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking images as user favorites, and searchable image metadata for instant results.

  20. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchical trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT [1], along with the correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy in high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands in the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
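    The sparsity being exploited here is easy to demonstrate: for smooth image content, one level of a wavelet transform leaves almost no significant high-frequency coefficients, which is what SPIHT-style quadtrees of insignificance capitalize on. A pure-Python illustrative sketch using a Haar transform (not the paper's 3D coder):

```python
def haar2d(img):
    """One level of a 2D Haar wavelet transform (unnormalised), pure Python.
    Returns the four subbands LL, LH, HL, HH."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # average (low-pass)
            LH[i // 2][j // 2] = (a + b - c - d) / 4.0  # vertical detail
            HL[i // 2][j // 2] = (a - b + c - d) / 4.0  # horizontal detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

# A smooth synthetic image: the high-frequency subbands turn out almost
# entirely insignificant against a magnitude threshold of 1.0.
img = [[i + j for j in range(8)] for i in range(8)]
LL, LH, HL, HH = haar2d(img)
significant = sum(abs(x) >= 1.0 for band in (LH, HL, HH) for row in band for x in row)
print(significant)  # 0
```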

  1. Development of a Mobile User Interface for Image-based Dietary Assessment.

    PubMed

    Kim, Sungye; Schap, Tusarebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J; Ebert, David S; Boushey, Carol J

    2010-12-31

    In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, from initial ideas through implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records.

  2. JPEG XS, a new standard for visually lossless low-latency lightweight image compression

    NASA Astrophysics Data System (ADS)

    Descampe, Antonin; Keinert, Joachim; Richter, Thomas; Fößel, Siegfried; Rouvroy, Gaël.

    2017-09-01

    JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to Xilinx Artix 7 or 25% of an FPGA similar to Altera Cyclone V. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.
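    The "entropy coding of the highest magnitude level of groups of coefficients" can be illustrated with a small sketch: for each group of four quantized coefficients, only the bitplane count of the largest magnitude is signalled, and the coefficients can then be included raw with that many bits each (plus sign). This is a simplified illustration of the idea, not the standard's actual bitstream syntax:

```python
def group_magnitudes(coeffs, group_size=4):
    """For each group of coefficients, the number of bitplanes needed by its
    largest magnitude; remaining bits would then be included raw (sketch)."""
    mags = []
    for k in range(0, len(coeffs), group_size):
        group = coeffs[k:k + group_size]
        largest = max(abs(c) for c in group)
        mags.append(largest.bit_length())
    return mags

# Three groups of four quantized wavelet coefficients (made-up values).
coeffs = [0, 1, -2, 0,   0, 0, 0, 0,   5, -13, 3, 1]
print(group_magnitudes(coeffs))  # [2, 0, 4]
```

    An all-zero group costs almost nothing, which is where the lightweight compression gain comes from.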

  3. Parallel efficient rate control methods for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image, split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will get truncated, in order to stop the execution prematurely and save time. However, none of them have been defined with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To that end, the design of our GPU-based codec is extended to allow the process to be stopped at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to a 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
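    The PCRD-Opt idea the above methods build on can be sketched as a greedy slope-based allocation: each code block offers truncation points with decreasing rate-distortion slopes, and the steepest distortion-reducing increments are taken until the rate budget is exhausted. A minimal sketch with hypothetical numbers, not the full EBCOT machinery:

```python
def pcrd_truncate(blocks, rate_budget):
    """Pick one truncation point per code block so total rate stays within
    budget while distortion is minimised. Each block is a list of cumulative
    (rate, distortion) pairs starting at (0, D0), assumed convex (decreasing
    rate-distortion slopes). A simplified PCRD-Opt sketch."""
    steps = []  # (slope, block_index, truncation_point_index)
    for b, pts in enumerate(blocks):
        for k in range(1, len(pts)):
            dr = pts[k][0] - pts[k - 1][0]
            dd = pts[k - 1][1] - pts[k][1]
            steps.append((dd / dr, b, k))
    steps.sort(reverse=True)  # steepest distortion reduction per bit first
    chosen = [0] * len(blocks)
    total_rate = 0.0
    for slope, b, k in steps:
        dr = blocks[b][k][0] - blocks[b][k - 1][0]
        if chosen[b] == k - 1 and total_rate + dr <= rate_budget:
            chosen[b] = k
            total_rate += dr
    return chosen, total_rate

# Two hypothetical code blocks with cumulative (rate, distortion) points.
blocks = [[(0, 100), (4, 40), (8, 20)],
          [(0, 80), (2, 50), (6, 30)]]
print(pcrd_truncate(blocks, rate_budget=10))  # ([1, 2], 10.0)
```

    The early-termination strategies discussed in the paper amount to predicting this truncation point before all bitplanes of a block have been coded.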

  4. The President and the Galaxy

    NASA Astrophysics Data System (ADS)

    2004-12-01

    On December 9-10, 2004, the ESO Paranal Observatory was honoured with an overnight visit by His Excellency the President of the Republic of Chile, Ricardo Lagos, and his wife, Mrs. Luisa Duran de Lagos. The distinguished guests were welcomed by the ESO Director General, Dr. Catherine Cesarsky, ESO's representative in Chile, Mr. Daniel Hofstadt, and Prof. Maria Teresa Ruiz, Head of the Astronomy Department at the Universidad de Chile, as well as numerous ESO staff members working at the VLT site. The visit was characterised as private, and the President spent considerable time in pleasant company with the Paranal staff, talking with and getting explanations from everybody. The visitors were shown the various high-tech installations at the observatory, including the Interferometric Tunnel with the VLTI delay lines and the first Auxiliary Telescope. Explanations were given by ESO astronomers and engineers, and the President, a keen amateur astronomer, gained a good impression of the wide range of exciting research programmes that are carried out with the VLT. President Lagos showed a deep interest and impressed everyone present with many highly relevant questions. Having enjoyed the spectacular sunset over the Pacific Ocean from the Residence terrace, the President met informally with the Paranal employees who had gathered for this unique occasion. Later, President Lagos visited the VLT Control Room from where the four 8.2-m Unit Telescopes and the VLT Interferometer (VLTI) are operated. Here, the President took part in an observing sequence of the spiral galaxy NGC 1097 (see PR Photo 35d/04) from the console of the MELIPAL telescope. After one more visit to the telescope platform at the top of Paranal, the President and his wife left the Observatory in the morning of December 10, 2004, flying back to Santiago.
    ESO PR Photo 35e/04: President Lagos Meets with ESO Staff at the Paranal Residencia. ESO PR Photo 35f/04: The Presidential Couple with Professor Maria Teresa Ruiz and the ESO Director General. ESO PR Photo 35g/04: President Lagos with ESO Staff. Captions: ESO PR Photo 35e/04 was obtained during President Lagos' meeting with ESO Staff at the Paranal Residencia. On ESO PR Photo 35f/04, President Lagos and Mrs. Luisa Duran de Lagos are seen at a quiet moment during the visit to the VLT Control Room, together with Prof. Maria Teresa Ruiz (far right), Head of the Astronomy Department at the Universidad de Chile, and the ESO Director General. ESO PR Photo 35g/04 shows President Lagos with some ESO staff members in the Paranal Residencia. VLT obtains a splendid photo of a unique galaxy, NGC 1097. ESO PR Photo 35d/04: Spiral Galaxy NGC 1097 (Melipal + VIMOS). Caption: ESO PR Photo 35d/04 is an almost-true colour composite based on three images made with the multi-mode VIMOS instrument on the 8.2-m Melipal (Unit Telescope 3) of ESO's Very Large Telescope. They were taken on the night of December 9-10, 2004, in the presence of the President of the Republic of Chile, Ricardo Lagos. Details are available in the Technical Note below. A unique and very beautiful image was obtained with the VIMOS instrument with President Lagos at the control desk.
    Located at a distance of about 45 million light-years in the southern constellation Fornax (the Furnace), NGC 1097 is a relatively bright, barred spiral galaxy of type SBb, seen face-on. At magnitude 9.5, and thus just 25 times fainter than the faintest objects that can be seen with the unaided eye, it appears in small telescopes as a bright, circular disc. ESO PR Photo 35d/04, taken on the night of December 9 to 10, 2004 with the VIsible Multi-Object Spectrograph (VIMOS), a four-channel multi-object spectrograph and imager attached to the 8.2-m VLT Melipal telescope, shows that the real structure is much more complicated. NGC 1097 is indeed a most interesting object in many respects. As this striking image reveals, NGC 1097 presents a centre that consists of a broken ring of bright knots surrounding the galaxy's nucleus. The sizes of these knots - presumably gigantic bubbles of hydrogen atoms that have lost one electron (HII regions) through the intense radiation from luminous massive stars - range from roughly 750 to 2000 light-years. The presence of these knots suggests that an energetic burst of star formation has recently occurred. NGC 1097 is also known as an example of the so-called LINER (Low-Ionization Nuclear Emission Region) class of galaxies. Objects of this type are believed to be low-luminosity examples of Active Galactic Nuclei (AGN), whose emission is thought to arise from matter (gas and stars) falling into oblivion in a central black hole. There is indeed much evidence that a supermassive black hole is located at the very centre of NGC 1097, with a mass several tens of millions of times that of the Sun. This is at least ten times more massive than the central black hole in our own Milky Way. However, NGC 1097 possesses only a comparatively faint nucleus, and the black hole in its centre must be on a very strict "diet": only a small amount of gas and stars is apparently being swallowed by the black hole at any given moment.
A turbulent past As can be clearly seen in the upper part of PR Photo 35d/04, NGC 1097 also has a small galaxy companion; it is designated NGC 1097A and is located about 42,000 light-years away from the centre of NGC 1097. This peculiar elliptical galaxy is 25 times fainter than its big brother and has a "box-like" shape, not unlike NGC 6771, the smallest of the three galaxies that make up the famous Devil's Mask, cf. ESO PR Photo 12/04. There is evidence that NGC 1097 and NGC 1097A have been interacting in the recent past. Another piece of evidence for this galaxy's tumultuous past is the presence of four jets - not visible on this image - discovered in the 1970's on photographic plates. These jets are now believed to be the captured remains of a disrupted dwarf galaxy that passed through the inner part of the disc of NGC 1097. Moreover, another interesting feature of this active galaxy is the fact that no less than two supernovae were detected inside it within a time span of only four years. SN 1999eu was discovered by Japanese amateur Masakatsu Aoki (Toyama, Japan) on November 5, 1999. This 17th-magnitude supernova was a peculiar Type II supernova, the end result of the core collapse of a very massive star. And in the night of January 5 to 6, 2003, Reverend Robert Evans (Australia) discovered another Type II supernova of 15th magnitude. Also visible in this very nice image which was taken during very good sky conditions - the seeing was well below 1 arcsec - are a multitude of background galaxies of different colours and shapes. Given the fact that the total exposure time for this three-colour image was just 11 min, it is a remarkable feat, demonstrating once again the very high efficiency of the VLT.

  5. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

    It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for business. Currently, the methods used to avoid server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu moment features, and a modified entropy feature are extracted from the segmented regions. These characteristics are applied to analyze and classify thermal faults, and then to make efficient energy-saving thermal management decisions such as job migration. For the resulting large feature space, principal component analysis is employed to reduce the feature dimensions, guaranteeing high processing speed without losing the fault feature information. Finally, the feature vectors are taken as input for SVM training, and the optimized SVM classifier performs the thermal fault diagnosis. This method supports data center management decisions: it can improve air-conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
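    One of the features mentioned above, an entropy measure over a segmented region, is straightforward to compute from the grey-level histogram; a hot spot produces a broader histogram and hence higher entropy than a uniform cool area. A minimal sketch with synthetic pixel values (the paper's "modified" entropy variant is not specified here):

```python
import math
from collections import Counter

def region_entropy(pixels):
    """Shannon entropy (bits) of a grey-level region: one simple texture
    feature that could feed a fault classifier (illustrative sketch)."""
    counts = Counter(pixels)
    n = len(pixels)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

uniform_region = [40] * 64                    # cool, featureless area
hot_spot = [40] * 48 + [200] * 8 + [220] * 8  # overheating component
print(region_entropy(uniform_region))  # 0.0
print(region_entropy(hot_spot))
```

    Features like this, stacked with Hu moments and texture statistics, would then be reduced with PCA before SVM training.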

  6. Aladin Lite: Lightweight sky atlas for browsers

    NASA Astrophysics Data System (ADS)

    Boch, Thomas

    2014-02-01

    Aladin Lite is a lightweight version of the Aladin tool, running in the browser and geared towards simple visualization of a sky region. It allows visualization of image surveys (JPEG multi-resolution HEALPix all-sky surveys) and permits superimposing tabular data (VOTable) and footprints (STC-S). Aladin Lite is powered by HTML5 canvas technology; it is easily embeddable on any web page and can also be controlled through a JavaScript API.

  7. Impact of JPEG2000 compression on endmember extraction and unmixing of remotely sensed hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada

    2010-07-01

    Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spectral information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation based on the endmembers derived by the different methods is also evaluated. Experiments are conducted using a hyperspectral data set collected by the NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from the U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
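    The linear mixing model underlying these techniques expresses each pixel spectrum as endmember spectra weighted by fractional abundances. A minimal two-endmember sketch using unconstrained least squares (the spectra are made-up; real unmixing typically adds non-negativity and sum-to-one constraints):

```python
def unmix(pixel, e1, e2):
    """Estimate fractional abundances for a two-endmember linear mixing
    model by solving the 2x2 normal equations (unconstrained least squares)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    a11, a12, a22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    b1, b2 = dot(e1, pixel), dot(e2, pixel)
    det = a11 * a22 - a12 * a12
    f1 = (b1 * a22 - b2 * a12) / det
    f2 = (a11 * b2 - a12 * b1) / det
    return f1, f2

# Hypothetical 4-band endmember spectra and a 30/70 mixed pixel.
e1 = [0.10, 0.40, 0.80, 0.30]
e2 = [0.60, 0.20, 0.10, 0.50]
pixel = [0.3 * a + 0.7 * b for a, b in zip(e1, e2)]
print(unmix(pixel, e1, e2))  # approximately (0.3, 0.7)
```

    Lossy compression perturbs both the extracted endmember spectra and the pixel vectors, which is exactly why the abundance estimates studied in the paper degrade with bit rate.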

  8. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
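    The energy-compaction property that makes the DCT suitable here is easy to demonstrate: for smooth image content, nearly all of the signal energy lands in the first few coefficients, so the high-frequency ones can be quantised coarsely or discarded. A pure-Python sketch of the 8-point DCT-II (the transform underlying JPEG), applied to a smooth brightness ramp:

```python
import math

def dct_1d(x):
    """Unitary N-point DCT-II, pure Python (illustrative, not optimised)."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

# A smooth brightness ramp: energy compacts into the lowest coefficients,
# and every even-indexed AC coefficient of a linear ramp is zero.
ramp = [16, 18, 20, 22, 24, 26, 28, 30]
coeffs = dct_1d(ramp)
print([round(c, 2) for c in coeffs])
```

    A 2D version applied to 8x8 blocks, followed by quantisation, gives the JPEG-style scheme the abstract varies.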

  9. Image-based electronic patient records for secured collaborative medical applications.

    PubMed

    Zhang, Jianguo; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Yao, Yihong; Cai, Weihua; Jin, Jin; Zhang, Guozhen; Sun, Kun

    2005-01-01

    We developed a Web-based system to interactively display image-based electronic patient records (EPR) for secured intranet and Internet collaborative medical applications. The system consists of four major components: the EPR DICOM gateway (EPR-GW), the image-based EPR repository server (EPR-Server), the Web server, and the EPR DICOM viewer (EPR-Viewer). In the EPR-GW and EPR-Viewer, security modules for digital signature and authentication are integrated to process the EPR data for integrity and authenticity. The privacy of EPR data during communication and exchange is provided by SSL/TLS-based secure communication. This work presents a new approach to creating and managing image-based EPR from actual patient records, and a way to use Web technology and the DICOM standard to build an open architecture for collaborative medical applications.
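    The integrity/authenticity check performed by such security modules can be illustrated with a small sketch. Note this uses a shared-key HMAC as a simplified stand-in for the paper's digital-signature module (a real deployment would use asymmetric signatures so the receiver cannot forge records), and the record content and key below are hypothetical:

```python
import hmac
import hashlib

key = b"shared-secret-key"  # hypothetical key material

def sign_epr(record: bytes) -> bytes:
    """Produce an authentication tag over an EPR record (HMAC-SHA256)."""
    return hmac.new(key, record, hashlib.sha256).digest()

def verify_epr(record: bytes, tag: bytes) -> bool:
    """Constant-time check that the record was not altered in transit."""
    return hmac.compare_digest(sign_epr(record), tag)

record = b"patient=anon42;study=CT-2005-001"
tag = sign_epr(record)
print(verify_epr(record, tag))                 # True
print(verify_epr(record + b"tampered", tag))   # False
```

    The transport privacy layer (SSL/TLS) then protects the record and tag against eavesdropping on the way between gateway and viewer.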

  10. Deepest Wide-Field Colour Image in the Southern Sky

    NASA Astrophysics Data System (ADS)

    2003-01-01

    LA SILLA CAMERA OBSERVES CHANDRA DEEP FIELD SOUTH. ESO PR Photo 02a/03. Caption: PR Photo 02a/03 shows a three-colour composite image of the Chandra Deep Field South (CDF-S), obtained with the Wide Field Imager (WFI) camera on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile). It was produced by the combination of about 450 images with a total exposure time of nearly 50 hours. The field measures 36 x 34 arcmin²; North is up and East is left. Technical information is available below. The combined efforts of three European teams of astronomers, targeting the same sky field in the southern constellation Fornax (the Furnace), have enabled them to construct a very deep, true-colour image, opening an exceptionally clear view towards the distant universe. The image (PR Photo 02a/03) covers an area somewhat larger than the full moon. It displays more than 100,000 galaxies, several thousand stars and hundreds of quasars. It is based on images with a total exposure time of nearly 50 hours, collected under good observing conditions with the Wide Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the ESO La Silla Observatory (Chile), many of them extracted from the ESO Science Data Archive. The position of this southern sky field was chosen by Riccardo Giacconi (Nobel Laureate in Physics 2002) at a time when he was Director General of ESO, together with Piero Rosati (ESO). It was selected as a sky region towards which the NASA Chandra X-ray satellite observatory, launched in July 1999, would be pointed while carrying out a very long exposure (lasting a total of 1 million seconds, or 278 hours) in order to detect the faintest possible X-ray sources. The field is now known as the Chandra Deep Field South (CDF-S).
    The new WFI photo of CDF-S does not reach quite as deep as the available images of the "Hubble Deep Fields" (HDF-N in the northern and HDF-S in the southern sky, cf. e.g. ESO PR Photo 35a/98), but the field-of-view is about 200 times larger. The present image displays about 50 times more galaxies than the HDF images, and therefore provides a more representative view of the universe. The WFI CDF-S image will now form a most useful basis for the very extensive and systematic census of the population of distant galaxies and quasars, allowing at once a detailed study of all evolutionary stages of the universe since it was about 2 billion years old. These investigations have started and are expected to provide information about the evolution of galaxies in unprecedented detail. They will offer insights into the history of star formation and how the internal structure of galaxies changes with time and, not least, throw light on how these two evolutionary aspects are interconnected. GALAXIES IN THE WFI IMAGE. ESO PR Photo 02b/03. Caption: PR Photo 02b/03 contains a collection of twelve subfields from the full WFI Chandra Deep Field South (WFI CDF-S), centred on (pairs or groups of) galaxies. Each of the subfields measures 2.5 x 2.5 arcmin² (635 x 658 pix²; 1 pixel = 0.238 arcsec). North is up and East is left. Technical information is available below. The WFI CDF-S colour image, of which the full field is shown in PR Photo 02a/03, was constructed from all available observations in the optical B-, V- and R-bands obtained under good conditions with the Wide Field Imager (WFI) on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile), and now stored in the ESO Science Data Archive. It is the "deepest" image ever taken with this instrument.
    It covers a sky field measuring 36 x 34 arcmin², i.e., an area somewhat larger than that of the full moon. The observations were collected during a period of nearly four years, beginning in January 1999 when the WFI instrument was first installed (cf. ESO PR 02/99) and ending in October 2002. Altogether, nearly 50 hours of exposure were collected in the three filters combined here, cf. the technical information below. Although it is possible to identify more than 100,000 galaxies in the image - some of which are shown in PR Photo 02b/03 - it is still remarkably "empty" by astronomical standards. Even the brightest stars in the field (of visual magnitude 9) can hardly be seen by human observers with binoculars. In fact, the area density of bright, nearby galaxies is only half of what it is in "normal" sky fields. Comparatively empty fields like this one provide an unusually clear view towards the distant regions in the universe and thus open a window towards the earliest cosmic times. Research projects in the Chandra Deep Field South: ESO PR Photo 02c/03 and ESO PR Photo 02d/03. Caption: PR Photos 02c-d/03 show two sky fields within the WFI image of CDF-S, reproduced at full (pixel) size to illustrate the exceptional information richness of these data. The subfields measure 6.8 x 7.8 arcmin² (1717 x 1975 pixels) and 10.1 x 10.5 arcmin² (2545 x 2635 pixels), respectively. North is up and East is left. Technical information is available below. Astronomers from different teams and disciplines have been quick to join forces in a world-wide co-ordinated effort around the Chandra Deep Field South.
    Observations of this area are now being performed by some of the most powerful astronomical facilities and instruments. They include space-based X-ray and infrared observations by ESA's XMM-Newton, NASA's Chandra, the Hubble Space Telescope (HST) and soon SIRTF (scheduled for launch in a few months), as well as imaging and spectroscopic observations in the infrared and optical parts of the spectrum by telescopes at the ground-based observatories of ESO (La Silla and Paranal) and NOAO (Kitt Peak and Tololo). A huge database is currently being created that will help to analyse the evolution of galaxies in all currently feasible respects. All participating teams have agreed to make their data on this field publicly available, thus providing the world-wide astronomical community with a unique opportunity to perform competitive research, joining forces within this vast scientific project. Concerted observations: The optical true-colour WFI image presented here forms an important part of this broad, concerted approach. It combines observations of three scientific teams that have engaged in complementary scientific projects, thereby capitalizing on this very powerful combination of their individual observations. The following teams are involved in this work: * COMBO-17 (Classifying Objects by Medium-Band Observations in 17 filters): an international collaboration led by Christian Wolf and other scientists at the Max-Planck-Institut für Astronomie (MPIA, Heidelberg, Germany). This team used 51 hours of WFI observing time to obtain images through five broad-band and twelve medium-band optical filters in the visual spectral region in order to measure the distances (by means of "photometric redshifts") and star-formation rates of about 10,000 galaxies, thereby also revealing their evolutionary status. * EIS (ESO Imaging Survey): a team of visiting astronomers from the ESO community and beyond, led by Luiz da Costa (ESO).
    They observed the CDF-S for 44 hours in six optical bands with the WFI camera on the MPG/ESO 2.2-m telescope and 28 hours in two near-infrared bands with the SOFI instrument at the ESO 3.5-m New Technology Telescope (NTT), both at La Silla. These observations form part of the Deep Public Imaging Survey that covers a total sky area of 3 square degrees. * GOODS (The Great Observatories Origins Deep Survey): another international team (on the ESO side, led by Catherine Cesarsky) that focusses on the coordination of deep space- and ground-based observations on a smaller, central area of the CDF-S in order to image the galaxies in many different spectral wavebands, from X-rays to radio. GOODS has contributed 40 hours of WFI time for observations in three broad-band filters that were designed for the selection of targets to be spectroscopically observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory (Chile), for which over 200 hours of observations are planned. About 10,000 galaxies will be spectroscopically observed in order to determine their redshift (distance), star formation rate, etc. Another important contribution to this large research undertaking will come from the GEMS project. This is an "HST treasury programme" (with Hans-Walter Rix from MPIA as Principal Investigator) which observes the 10,000 galaxies identified in COMBO-17 - and eventually the entire WFI field with HST - to show the evolution of their shapes with time. Great questions: With the combination of data from many wavelength ranges now at hand, the astronomers are embarking upon studies of the many different processes in the universe. They expect to shed more light on several important cosmological questions, such as: * How and when was the first generation of stars born? * When exactly was the neutral hydrogen in the universe ionized for the first time by powerful radiation emitted from the first stars and active galactic nuclei?
    * How did galaxies and groups of galaxies evolve during the past 13 billion years? * What is the true nature of those elusive objects that are only seen at infrared and submillimetre wavelengths (cf. ESO PR 23/02)? * Which fraction of galaxies had an "active" nucleus (probably with a black hole at the centre) in their past, and how long did this phase last? Moreover, since these extensive optical observations were obtained in the course of a dozen observing periods over several years, it is also possible to perform studies of certain variable phenomena: * How many variable sources are seen and what are their types and properties? * How many supernovae are detected per time interval, i.e. what is the supernova frequency at different cosmic epochs? * How do these processes depend on each other? This is just a short and very incomplete list of the questions astronomers world-wide will address using all the complementary observations. No doubt the coming studies of the Chandra Deep Field South - with these and other data - will be most exciting and instructive! Other wide-field images: Other wide-field images from the WFI have been published in various ESO press releases during the past four years; they are also available at the WFI Photo Gallery. A collection of full-resolution files (TIFF format) is available on a WFI CD-ROM. Technical Information: The very extensive data reduction and colour image processing needed to produce these images were performed by Mischa Schirmer and Thomas Erben at the "Wide Field Expertise Center" of the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) in Germany. It was done by means of a software pipeline specialised for the reduction of data from multiple-CCD wide-field imaging cameras. This pipeline is mainly based on publicly available software modules and algorithms (EIS, FLIPS, LDAC, Terapix, Wifix).
The image was constructed from about 150 exposures in each of the following wavebands: B-band (centred at wavelength 456 nm; here rendered as blue, 15.8 hours total exposure time), V-band (540 nm; green, 15.6 hours) and R-band (652 nm; red, 17.8 hours). Only images taken under sufficiently good observing conditions (defined as seeing less than 1.1 arcsec) were included. In total, 450 images were assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). More than 2 terabytes (TB) of temporary files were produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation and a 1.8 GHz dual-processor Linux PC. The final colour image was assembled in Adobe Photoshop. The observations were performed by ESO (GOODS, EIS) and the COMBO-17 collaboration in the period 1/1999-10/2002.

  11. Global Imagery Browse Services (GIBS) - Rapidly Serving NASA Imagery for Applications and Science Users

    NASA Astrophysics Data System (ADS)

    Schmaltz, J. E.; Ilavajhala, S.; Plesea, L.; Hall, J. R.; Boller, R. A.; Chang, G.; Sadaqathullah, S.; Kim, R.; Murphy, K. J.; Thompson, C. K.

    2012-12-01

    Expedited processing of imagery from NASA satellites for near-real time use by non-science applications users has a long history, especially since the beginning of the Terra and Aqua missions. Several years ago, the Land Atmosphere Near-real-time Capability for EOS (LANCE) was created to greatly expand the range of near-real time data products from a variety of Earth Observing System (EOS) instruments. NASA's Earth Observing System Data and Information System (EOSDIS) began exploring methods to distribute these data as imagery in an intuitive, geo-referenced format, which would be available within three hours of acquisition. Toward this end, EOSDIS has developed the Global Imagery Browse Services (GIBS, http://earthdata.nasa.gov/gibs) to provide highly responsive, scalable, and expandable imagery services. The baseline technology chosen for GIBS was a Tiled Web Mapping Service (TWMS) developed at the Jet Propulsion Laboratory. Using this, global images and mosaics are divided into tiles with fixed bounding boxes for a pyramid of fixed resolutions. Initially, the satellite imagery is created at the existing data systems for each sensor, ensuring the oversight of those most knowledgeable about the science. There, the satellite data is geolocated and converted to an image format such as JPEG, TIFF, or PNG. The GIBS ingest server retrieves imagery from the various data systems and converts them into image tiles, which are stored in a highly-optimized raster format named Meta Raster Format (MRF). The image tiles are then served to users via HTTP by means of an Apache module. Services are available for the entire globe (lat-long projection) and for both polar regions (polar stereographic projection). Requests to the services can be made with the non-standard, but widely known, TWMS format or via the well-known OGC Web Map Tile Service (WMTS) standard format. Standard OGC Web Map Service (WMS) access to the GIBS server is also available. 
In addition, users may request a KML pyramid. This variety of access methods allows stakeholders to develop visualization/browse clients for a variety of specific audiences. Currently, EOSDIS is providing an OpenLayers web client, Worldview (http://earthdata.nasa.gov/worldview), as an interface to GIBS. A variety of other clients can also be developed using such tools as Google Earth, the Google Earth browser plugin, Esri's Adobe Flash/Flex client library, NASA World Wind, Perceptive Pixel clients, Esri's iOS client library, and OpenLayers for Mobile. The imagery browse capabilities from GIBS can be combined with other EOSDIS services (e.g., ECHO OpenSearch) via a client that ties them together to provide an interface that enables data download from the onscreen imagery. Future plans for GIBS include providing imagery based on science-quality data from the entire data record of these EOS instruments.
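The fixed-resolution tile pyramid described above can be sketched in a few lines. This is a minimal illustration, not GIBS's published configuration: the 512-pixel tile size, the 2x1 top-level grid, and the per-level doubling are assumptions made for the example.

```python
def tile_index(lon, lat, level, tile_size=512):
    """Map a lon/lat (degrees) to (col, row) tile indices at a given
    pyramid level.  Assumes a geographic (lat-long) projection whose
    top level (level 0) is a 2x1 grid of tiles covering the globe,
    doubling in each dimension per level -- a common tiled-WMS layout,
    not necessarily GIBS's exact configuration."""
    cols = 2 * (2 ** level)          # tiles across 360 degrees of longitude
    rows = 2 ** level                # tiles across 180 degrees of latitude
    col = int((lon + 180.0) / 360.0 * cols)
    row = int((90.0 - lat) / 180.0 * rows)
    # clamp the edge cases lon = 180 and lat = -90 into the last tile
    return min(col, cols - 1), min(row, rows - 1)
```

A WMTS client computes indices like these from the viewport, then requests one small, cacheable image per (level, row, col) triple instead of a bespoke full-scene render.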

  12. Novel Algorithm for Classification of Medical Images

    NASA Astrophysics Data System (ADS)

    Bhushan, Bharat; Juneja, Monika

    2010-11-01

Content-based image retrieval (CBIR) methods for medical image databases have been designed to support specific tasks, such as retrieval of medical images. These methods cannot be transferred to other medical applications, since different imaging modalities require different types of processing. To enable content-based queries in diverse collections of medical images, the retrieval system must be familiar with the current image class prior to query processing. Further, almost all such systems deal only with the DICOM imaging format. In this paper, a novel algorithm for classifying medical images according to their modalities, based on energy information obtained from the wavelet transform, is described. Two types of wavelets have been used, and it is shown that the energy obtained in either case is quite distinct for each body part. The technique can be applied to different image formats; results are shown for the JPEG imaging format.
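The wavelet-energy features described above can be illustrated with a one-level 2D Haar transform. This is a hedged sketch: the abstract does not name the two wavelet families or the decomposition depth used, so Haar and a single level are assumptions for the example.

```python
import numpy as np

def haar_subband_energies(img):
    """One-level 2D Haar transform of a grayscale image (even dimensions),
    returning the energy (sum of squared coefficients) of each subband
    LL, LH, HL, HH.  These per-subband energies are the kind of feature
    vector a modality classifier can be trained on."""
    a = np.asarray(img, dtype=float)
    # filter along rows: average / difference of adjacent pixel pairs
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    # filter along columns, on both row-filtered halves
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return {s: float(np.sum(b ** 2)) for s, b in
            [("LL", ll), ("LH", lh), ("HL", hl), ("HH", hh)]}
```

Because the normalized Haar transform is orthonormal, the four subband energies sum to the energy of the input image, so the features redistribute rather than lose signal energy.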

  13. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

The diversity of digital image capturing devices on the market today is quite astonishing, ranging from low-cost CCD scanners and digital cameras (for both action and still scenes), through mid-range CCD scanners for desktop publishing and pre-press applications, to high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements of these applications might differ considerably, a number of image capturing and color management facilities, as well as other services, can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of input jobs is based on a generic input device model. Through this model we abstract away the specific scanner parameters and define scan job definitions by a number of absolute parameters. As a result, scan job definitions are less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction between the generic parameters and the color characterization (i.e., the ICC profile). 
Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different forms the scanning applications (or modules) can take. In the last section, we briefly summarize the presented material and point out trends for future development.

  14. Meteosat Indian Ocean Data Coverage (IODC): Full Disk - NOAA GOES

    Science.gov Websites

NOAA GOES Geostationary Satellite Server. These images are updated every six hours from data provided by Europe's Meteorological Satellite

  15. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  16. Using Purpose-Built Functions and Block Hashes to Enable Small Block and Sub-file Forensics

    DTIC Science & Technology

    2010-01-01

JPEGs. We tested precarve using the nps-2009-canon2-gen6 (Garfinkel et al., 2009) disk image. The disk image was created with a 32 MB SD card and a ... analysis of n-grams in the fragment. Fig. 1: usage of a 160 GB iPod as reported by iTunes 8.2.1 (6) (top), as reported by the file system (bottom center), and as computed with random sampling (bottom right). Note that iTunes usage is actually in GiB, even though the program displays the "GB" label.

  17. Reproducibility of the NEPTUNE descriptor-based scoring system on whole-slide images and histologic and ultrastructural digital images.

    PubMed

    Barisoni, Laura; Troost, Jonathan P; Nast, Cynthia; Bagnasco, Serena; Avila-Casado, Carmen; Hodgin, Jeffrey; Palmer, Matthew; Rosenberg, Avi; Gasim, Adil; Liensziewski, Chrysta; Merlino, Lino; Chien, Hui-Ping; Chang, Anthony; Meehan, Shane M; Gaut, Joseph; Song, Peter; Holzman, Lawrence; Gibson, Debbie; Kretzler, Matthias; Gillespie, Brenda W; Hewitt, Stephen M

    2016-07-01

The multicenter Nephrotic Syndrome Study Network (NEPTUNE) digital pathology scoring system employs a novel and comprehensive methodology to document pathologic features from whole-slide images, immunofluorescence and ultrastructural digital images. To estimate inter- and intra-reader concordance of this descriptor-based approach, data from 12 pathologists (eight NEPTUNE and four non-NEPTUNE) with experience ranging from in-training to 30 years were collected. A descriptor reference manual was generated and a webinar-based protocol for consensus/cross-training implemented. Intra-reader concordance for 51 glomerular descriptors was evaluated on JPEG images by seven NEPTUNE pathologists scoring 131 glomeruli three times (Tests I, II, and III), each test following a consensus webinar review. Inter-reader concordance of glomerular descriptors was evaluated in 315 glomeruli by all pathologists; interstitial fibrosis and tubular atrophy (244 cases, whole-slide images) and four ultrastructural podocyte descriptors (178 cases, JPEG images) were evaluated once by six and five pathologists, respectively. Cohen's kappa for inter-reader concordance for 48/51 glomerular descriptors with sufficient observations was moderate (0.40

  18. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

In this paper we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain from 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
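The decorrelation step described above can be sketched with a plain eigendecomposition of the inter-plane covariance matrix. This is an illustration only; the paper's fast, low-memory KL basis construction and its optimal loss-allocation scheme are not reproduced here.

```python
import numpy as np

def kl_decorrelate(planes):
    """Project N color planes (each flattened to a vector) onto the
    Karhunen-Loeve basis, i.e. the eigenvectors of their covariance
    matrix, so the resulting coefficient planes are uncorrelated and
    can be compressed independently.

    planes: array of shape (N, npixels).
    Returns (coeffs, basis, mean) with coeffs = basis @ (planes - mean)."""
    p = np.asarray(planes, dtype=float)
    mean = p.mean(axis=1, keepdims=True)
    cov = np.cov(p)                      # N x N covariance between planes
    w, v = np.linalg.eigh(cov)           # eigenvalues in ascending order
    basis = v[:, ::-1].T                 # rows = eigenvectors, largest first
    return basis @ (p - mean), basis, mean
```

Since the basis is orthonormal, the planes are recovered exactly as `basis.T @ coeffs + mean`; in a lossy codec, most of the bit budget goes to the first (highest-variance) coefficient plane.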

  19. An extensible and lightweight architecture for adaptive server applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorton, Ian; Liu, Yan; Trivedi, Nihar

    2008-07-10

Server applications augmented with behavioral adaptation logic can react to environmental changes, creating self-managing server applications with improved quality of service at runtime. However, developing adaptive server applications is challenging due to the complexity of the underlying server technologies and highly dynamic application environments. This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications. ASF provides a clear separation between the implementation of adaptive behavior and the business logic of the server application. This means a server application can be extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality to scale an image based on the load of the server and network connection speed. The experimental evaluation demonstrates the performance gains possible by adaptive behavior and the low overhead introduced by ASF.

  20. Map_plot and bgg_plot: software for integration of geoscience datasets

    NASA Astrophysics Data System (ADS)

    Gaillot, Philippe; Punongbayan, Jane T.; Rea, Brice

    2004-02-01

Since 1985, the Ocean Drilling Program (ODP) has been supporting multidisciplinary research in exploring the structure and history of Earth beneath the oceans. After more than 200 Legs, complementary datasets covering different geological environments, periods and space scales have been obtained and distributed world-wide using the ODP-Janus and Lamont-Doherty Earth Observatory Borehole Research Group (LDEO-BRG) database servers. In Earth Sciences, more than in any other science, the ensemble of these data is characterized by heterogeneous formats and graphical representation modes. In order to fully and quickly assess this information, a set of Unix/Linux and Generic Mapping Tools (GMT)-based C programs has been designed to convert and integrate datasets acquired during the present ODP and the future Integrated ODP (IODP) Legs. Using ODP Leg 199 datasets, we show examples of the capabilities of the proposed programs. The program map_plot is used to easily display datasets onto 2-D maps. The program bgg_plot (borehole geology and geophysics plot) displays data with respect to depth and/or time. The latter program includes depth shifting, filtering and plotting of core summary information, continuous and discrete-sample core measurements (e.g. physical properties, geochemistry, etc.), in situ continuous logs, magneto- and bio-stratigraphies, specific sedimentological analyses (lithology, grain size, texture, porosity, etc.), as well as core and borehole wall images. Outputs from both programs are initially produced in PostScript format that can be easily converted to Portable Document Format (PDF) or standard image formats (GIF, JPEG, etc.) using widely distributed conversion programs. Based on command line operations and customization of parameter files, these programs can be included in other shell- or database-scripts, automating plotting procedures of data requests. 
As an open source software, these programs can be customized and interfaced to fulfill any specific plotting need of geoscientists using ODP-like datasets.

  1. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.

  2. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
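The correlation-sorting preprocessing can be illustrated with a simple greedy column reordering: each column is followed by the remaining column most correlated with it, making the 2D segment matrix smoother and therefore friendlier to an off-the-shelf image coder. The authors' exact procedure may differ; treat this as a sketch of the idea, not their algorithm.

```python
import numpy as np

def sort_columns_by_correlation(m):
    """Greedily reorder the columns of a 2D array (rows = samples,
    columns = S-EMG segments) so adjacent columns are highly correlated.
    Returns (sorted_matrix, column_order); the order must be transmitted
    so the decoder can undo the permutation."""
    m = np.asarray(m, dtype=float)
    n = m.shape[1]
    corr = np.corrcoef(m.T)              # n x n column-correlation matrix
    remaining = set(range(1, n))
    order = [0]                          # start from the first column
    while remaining:
        last = order[-1]
        nxt = max(remaining, key=lambda j: corr[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return m[:, order], order
```

The permutation is lossless and cheap to invert, so any gain it buys in JPEG2000 or H.264 intraframe coding comes at the cost of only a small side-channel (the column order).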

  3. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
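The CFA-aware idea behind such schemes can be sketched by splitting the Bayer mosaic into its color planes and forming simple prediction residuals; neighboring samples within one plane share a color and are far more predictable than the interleaved mosaic. The paper's block-adaptive coder itself is not reproduced, and the RGGB layout below is an assumption for the example.

```python
import numpy as np

def bayer_planes(raw):
    """Split a Bayer RGGB mosaic into its four color planes so each can
    be predicted and entropy-coded separately."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, g1, g2, b

def horizontal_residuals(plane):
    """Left-neighbor prediction residuals; the first column is kept
    verbatim so the plane is exactly recoverable (lossless)."""
    p = np.asarray(plane, dtype=np.int64)
    res = p.copy()
    res[:, 1:] = p[:, 1:] - p[:, :-1]
    return res
```

Decoding is a cumulative sum along each row, so no information is lost; a real coder would follow the residuals with an adaptive entropy stage, selected per block as the paper proposes.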

  4. Informatics in radiology (infoRAD): Vendor-neutral case input into a server-based digital teaching file system.

    PubMed

    Kamauu, Aaron W C; DuVall, Scott L; Robison, Reid J; Liimatta, Andrew P; Wiggins, Richard H; Avrin, David E

    2006-01-01

    Although digital teaching files are important to radiology education, there are no current satisfactory solutions for export of Digital Imaging and Communications in Medicine (DICOM) images from picture archiving and communication systems (PACS) in desktop publishing format. A vendor-neutral digital teaching file, the Radiology Interesting Case Server (RadICS), offers an efficient tool for harvesting interesting cases from PACS without requiring modifications of the PACS configurations. Radiologists push imaging studies from PACS to RadICS via the standard DICOM Send process, and the RadICS server automatically converts the DICOM images into the Joint Photographic Experts Group format, a common desktop publishing format. They can then select key images and create an interesting case series at the PACS workstation. RadICS was tested successfully against multiple unmodified commercial PACS. Using RadICS, radiologists are able to harvest and author interesting cases at the point of clinical interpretation with minimal disruption in clinical work flow. RSNA, 2006

  5. Development of a Mobile User Interface for Image-based Dietary Assessment

    PubMed Central

    Kim, SungYe; Schap, TusaRebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J.; Ebert, David S.; Boushey, Carol J.

    2011-01-01

    In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to a client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records. PMID:24455755

  6. GUI implementation of image encryption and decryption using Open CV-Python script on secured TFTP protocol

    NASA Astrophysics Data System (ADS)

    Reddy, K. Rasool; Rao, Ch. Madhava

    2018-04-01

Currently, safety is one of the primary concerns in the transmission of images, owing to their increasing use in industrial applications, so it is necessary to secure image data from unauthorized individuals. Various strategies have been investigated to secure such data; among these, encryption is one of the most prominent. This paper applies the Rijndael (AES) algorithm to protect the data from unauthorized parties. An Exponential Key Exchange (EKE) scheme is also introduced to exchange the key between client and server. Data are exchanged over the network between client and server through a simple protocol known as the Trivial File Transfer Protocol (TFTP). This protocol is used mainly in embedded servers to transfer data, and it can also protect the data if protection capabilities are integrated. In this paper, a GUI environment for image encryption and decryption is implemented. All experiments were carried out on Linux using an OpenCV-Python script.
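The exchange-then-encrypt flow described above can be sketched as follows. The group parameters are small demo values, and the SHA-256 keystream cipher is an illustrative stand-in for the paper's Rijndael (AES) step, which is not reimplemented here.

```python
import hashlib
import secrets

# Demo Diffie-Hellman-style parameters; a real deployment would use a
# standardized 2048-bit (or larger) group, not a 64-bit prime.
P = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, prime
G = 5

def dh_keypair():
    """Exponential key exchange: pick a secret exponent x, publish g^x mod p."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def shared_key(my_secret, their_public):
    """Both sides derive the same g^(xy) mod p and hash it into key bytes."""
    s = pow(their_public, my_secret, P)
    return hashlib.sha256(str(s).encode()).digest()

def xor_stream(key, data):
    """Toy keystream cipher (encrypt == decrypt), standing in for AES:
    XOR the data against SHA-256(key || counter) blocks."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))
```

Client and server each run `dh_keypair`, swap the public halves (e.g. over TFTP), derive the identical shared key, and then encrypt the image payload with it.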

  7. Web tools for large-scale 3D biological images and atlases

    PubMed Central

    2012-01-01

Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10 GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in the context of a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole-image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume. PMID:22676296
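The server-side sectioning and tiling can be sketched for the axis-aligned case: cut one 2D plane out of the volume, then split it into fixed-size tiles for delivery. The 256-pixel tile size is an assumption, and IIP3D's arbitrary oblique sectioning is not shown.

```python
import numpy as np

def section_tiles(volume, axis, index, tile=256):
    """Extract one axis-aligned 2D section from a 3D volume and split it
    into tiles keyed by (tile_row, tile_col), the unit a tiled image
    server compresses (e.g. to JPEG) and sends to the browser.  Edge
    tiles may be smaller than the nominal tile size."""
    plane = np.take(volume, index, axis=axis)
    h, w = plane.shape
    tiles = {}
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            tiles[(r // tile, c // tile)] = plane[r:r + tile, c:c + tile]
    return tiles
```

Because the client only ever requests the tiles covering its viewport, response time stays roughly constant regardless of how large the underlying volume is, which is the behaviour the paper reports.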

  8. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser

    PubMed Central

    Almeida, Jonas S.; Iriabho, Egiebade E.; Gorrepati, Vijaya L.; Wilkinson, Sean R.; Grüneberg, Alexander; Robbins, David E.; Hackney, James R.

    2012-01-01

    Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. 
This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local “download and installation”. PMID:22934238

  9. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser.

    PubMed

    Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R

    2012-01-01

Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".

  10. Classification of galaxy type from images using Microsoft R Server

    NASA Astrophysics Data System (ADS)

    de Vries, Andrie

    2017-06-01

Many astronomers working in the field of AstroInformatics write code as part of their work. Although the programming language of choice is Python, a small number (8%) use R. R has specific strengths in the domain of statistics, and is often viewed as limited in the size of data it can handle. However, Microsoft R Server is a product that removes these limitations by being able to process much larger amounts of data. I present some highlights of R Server by illustrating how to fit a convolutional neural network using R. The specific task is to classify galaxies using only images extracted from the Sloan Digital Sky Survey's SkyServer.

  11. The DICOM-based radiation therapy information system

    NASA Astrophysics Data System (ADS)

    Law, Maria Y. Y.; Chan, Lawrence W. C.; Zhang, Xiaoyan; Zhang, Jianguo

    2004-04-01

Similar to DICOM for PACS (Picture Archiving and Communication System), standards for radiotherapy (RT) information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions), which comprise more than just images. This presentation describes how a DICOM-based RT Information System Server can be built based on PACS technology and its data model for web-based distribution. Methods: The RT Information System consists of a Modality Simulator, a data format translator, an RT Gateway, the DICOM RT Server, and the Web Application Server. The DICOM RT Server was designed based on a PACS data model and was connected to a Web Application Server for distribution of the RT information, including therapeutic plans, structures, dose distributions, images and records. The various DICOM RT objects of the patient transmitted to the RT Server were routed to the Web Application Server, where the contents of the DICOM RT objects were decoded and mapped to the corresponding locations of the RT data model for display in the specially designed graphical user interface. Non-DICOM objects were first rendered to DICOM RT objects in the translator before they were sent to the RT Server. Results: Ten clinical cases were collected from different hospitals for evaluation of the DICOM-based RT Information System. They were successfully routed through the data flow and displayed in the client workstation of the RT Information System. Conclusion: Using the DICOM-RT standards, integration of RT data from different vendors is possible.

  12. Computer image analysis in obtaining characteristics of images: greenhouse tomatoes in the process of generating learning sets of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.

    2014-04-01

    The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program processes pictures in JPEG format, acquires statistical information about each picture, and exports it to an external file. The software is intended to batch-analyze the collected research material, with the obtained information saved as a CSV file. It computes 33 independent parameters to describe each tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of spherical shape.
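    A minimal sketch of the batch workflow described above, computing per-image statistics and exporting them as CSV rows. The statistics and function names here are illustrative stand-ins, not the paper's 33 parameters:

```python
import csv, io, statistics

def image_stats(pixels):
    """Compute simple descriptors of a grayscale image given as a 2D list."""
    flat = [p for row in pixels for p in row]
    return {
        "mean": statistics.mean(flat),
        "stdev": statistics.pstdev(flat),  # population standard deviation
        "min": min(flat),
        "max": max(flat),
    }

def export_csv(named_images):
    """Batch-analyze (name, pixels) pairs and return their descriptors as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "mean", "stdev", "min", "max"])
    writer.writeheader()
    for name, pixels in named_images:
        writer.writerow({"name": name, **image_stats(pixels)})
    return buf.getvalue()

tomato = [[10, 20], [30, 40]]
print(export_csv([("tomato_001", tomato)]))
```

In a real pipeline the 2D list would come from decoding a JPEG file; everything downstream of decoding looks like this.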

  13. Recognition of rotated images using the multi-valued neuron and rotation-invariant 2D Fourier descriptors

    NASA Astrophysics Data System (ADS)

    Aizenberg, Evgeni; Bigio, Irving J.; Rodriguez-Diaz, Eladio

    2012-03-01

    The Fourier descriptors paradigm is a well-established approach for affine-invariant characterization of shape contours. In the work presented here, we extend this method to images, and obtain a 2D Fourier representation that is invariant to image rotation. The proposed technique retains phase uniqueness, and therefore structural image information is not lost. Rotation-invariant phase coefficients were used to train a single multi-valued neuron (MVN) to recognize satellite and human face images rotated by a wide range of angles. Experiments yielded 100% and 96.43% classification rate for each data set, respectively. Recognition performance was additionally evaluated under effects of lossy JPEG compression and additive Gaussian noise. Preliminary results show that the derived rotation-invariant features combined with the MVN provide a promising scheme for efficient recognition of rotated images.
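    The classic 1D Fourier-descriptor property that this work extends to 2D is easy to demonstrate: rotating a shape multiplies every contour point, and hence every Fourier coefficient, by the same unit-magnitude factor, so the coefficient magnitudes are unchanged. A toy sketch with a naive DFT and a square contour:

```python
import cmath

def dft(z):
    """Naive DFT of a complex sequence (contour samples as x + iy)."""
    n = len(z)
    return [sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def descriptor_magnitudes(contour):
    """Rotation-invariant descriptors: magnitudes of the Fourier coefficients."""
    return [abs(c) for c in dft(contour)]

# A square contour sampled at its corners, as complex points.
square = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
# Rotating the shape multiplies every point by exp(i*theta) ...
theta = 0.7
rotated = [p * cmath.exp(1j * theta) for p in square]
# ... which, by linearity, scales every Fourier coefficient by that same
# unit factor, leaving the magnitudes unchanged.
a = descriptor_magnitudes(square)
b = descriptor_magnitudes(rotated)
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # True
```

The paper's contribution is keeping the *phase* as well (via a rotation-invariant 2D construction), since discarding phase, as above, also discards structural image information.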

  14. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of client systems, their cost is a major problem, so a Web-based system is the most effective solution. But a Web browser alone cannot display medical images with the image processing they require, such as a lookup-table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination delivers the look and feel of an imaging workstation, in both functionality and speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, such as would otherwise be done by plug-in technology like Java applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system scales well with the number of clients.
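    The lookup-table transformation mentioned above is typically a window/level mapping from raw pixel values to 8-bit display values. A minimal sketch; the window, level, and input range below are illustrative, not the paper's values:

```python
def window_level_lut(window, level, in_max=4095):
    """Precompute an 8-bit LUT for raw values 0..in_max: values below
    (level - window/2) map to 0, values above (level + window/2) to 255,
    and the window in between maps linearly."""
    lower = level - window / 2
    lut = []
    for p in range(in_max + 1):
        v = int((p - lower) * 255 / window)
        lut.append(max(0, min(255, v)))
    return lut

# e.g. a 400-wide window centered at 40 for a 12-bit image
lut = window_level_lut(window=400, level=40)
print(lut[0], lut[40], lut[4095])  # 102 127 255
```

On the server, each displayed image is just the DICOM pixel array passed through such a table before being encoded for the browser; changing window/level re-runs only this cheap step.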

  15. USNO Image and Catalog Archive Server - Naval Oceanography Portal

    Science.gov Websites


  16. Clinical experiences with an ASP model backup archive for PACS images

    NASA Astrophysics Data System (ADS)

    Liu, Brent J.; Cao, Fei; Documet, Luis; Huang, H. K.; Muldoon, Jean

    2003-05-01

    Last year we presented a fault-tolerant backup archive using an Application Service Provider (ASP) model for disaster recovery. The purpose of this paper is to provide an update and clinical experience with implementing the ASP model archive solution for short-term backup of clinical PACS image data, as well as possible applications other than disaster recovery. The ASP backup archive provides instantaneous, automatic backup of acquired PACS image data and instantaneous recovery of stored PACS image data, all at a low operational cost and with little human intervention. This solution can be used for a variety of scheduled and unscheduled downtimes that occur on the main PACS archive. A backup archive server with hierarchical storage was implemented offsite from the main PACS archive location. Clinical data from a hospital PACS are sent to this ASP storage server in parallel to the exams being archived in the main server. Initially, connectivity between the main archive and the ASP storage server is established via a T-1 connection. In the future, other more cost-effective means of connectivity, such as Internet 2, will be researched. We have integrated the ASP model backup archive with a clinical PACS at Saint John's Health Center, and it has been operational for over 6 months. Pitfalls encountered during integration with a live clinical PACS and the impact on clinical workflow will be discussed. In addition, estimates of the cost of establishing such a solution, as well as the cost charged to the users, will be included. Clinical downtime scenarios, such as a scheduled mandatory downtime and an unscheduled downtime due to a disaster event at the main archive, were simulated, and the PACS exams were sent successfully from the offsite ASP storage server back to the hospital PACS in less than 1 day. The ASP backup archive was able to recover PACS image data for comparison studies with no complex operational procedures. 
Furthermore, no image data loss was encountered during the recovery. During any clinical downtime scenario, the ASP backup archive server can repopulate a clinical PACS quickly with the majority of studies available for comparison during the interim until the main PACS archive is fully recovered.

  17. Baseline coastal oblique aerial photographs collected from Pensacola, Florida, to Breton Islands, Louisiana, February 7, 2012

    USGS Publications Warehouse

    Morgan, Karen L.M.; Krohn, M. Dennis; Doran, Kara; Guy, Kristy K.

    2013-01-01

    The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On February 7, 2012, the USGS conducted an oblique aerial photographic survey from Pensacola, Fla., to Breton Islands, La., aboard a Piper Navajo Chieftain at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The photographs provided here are Joint Photographic Experts Group (JPEG) images. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of the feature in the images (see the Navigation Data page). These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. The header of each photo is populated with time of collection, Global Positioning System (GPS) position (latitude and longitude), keywords, credit, artist (photographer), caption, copyright, and contact information using EXIFtools (Subino and others, 2012). Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the assigned location, name, date, and time the photograph was taken, along with links to the photograph. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on a marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files (see the Photos and Maps page).
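    The KML files mentioned above pair each photo's aircraft position with a clickable thumbnail. A minimal sketch of generating one such placemark; the element layout, names, and paths are illustrative, not the USGS files' exact schema:

```python
from xml.sax.saxutils import escape

def photo_placemark(name, lat, lon, thumb_url):
    """Build one KML Placemark whose balloon shows the photo thumbnail.
    KML coordinates are ordered longitude,latitude,altitude."""
    return f"""<Placemark>
  <name>{escape(name)}</name>
  <description><![CDATA[<img src="{thumb_url}">]]></description>
  <Point><coordinates>{lon},{lat},0</coordinates></Point>
</Placemark>"""

# Illustrative photo name, position, and thumbnail path.
kml = photo_placemark("IMG_0001.jpg", 30.33, -87.14, "thumbs/IMG_0001.jpg")
print(kml)
```

A full file would wrap many such placemarks in `<kml><Document>…</Document></kml>`; Google Earth then renders one marker per photograph.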

  18. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2011-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data are accessible only from a few repositories, and users have to deal with data sets that are effectively immobile and impractical to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization application that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.

  19. Modeling And Simulation Of Multimedia Communication Networks

    NASA Astrophysics Data System (ADS)

    Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.

    1989-05-01

    In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.
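    The elapsed-time parameter studied above is dominated by transmission time, which for pyramid browsing shrinks with the resolution level requested. A back-of-the-envelope sketch; the image size and DS3-class bit rate are chosen for illustration, not taken from the simulation:

```python
def transfer_time_s(image_bytes, bit_rate_bps, overhead_s=0.0):
    """Elapsed transmission time for one image over the network link."""
    return image_bytes * 8 / bit_rate_bps + overhead_s

# A 2048x2048 radiograph stored at 2 bytes/pixel ...
full = 2048 * 2048 * 2
# ... versus a quarter-resolution pyramid level for browsing (1/16 the pixels).
browse = full // 16
rate = 45_000_000  # ~45 Mbit/s, a DS3-class MAN bit rate (illustrative)
print(round(transfer_time_s(full, rate), 3))    # ~1.491 s
print(round(transfer_time_s(browse, rate), 3))  # ~0.093 s
```

This order-of-magnitude gap between full images and coarse pyramid levels is what makes interactive browsing over a shared MAN plausible.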

  20. Teaching Resources

    Science.gov Websites


  1. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction with motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor, with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater on the Phi than on the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card versus a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. 
A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.
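    The parallelization pattern described above (partition the event list across threads, project each chunk, then merge the partial images) can be sketched in Python. This is a toy model: ray projection is replaced by a trivial per-event accumulation, and real speedups of course require native threads, as in the paper's pthreads/OpenMP versions:

```python
from concurrent.futures import ThreadPoolExecutor

def project_chunk(events, n_voxels):
    """Accumulate one chunk of list-mode events into a partial image.
    Each event is (voxel_index, weight); real code would project a ray."""
    partial = [0.0] * n_voxels
    for voxel, weight in events:
        partial[voxel] += weight
    return partial

def reconstruct(events, n_voxels, n_workers=4):
    """Split the event list across workers and sum the partial images."""
    chunks = [events[i::n_workers] for i in range(n_workers)]
    image = [0.0] * n_voxels
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for partial in pool.map(lambda c: project_chunk(c, n_voxels), chunks):
            for i, v in enumerate(partial):
                image[i] += v
    return image

# 800 synthetic events spread evenly over 8 voxels.
events = [(i % 8, 1.0) for i in range(800)]
print(reconstruct(events, 8))  # [100.0, 100.0, ..., 100.0]
```

Giving each worker a private partial image and merging at the end avoids contended writes, which is the same design reason the OpenMP version outperformed the initial port.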

  2. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an ophthalmologist making a rational pathological diagnosis in patients with optic diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volume from JPEG-format image files that have been converted from medical images acquired with an anterior-chamber optical coherence tomographer (AC-OCT) and its image-processing software. The algorithms for image analysis and ACV calculation were implemented in VC++, and a series of anterior chamber images of typical patients were analyzed; the calculated anterior chamber volumes were verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible, and that it has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual preprocessing of the images.
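    One common way to compute a volume from tomographic slices, consistent in spirit with the approach described, is the slab method: sum each slice's segmented cross-sectional area times the slice spacing. A toy sketch with made-up binary masks and dimensions:

```python
def chamber_volume_mm3(slices, pixel_area_mm2, slice_spacing_mm):
    """Approximate a volume by the slab method: each slice is a binary
    mask (1 = chamber pixel); its area times the inter-slice spacing
    contributes one slab of volume."""
    total = 0.0
    for mask in slices:
        area = sum(row.count(1) for row in mask) * pixel_area_mm2
        total += area * slice_spacing_mm
    return total

# Two toy 3x3 masks, 0.1 mm^2 per pixel, 0.5 mm between slices.
s1 = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]  # 5 chamber pixels
s2 = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # 1 chamber pixel
print(round(chamber_volume_mm3([s1, s2], 0.1, 0.5), 3))  # 0.3
```

The hard part in practice is producing the masks (segmenting the chamber boundary in each AC-OCT image); once segmented, the volume itself is this simple sum.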

  3. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution which enables interactive large medical image data processing and visualization over the web platform. To improve the efficiency of our solution, we adopt GPU accelerated techniques to process images on the server side while rapidly transferring images to the HTML5 supported web browser on the client side. Compared to traditional local visualization solution, our solution doesn't require the users to install extra software or download the whole volume dataset from PACS server. By designing this web-based solution, it is feasible for users to access the 3D medical image visualization service wherever the internet is available.

  4. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
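    The telephone-system blocking model referred to above is the Erlang-B formula, which makes the economies of scale easy to quantify: one pooled server carrying the combined load blocks fewer requests than the same capacity split into partitions. The stream counts and loads below are illustrative, not the paper's:

```python
def erlang_b(load_erlangs, channels):
    """Erlang-B blocking probability via the stable recurrence
    B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = load_erlangs * b / (n + load_erlangs * b)
    return b

# Two partitioned servers of 100 streams each, at 80 erlangs apiece ...
partitioned = erlang_b(80, 100)
# ... versus one monolithic server of 200 streams carrying the combined load.
pooled = erlang_b(160, 200)
print(partitioned > pooled)  # True: the larger server image blocks less
```

This is the sense in which partitioning "costs" capacity: to reach the pooled server's blocking probability, each partition would need extra contingency streams.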

  5. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    …noise sources can create errors in digital circuits. These effects can be simulated using Simulation Program with Integrated Circuit Emphasis (SPICE) or … compute summary statistics. … Noisy analog circuits can be simulated in SPICE or Cadence Spectre software via noisy voltage sources…

  6. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
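    The energy-compaction idea can be illustrated with one level of the Haar wavelet, the simplest member of the family: smooth signals concentrate their energy in the pairwise averages, leaving near-zero detail coefficients that an embedded coder transmits last, or drops at low bit rates. A toy sketch:

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar wavelet transform:
    pairwise averages (low band) and pairwise half-differences (high band)."""
    evens, odds = signal[0::2], signal[1::2]
    avgs = [(a + b) / 2 for a, b in zip(evens, odds)]
    diffs = [(a - b) / 2 for a, b in zip(evens, odds)]
    return avgs, diffs

avgs, diffs = haar_step([4, 2, 5, 5, 7, 7, 6, 8])
print(avgs)   # [3.0, 5.0, 7.0, 7.0]
print(diffs)  # [1.0, 0.0, 0.0, -1.0]
```

A wavelet *packet* transform, as in the paper, recursively re-splits whichever band (low or high) still carries energy, which suits oscillatory textures better than the standard low-band-only recursion.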

  7. Resource Allocation in Dynamic Environments

    DTIC Science & Technology

    2012-10-01

    …A Facial-Recognition Server (FRS) can receive images from smartphones the squads use, compare them to a local database, and then return the … fallback. In addition, each squad has the ability to capture images with a smartphone and send them to a Facial-Recognition Server in the TOC to…

  8. Adapting the ISO 20462 softcopy ruler method for online image quality studies

    NASA Astrophysics Data System (ADS)

    Burns, Peter D.; Phillips, Jonathan B.; Williams, Don

    2013-01-01

    In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics are not able to measure distortions with the same performance across their full range and across different image contents; the crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first is based on grouping the images according to their spatial complexity; the second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlation between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.
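    The Pearson correlation coefficient used for evaluation is straightforward to compute; the metric scores and mean-opinion scores below are made up for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between objective metric scores
    and subjective (psycho-visual) quality scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

metric = [0.9, 0.7, 0.5, 0.3]  # hypothetical no-reference metric outputs
mos    = [4.5, 3.6, 2.4, 1.5]  # hypothetical mean opinion scores
print(round(pearson(metric, mos), 3))  # ≈ 0.998
```

Grouping images by spatial complexity, as the paper proposes, amounts to computing this coefficient within each group rather than over the pooled data.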

  9. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and other three contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts of TIFF format images were compared with the other three groups. Overall, differences in the count of the images increased with the percentage of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997

  10. Teleconsultation in diagnostic pathology: experience from Iran and Germany with the use of two European telepathology servers.

    PubMed

    Mireskandari, Masoud; Kayser, Gian; Hufnagl, Peter; Schrader, Thomas; Kayser, Klaus

    2004-01-01

    Eighty pathology cases were sent independently to each of two telepathology servers. Cases were submitted from the Department of Pathology at the University of Kerman in Iran (40 cases) and from the Institute of Pathology in Berlin, Germany (40 cases). The telepathology servers were located in Berlin (the UICC server) and Basel in Switzerland (the iPATH server). A scoring system was developed to quantify the differences between the diagnoses of the referring pathologist and the remote expert. Preparation of the cases, as well as the submission of images, took considerably longer from Kerman than from Berlin; this was independent of the server system. The Kerman delay was mainly associated with a slower transmission rate and longer image preparation. The diagnostic gap between referrers' and experts' diagnoses was greater with the iPATH system, but not significantly so. The experts' response time was considerably shorter for the iPATH system. The results showed that telepathology is feasible for requesting pathologists working in a developing country or in an industrialized country. The key factor in the quality of the service is the work of the experts: they should be selected according to their diagnostic expertise, and their commitment to the provision of telepathology services is critical.

  11. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R&D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R&D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that benefits both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be considered for inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  12. Sharpest Ever VLT Images at NAOS-CONICA "First Light"

    NASA Astrophysics Data System (ADS)

    2001-12-01

    Very Promising Start-Up of New Adaptive Optics Instrument at Paranal Summary A team of astronomers and engineers from French and German research institutes and ESO at the Paranal Observatory is celebrating the successful accomplishment of "First Light" for the NAOS-CONICA Adaptive Optics facility . With this event, another important milestone for the Very Large Telescope (VLT) project has been passed. Normally, the achievable image sharpness of a ground-based telescope is limited by the effect of atmospheric turbulence. However, with the Adaptive Optics (AO) technique, this drawback can be overcome and the telescope produces images that are at the theoretical limit, i.e., as sharp as if it were in space . Adaptive Optics works by means of a computer-controlled, flexible mirror that counteracts the image distortion induced by atmospheric turbulence in real time. The larger the main mirror of the telescope is, and the shorter the wavelength of the observed light, the sharper will be the images recorded. During a preceding four-week period of hard and concentrated work, the expert team assembled and installed this major astronomical instrument at the 8.2-m VLT YEPUN Unit Telescope (UT4). On November 25, 2001, following careful adjustments of this complex apparatus, a steady stream of photons from a southern star bounced off the computer-controlled deformable mirror inside NAOS and proceeded to form in CONICA the sharpest image produced so far by one of the VLT telescopes. With a core angular diameter of only 0.07 arcsec, this image is near the theoretical limit possible for a telescope of this size and at the infrared wavelength used for this demonstration (the K-band at 2.2 µm). Subsequent tests reached the spectacular performance of 0.04 arcsec in the J-band (wavelength 1.2 µm). "I am proud of this impressive achievement", says ESO Director General Catherine Cesarsky. 
"It shows the true potential of European science and technology and it provides a fine demonstration of the value of international collaboration. ESO and its partner institutes and companies in France and Germany have worked a long time towards this goal - with the first, extremely promising results, we shall soon be able to offer a new and fully tuned instrument to our wide research community." The NAOS adaptive optics corrector was built, under an ESO contract, by Office National d'Etudes et de Recherches Aérospatiales (ONERA) , Laboratoire d'Astrophysique de Grenoble (LAOG) and the DESPA and DASGAL laboratories of the Observatoire de Paris in France, in collaboration with ESO. The CONICA infra-red camera was built, under an ESO contract, by the Max-Planck-Institut für Astronomie (MPIA) (Heidelberg) and the Max-Planck Institut für Extraterrestrische Physik (MPE) (Garching) in Germany, in collaboration with ESO. The present event happens less than four weeks after "First Fringes" were achieved for the VLT Interferometer (VLTI) with two of the 8.2-m Unit Telescopes. No wonder that a spirit of great enthusiasm reigns at Paranal! Information for the media: ESO is producing a Video News Release ( ESO Video News Reel No. 13 ) with sequences from the NAOS-CONICA "First Light" event at Paranal, a computer animation illustrating the principle of adaptive optics in NAOS-CONICA, as well as the first astronomical images obtained. In addition to the usual distribution, this VNR will also be transmitted via satellite Friday 7 December 2001 from 09:00 to 09:15 CET (10:00 to 10:15 UT) on "Europe by Satellite" . These video images may be used free of charge by broadcasters. Satellite details, the script and the shotlist will be on-line from 6 December on the ESA TV Service Website http://television.esa.int. Also a pre-view Real Video Stream of the video news release will be available as of that date from this URL. 
    Video Clip 07/01: Various video scenes related to the NAOS-CONICA "First Light" event (ESO Video News Reel No. 13). PR Photo 33a/01: NAOS-CONICA "First Light" image of an 8-mag star. PR Photo 33b/01: The moment of "First Light" at the YEPUN control consoles. PR Photo 33c/01: Image of the NGC 3603 area (K-band, NAOS-CONICA). PR Photo 33d/01: Image of a wider NGC 3603 field (ISAAC). PR Photo 33e/01: I-band HST-WFPC2 image of the NGC 3603 field. PR Photo 33f/01: Animated GIF with NAOS-CONICA (K-band) and HST-WFPC2 (I-band) images of the NGC 3603 area. PR Photo 33g/01: Image of the Becklin-Neugebauer Object. PR Photo 33h/01: Image of a very close double star. PR Photo 33i/01: Image of a 17-magnitude reference star. PR Photo 33j/01: Image of the central area of the 30 Dor star cluster. PR Photo 33k/01: The top of Paranal Mountain (November 25, 2001). PR Photo 33l/01: The NAOS-CONICA instrument attached to VLT YEPUN. ESO PR Video Clip 07/01 "First Light for NAOS-CONICA" (25 November 2001; 2:34 min) provides background scenes and images around the NAOS-CONICA "First Light" event on November 25, 2001 (extracted from ESO Video News Reel No. 13). Contents: NGC 3603 image from ISAAC and a smaller field as observed by NAOS-CONICA; the Paranal platform in the afternoon, before the event; YEPUN and NAOS-CONICA with cryostat sounds; tension rising in the VLT Control Room; the wavefront sensor display; the "loop is closed"; happy team members; the first corrected image on the screen; images of NGC 3603 by HST and VLT; the 30 Doradus central cluster; the BN Object in Orion; a statement by the Head of the ESO Instrument Division. 
    ESO PR Photo 33a/01 shows the first image in the infrared K-band (wavelength 2.2 µm) of a star (visual magnitude 8), before (left) and after (right) the adaptive optics was switched on (see the text). The middle panel displays the 3-D intensity profiles of these images, demonstrating the tremendous gain in both image sharpness and central intensity. ESO PR Photo 33b/01 shows some of the NAOS-CONICA team members in the VLT Control Room at the moment of "First Light" in the night between November 25-26, 2001. From left to right: Thierry Fusco (ONERA), Clemens Storz (MPIA), Robin Arsenault (ESO), Gerard Rousset (ONERA). The numerous boxes with the many NAOS and CONICA parts arrived at the ESO Paranal Observatory on October 24, 2001. Astronomers and engineers from ESO and the participating institutes and organisations then began the painstaking assembly of these very complex instruments on one of the Nasmyth platforms of the fourth VLT 8.2-m Unit Telescope, YEPUN. Then followed days of technical tests and adjustments, working around the clock. In the afternoon of Sunday, November 25, the team finally declared the instrument fit to attempt its "First Light" observation. The YEPUN dome was opened at sunset and a small, rather apprehensive group gathered in the VLT Control Room, peering intensively at the computer screens over the shoulders of their colleagues, the telescope and instrument operators. Time passed imperceptibly to those present, as the basic calibrations required at this early stage to bring NAOS-CONICA to full operational state were successfully completed. 
Everybody sensed the special moment approaching when, finally, the telescope operator pushed a button and the giant telescope started to turn smoothly towards the first test object, an otherwise undistinguished star in our Milky Way. Its non-corrected infra-red image was recorded by the CONICA detector array and soon appeared on the computer screen. It was already very good by astronomical standards, with a diameter of only 0.50 arcsec (FWHM), cf. PR Photo 33a/01 (left) . Then, by another command, the instrument operator switched on the NAOS adaptive optics system , thereby "closing the loop" for the first time on a sky field, by using that ordinary star as a reference light source to measure the atmospheric turbulence. Obediently, the deformable mirror in NAOS began to follow the "orders" that were issued 500 times per second by its powerful control computer.... As if by magic, that stellar image on the computer screen pulled itself together....! What seconds before had been a jumping, rather blurry patch of light suddenly became a rock-steady, razor-sharp and brilliant spot of light. The entire room burst into applause - there were happy faces and smiles all over, and then the operator announced the measured image diameter - a truly impressive 0.068 arcsec, already at this first try, cf. PR Photo 33a/01 (right) ! All the team members who were lucky to be there sent a special thought to those many others who had also put in over four years' hard and dedicated work to make this event a reality. The time of this historical moment was November 25, 2001, 23:00 Chilean time (November 26, 2001, 02:00 am UT) . During this and the following nights, more images were made of astronomical objects, opening a new chapter of the long tradition of Adaptive Optics at ESO. More information about the NAOS-CONICA international collaboration , technical details about this instrument and its special advantages are available below.
The first images The star-forming region around NGC 3603 ESO PR Photo 33c/01 ESO PR Photo 33c/01 [Preview - JPEG: 326 x 400 pix - 200k] [Normal - JPEG: 651 x 800 pix - 480k] ESO PR Photo 33d/01 ESO PR Photo 33d/01 [Preview - JPEG: 348 x 400 pix - 240k] [Normal - JPEG: 695 x 800 pix - 592k] Caption : PR Photo 33c/01 displays a NAOS-CONICA image of the starburst cluster NGC 3603, obtained during the second night of NAOS-CONICA operation. The sky region shown is some 20 arcsec to the North of the centre of the cluster. NAOS was compensating atmospheric disturbances by analyzing light from the central star with its visual wavefront sensor, while CONICA was observing in the K-band. The image is nearly diffraction-limited and has a Full-Width-Half-Maximum (FWHM) diameter of 0.07 arcsec, with a central Strehl ratio of 56% (a measure of the degree of concentration of the light). The exposure lasted 300 seconds. North is up and East is left. The field measures 27 x 27 arcsec. On PR Photo 33d/01 , the sky area shown in this NAOS-CONICA high-resolution image is indicated on an earlier image of a much larger area, obtained in 1999 with the ISAAC multi-mode instrument on VLT ANTU ( ESO PR 16/99 ) Among the first images to be obtained of astronomical objects was one of the stellar cluster NGC 3603 that is located in the Carina spiral arm in the Milky Way at a distance of about 20,000 light-years, cf. PR Photo 33c/01 . With its central starburst cluster, it is one of the densest and most massive star forming regions in our Galaxy. Some of the most massive stars - with masses up to 120 times the mass of our Sun - can be found in this cluster. For a long time astronomers have suspected that the formation of low-mass stars is suppressed by the presence of high-mass stars, but two years ago, stars with masses as low as 10% of the mass of our Sun were detected in NGC 3603 with the ISAAC multi-mode instrument at VLT ANTU, cf. PR Photo 33d/01 and ESO PR 16/99. 
The high stellar density in this region, however, prevented the search for objects with still lower masses, so-called Brown Dwarfs. The new, high-resolution K-band images like PR Photo 33c/01 , obtained with NAOS-CONICA at YEPUN, now for the first time facilitate the study of the elusive class of brown dwarfs in such a starburst environment. This will, among other things, offer very valuable insight into the fundamental problem of the total amount of matter that is deposited into stars in star-forming regions. An illustration of the potential of Adaptive Optics ESO PR Photo 33e/01 ESO PR Photo 33e/01 [Preview - JPEG: 376 x 400 pix - 128k] [Normal - JPEG: 752 x 800 pix - 336k] ESO PR Photo 33f/01 ESO PR Photo 33f/01 [Animated GIF: 400 x 425 pix - 71k] Caption : PR Photo 33e/01 was obtained with the WFPC2 camera on the Hubble Space Telescope (HST) in the I-band (800nm). It is a 400-sec exposure and shows the same sky region as in the NAOS-CONICA image shown in PR Photo 33c/01. PR Photo 33f/01 provides a direct comparison of the two images (animated GIF). The HST image was extracted from archival data. HST is operated by NASA and ESA. Normally, the achievable image sharpness of a ground-based telescope is limited by the effect of atmospheric turbulence . However, the Adaptive Optics (AO) technique overcomes this problem and when the AO instrument is optimized, the telescope produces images that are at the theoretical limit, i.e., as sharp as if it were in space . The theoretical image diameter is inversely proportional to the diameter of the main mirror of the telescope and proportional to the wavelength of the observed light. Thus, the larger the telescope and the shorter the wavelength, the sharper the recorded images. To illustrate this, a comparison of the NAOS-CONICA image of NGC 3603 ( PR Photo 33c/01 ) is here made with a near-infrared image obtained earlier by the Hubble Space Telescope (HST) covering the same sky area ( PR Photo 33e/01 ).
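The scaling just described follows from the Rayleigh diffraction criterion, θ ≈ 1.22 λ/D. A quick illustrative check against the measured FWHM values quoted in this comparison (roughly 0.085 arcsec for HST and 0.068 arcsec for the VLT, which sit close to these theoretical limits):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # ~206265 arcsec per radian

def diffraction_limit_arcsec(wavelength_m, mirror_diameter_m):
    """Rayleigh criterion: theta ~ 1.22 * lambda / D, converted to arcsec."""
    return 1.22 * wavelength_m / mirror_diameter_m * ARCSEC_PER_RAD

# HST WFPC2 in the I-band vs. VLT YEPUN with NAOS-CONICA in the K-band
print(round(diffraction_limit_arcsec(0.8e-6, 2.4), 3))   # ~0.084 (HST, 0.8 um, 2.4 m)
print(round(diffraction_limit_arcsec(2.2e-6, 8.2), 3))   # ~0.068 (VLT, 2.2 um, 8.2 m)
```

The 3.4x larger VLT mirror is thus almost exactly offset by its 2.75x longer observing wavelength, which is why the two measured diameters come out so similar.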
Both images are close to the theoretical limit ("diffraction limited"). However, the diameter of the VLT YEPUN mirror (8.2-m) is somewhat more than three times that of HST (2.4-m). This is "compensated" by the fact that the wavelength of the NAOS-CONICA image (2.2 µm) is about two-and-a-half times longer than that of the HST image (0.8 µm). The measured image diameters are therefore not too different, approx. 0.085 arcsec (HST) vs. approx. 0.068 arcsec (VLT). Although the exposure times are similar (300 sec for the VLT image; 400 sec for the HST image), the VLT image shows considerably fainter objects. This is partly due to the larger mirror, partly because by observing at a longer wavelength, NAOS-CONICA can detect a host of cool low-mass stars. The Becklin-Neugebauer object and its associated nebulosity ESO PR Photo 33g/01 ESO PR Photo 33g/01 [Preview - JPEG: 299 x 400 pix - 128k] [Normal - JPEG: 597 x 800 pix - 272k] Caption : PR Photo 33g/01 is a composite (false-) colour image obtained by NAOS-CONICA of the region around the Becklin-Neugebauer object that is deeply embedded in the Orion Nebula. It is based on two exposures, one in the light of the shock-excited molecular hydrogen line (H₂; wavelength 2.12 µm; here rendered as blue) and one in the broader K-band (2.2 µm; red) from ionized hydrogen. A third (green) image was produced as an "average" of the H₂ and K-band images. The field-of-view measures 20 x 25 arcsec², cf. the 1 x 1 arcsec² square. North is up and east to the left. PR Photo 33g/01 is a composite image of the region around the Becklin-Neugebauer object (generally referred to as "BN" ). With its associated Kleinmann-Low nebula, it is located in the Orion star forming region at a distance of approx. 1500 light-years. It is the nearest high-mass star-forming complex. The immediate vicinity of BN (the brightest star in the image) is highly dynamic with outflows and cloudlets glowing in the light of shock-excited molecular hydrogen.
While many masers and outflows have been detected, the identification of their driving sources is still lacking. Deep images in the infrared K and H bands, as well as in the light of molecular hydrogen emission, were obtained with NAOS-CONICA at VLT YEPUN during the current tests. The new images facilitate the detection of fainter and smaller structures in the cloud than ever before. More details on the embedded star cluster are revealed as well. These observations were only made possible by the infrared wavefront sensor of NAOS. The latter is a unique capability of NAOS and makes it possible to perform adaptive optics corrections on highly embedded infrared sources, which are practically invisible at optical wavelengths. Exploring the limits ESO PR Photo 33h/01 ESO PR Photo 33h/01 [Preview - JPEG: 400 x 260 pix - 44k] [Normal - JPEG: 800 x 520 pix - 112k] Caption : PR Photo 33h/01 shows a NAOS-CONICA image of the double star GJ 263 for which the angular distance between the two components is only 0.030 arcsec . The raw image, as directly recorded by CONICA, is shown in the middle, with a computer-processed (using the ONERA MISTRAL myopic deconvolution algorithm) version to the right. The recorded Point-Spread-Function (PSF) is shown to the left. For this, the C50S camera (0.01325 arcsec/pixel) was used, with an FeII filter at the near-infrared wavelength 1.257 µm. The exposure time was 10 seconds. ESO PR Photo 33i/01 ESO PR Photo 33i/01 [Preview - JPEG: 400 x 316 pix - 82k] [Normal - JPEG: 800 x 631 pix - 208k] Caption : PR Photo 33i/01 shows the near-diffraction-limited image of a 17-mag reference star , as recorded with NAOS-CONICA during a 200-second exposure in the K-band under 0.60 arcsec seeing. The 3D-profile is also shown.
ESO PR Photo 33j/01 ESO PR Photo 33j/01 [Preview - JPEG: 342 x 400 pix - 83k] [Normal - JPEG: 684 x 800 pix - 200k] Caption : PR Photo 33j/01 shows the central cluster in the 30 Doradus HII region in the Large Magellanic Cloud (LMC), a satellite of our Milky Way Galaxy. It was obtained by NAOS-CONICA in the infrared K-band during a 600-second exposure. The field shown here measures 15 x 15 arcsec². PR Photos 33h-j/01 provide three examples of images obtained during specific tests where the observers pushed NAOS-CONICA towards the limits to explore the potential of the new instrument. Although, as expected, these images are not "perfect", they bear clear witness to the impressive performance, already at this early stage of the commissioning programme. The first, PR Photo 33h/01 , shows how diffraction-limited imaging with NAOS-CONICA at a wavelength of 1.257 µm makes it possible to view the individual components of a close double star, here the binary star GJ 263 for which the angular distance between the two stars is only 0.030 arcsec (i.e., the angle subtended by a 1 Euro coin at a distance of 160 km). Spatially resolved observations of binary stars like this one will allow the determination of orbital parameters, and ultimately of the masses of the individual binary star components. After a few days of optimisation and calibration, NAOS-CONICA was able to "close the loop" on a reference star as faint as visual magnitude 17 and to provide a fine diffraction-limited K-band image with Strehl ratio 19% under 0.6 arcsec seeing. PR Photo 33i/01 provides a view of this image, as seen in the recorded frame and as a 3D-profile. The exposure time was 200 seconds. The ability to use reference stars as faint as this is an enormous asset for NAOS-CONICA - it will be the first to offer this capability to non-specialist users with an instrument on an 8-10 m class telescope .
This gives access to many sky fields with significant AO corrections already now, without having to wait for the artificial laser guide star being constructed for the VLT (see below). 30 Doradus in the Large Magellanic Cloud (LMC - a satellite of our Galaxy) is the most luminous, giant HII region in the Local Group of Galaxies. It is powered by a massive star cluster with more than 100 ultra-luminous stars (of the "Wolf-Rayet"-type and O-stars). The NAOS-CONICA K-band image ( PR Photo 33j/01 ) resolves the dense stellar core of high-mass stars at the centre of the cluster, revealing thousands of lower mass cluster members. Due to the lack of a sufficiently bright, isolated and single reference star in this sky field, the observers used instead the bright central star complex (R136a) to generate the corrective signals to the flexible mirror, needed to compensate for the atmospheric turbulence. However, R136a is not a round object; it is strongly elongated in the "5 hour"-direction. As a result, all star images seen in this photo are slightly elongated in the same direction as R136a. Nevertheless, this is a small penalty to pay for the large improvement obtained over a direct (seeing-limited) image! Adaptive Optics at ESO - a long tradition ESO PR Photo 33k/01 ESO PR Photo 33k/01 [Preview - JPEG: 400 x 320 pix - 144k] [Normal - JPEG: 800 x 639 pix - 344k] [Hi-Res - JPEG: 3000 x 2398 pix - 3.0M] ESO PR Photo 33l/01 ESO PR Photo 33l/01 [Preview - JPEG: 400 x 367 pix - 47k] [Normal - JPEG: 800 x 734 pix - 592k] [Hi-Res - JPEG: 3000 x 2754 pix - 3.9M] Caption : PR Photo 33k/01 is a view of the upper platform at the ESO Paranal Observatory with the four enclosures for the VLT 8.2-m Unit Telescopes and the partly subterranean Interferometric Laboratory (at centre). YEPUN (UT4) is housed in the enclosure to the right.
This photo was obtained in the evening of November 25, 2001, some hours before "First Light" was achieved for the new NAOS-CONICA instrument, mounted at that telescope. PR Photo 33l/01 NAOS-CONICA installed on the Nasmyth B platform of the 8.2-m VLT YEPUN Unit Telescope. From left to right: the telescope adapter/rotator (dark blue), NAOS (light blue) and the CONICA cryostat (red). The control electronics is housed in the white cabinet. "Adaptive Optics" is a modern buzzword of astronomy. It embodies the seemingly magic way by which ground-based telescopes can overcome the undesirable blurring effect of atmospheric turbulence that has plagued astronomers for centuries. With "Adaptive Optics", the images of stars and galaxies captured by these instruments are now as sharp as theoretically possible. Or, as the experts like to say, "it is as if a giant ground-based telescope is 'lifted' into space by a magic hand!" . Adaptive Optics works by means of a computer-controlled, flexible mirror that counteracts the image distortion induced by atmospheric turbulence in real time. The concept is not new. Already in 1989, the first Adaptive Optics system ever built for Astronomy (aptly named "COME-ON" ) was installed on the 3.6-m telescope at the ESO La Silla Observatory, as the early fruit of a highly successful continuing collaboration between ESO and French research institutes (ONERA and Observatoire de Paris). Ten years ago, ESO initiated an Adaptive Optics program , to serve the needs for its frontline VLT project. In 1993, the Adaptive Optics facility (ADONIS) was offered to Europe's astronomers, as the first instrument of its kind, available for non-specialists. It is still in operation and continues to produce frontline results, cf. ESO PR 22/01. In 1997, ESO launched a collaborative effort with a French Consortium ( see below) for the development of the NAOS Nasmyth Adaptive Optics System . 
With its associated CONICA IR high angular resolution camera , developed with a German Consortium ( see below), it provides a full high angular resolution capability on the VLT at Paranal. With the successful "First Light" on November 25, 2001, this project is now about to enter into the operational phase. The advantages of NAOS-CONICA NAOS-CONICA belongs to a new generation of sophisticated adaptive optics (AO) devices. They have certain advantages over past systems. In particular, NAOS is unique in being equipped with an infrared-sensitive Wavefront Sensor (WFS) that makes it possible to look inside regions that are highly obscured by interstellar dust and therefore unobservable in visible light. With its other WFS for visible light , NAOS should be able to achieve the highest degree of light concentration (the so-called "Strehl ratio") obtained at any existing 8-m class telescope. It also provides partially corrected images, using reference stars (see PR Photo 33i/01 ) as faint as visual magnitude 18, fainter than demonstrated so far by any other AO system on such a large telescope. A major advantage of CONICA is that it offers the large format and very high image quality required to fully match NAOS' performance , as well as a variety of observing modes. Moreover, NAOS-CONICA is the first astronomical AO instrument to be offered with a full end-to-end observing capability. It is completely integrated into the VLT dataflow system , with a seamless process from the preparation of the observations, including optimization of the instrument, to their execution at the telescope and on to automatic data quality assessment and storage in the VLT Archive. Collaboration and Institutes The Nasmyth Adaptive Optics System (NAOS) has been developed, with the support of INSU-CNRS, by a French Consortium in collaboration with ESO.
The French consortium consists of Office National d'Etudes et de Recherches Aérospatiales (ONERA) , Laboratoire d'Astrophysique de Grenoble (LAOG) and Observatoire de Paris (DESPA and DASGAL). The Project Manager is Gérard Rousset (ONERA), the Instrument Responsible is François Lacombe (Observatoire de Paris) and the Project Scientist is Anne-Marie Lagrange (Laboratoire d'Astrophysique de Grenoble). The CONICA Near-Infrared CAmera has been developed by a German Consortium, with an extensive ESO collaboration. The Consortium consists of Max-Planck-Institut für Astronomie (MPIA) (Heidelberg) and the Max-Planck-Institut für Extraterrestrische Physik (MPE) (Garching). The Principal Investigator (PI) is Rainer Lenzen (MPIA), with Reiner Hofmann (MPE) as Co-Investigator. Contacts Norbert Hubin European Southern Observatory Garching, Germany Tel.: +4989-3200-6517 email: nhubin@eso.org Alan Moorwood European Southern Observatory Garching, Germany Tel.: +4989-3200-6294 email: amoorwoo@eso.org Appendix: Technical Information about NAOS and CONICA Once fully tested, NAOS-CONICA will provide adaptive optics assisted imaging, polarimetry and spectroscopy in the 1 - 5 µm waveband. NAOS is an adaptive optics system equipped with both visible and infrared, Shack-Hartmann type, wavefront sensors. Provided a reference source (e.g., a star) with visual magnitude V brighter than 18 or K-magnitude brighter than 13 mag is available within 60 arcsec of the science target, NAOS-CONICA will ultimately offer diffraction limited resolution at the level of 0.030 arcsec at a wavelength of 1 µm, albeit with a large halo around the image core for the faint end of the reference source brightness. This may be compared with VLT median seeing images of 0.65 arcsec at a wavelength of 1 µm and exceptionally good images around 0.30 arcsec. NAOS-CONICA is installed at Nasmyth Focus B at VLT YEPUN (UT4). In about two years' time, this instrument will benefit from a sodium Laser Guide Star (LGS) facility. 
The creation of an artificial guide star is then possible in any sky field of interest, thereby providing much better sky coverage than is possible with natural guide stars only. NAOS is equipped with two wavefront sensors, one in the visible part of the spectrum (0.45 - 0.95 µm) and one in the infrared part (1 - 2.5 µm); both are based on the Shack-Hartmann principle. The maximum correction frequency is about 500 Hz. There are 185 deformable mirror actuators plus a tip-tilt mirror correction. Together, they should make it possible to obtain a high Strehl ratio in the K-band (2.2 µm), up to 70%, depending on the actual seeing and waveband. Both the visible and IR wavefront sensors (WFS) have been optimized to provide AO correction for faint objects/stars. The visible WFS provides a low-order correction for objects as faint as visual magnitude ~ 18. The IR WFS will provide a low-order correction for objects as faint as K-magnitude 13. CONICA is a high-performance instrument in terms of image quality and detector sensitivity. It has been designed to make optimal use of the AO system. Inherent mechanical flexures are corrected on-line by NAOS through a pointing model. It offers a variety of modes, e.g., direct imaging, polarimetry, slit spectroscopy, coronagraphy and spectro-imaging. The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 06/01 about observations of a binary star (8 October 2001). Information is also available on the web about other ESO videos.
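Both NAOS wavefront sensors are of the Shack-Hartmann type: a lenslet array divides the telescope pupil into subapertures, and the displacement of each lenslet's focal spot from its nominal position measures the mean local wavefront tilt. A minimal numpy sketch of that measurement step (the function name and layout are illustrative assumptions, not NAOS software):

```python
import numpy as np

def subaperture_slopes(frame, n_sub):
    """Estimate local wavefront slopes from a Shack-Hartmann frame:
    the centroid displacement of each lenslet spot from its subaperture
    centre is proportional to the mean wavefront tilt over that lenslet."""
    h, w = frame.shape
    sh, sw = h // n_sub, w // n_sub          # subaperture size in pixels
    ys, xs = np.mgrid[0:sh, 0:sw]            # pixel coordinate grids
    slopes = np.zeros((n_sub, n_sub, 2))
    for i in range(n_sub):
        for j in range(n_sub):
            sub = frame[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw].astype(float)
            total = sub.sum()
            if total == 0:
                continue                      # no light in this subaperture
            cy = (ys * sub).sum() / total     # intensity-weighted centroid
            cx = (xs * sub).sum() / total
            # displacement from the geometric centre of the subaperture
            slopes[i, j] = (cy - (sh - 1) / 2.0, cx - (sw - 1) / 2.0)
    return slopes
```

In NAOS, slope maps of this kind, refreshed about 500 times per second, drive the 185-actuator deformable mirror through the control computer.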

  13. Journal of Chemical Education on CD-ROM, 1999

    NASA Astrophysics Data System (ADS)

    1999-12-01

    The Journal of Chemical Education on CD-ROM contains the text and graphics for all the articles, features, and reviews published in the Journal of Chemical Education. This 1999 issue of the JCE CD series includes all twelve issues of 1999, as well as all twelve issues from 1998 and from 1997, and the September-December issues from 1996. Journal of Chemical Education on CD-ROM is formatted so that all articles on the CD retain as much as possible of their original appearance. Each article file begins with an abstract/keyword page followed by the article pages. All pages of the Journal that contain editorial content, including the front covers, table of contents, letters, and reviews, are included. Also included are abstracts (when available), keywords for all articles, and supplementary materials. The Journal of Chemical Education on CD-ROM has proven to be a useful tool for chemical educators. Like the Computerized Index to the Journal of Chemical Education (1) it will help you to locate articles on a particular topic or written by a particular author. In addition, having the complete article on the CD-ROM provides added convenience. It is no longer necessary to go to the library, locate the Journal issue, and read it while sitting in an uncomfortable chair. With a few clicks of the mouse, you can scan an article on your computer monitor, print it if it proves interesting, and read it in any setting you choose. Searching and Linking JCE CD is fully searchable for any word, partial word, or phrase. Successful searches produce a listing of articles that contain the requested text. Individual articles can be quickly accessed from this list. The Table of Contents of each issue is linked to individual articles listed. There are also links from the articles to any supplementary materials. 
References in the Chemical Education Today section (found in the front of each issue) to articles elsewhere in the issue are also linked to the article, as are WWW addresses and email addresses. If you have Internet access and a WWW browser and email utility, you can go directly to the Web site or prepare to send a message with a single mouse click. Full-text searching of the entire CD enables you to find the articles you want. Price and Ordering An order form is inserted in this issue that provides prices and other ordering information. If this insert is not available or if you need additional information, contact: JCE Software, University of Wisconsin-Madison, 1101 University Avenue, Madison, WI 53706-1396; phone: 608/262-5153 or 800/991-5534; fax: 608/265-8094; email: jcesoft@chem.wisc.edu. Information about all our publications (including abstracts, descriptions, updates) is available from our World Wide Web site at: http://jchemed.chem.wisc.edu/JCESoft/. Hardware and Software Requirements Hardware and software requirements for JCE CD 1999 are listed in the table below: Literature Cited 1. Schatz, P. F. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-M. Schatz, P. F.; Jacobsen, J. J. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-W.

  14. Web-based Tool Suite for Plasmasphere Information Discovery

    NASA Astrophysics Data System (ADS)

    Newman, T. S.; Wang, C.; Gallagher, D. L.

    2005-12-01

A suite of tools that enable discovery of terrestrial plasmasphere characteristics from NASA IMAGE Extreme Ultraviolet (EUV) images is described. The tool suite is web-accessible, allowing easy remote access without the need for any software installation on the user's computer. The features supported by the tool include reconstruction of the plasmasphere plasma density distribution from a short sequence of EUV images, semi-automated selection of the plasmapause boundary in an EUV image, and mapping of the selected boundary to the geomagnetic equatorial plane. EUV image upload and result download are also supported. The tool suite's plasmapause mapping feature is achieved via the Roelof and Skinner (2000) Edge Algorithm. The plasma density reconstruction is achieved through a tomographic technique that exploits physical constraints to allow for a moderate-resolution result. The tool suite's software architecture uses Java Server Pages (JSP) and Java Applets on the front end for user-software interaction and Java Servlets on the server side for task execution. The compute-intensive components of the tool suite are implemented in C++ and invoked by the server via the Java Native Interface (JNI).

  15. Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.

    PubMed

    Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward

    2006-08-01

    Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
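Since JPEG 2000 is a decoder standard and does not mandate how bits are split among slices, the "optimal" flavor described above can be sketched generically: choose one operating point per slice so that every slice works at a common rate-distortion slope λ, and bisect on λ until the rate budget is met. The following Python fragment is only an illustrative Lagrangian sketch (names and data layout are invented here, not the authors' code):

```python
def allocate(slices, budget):
    """Choose one (rate, distortion) point per slice with total rate <= budget
    and near-minimal total distortion.

    `slices`: one R-D curve per slice, a list of (rate, distortion) pairs
    with increasing rate and decreasing distortion (convex-hull points).
    At the Lagrangian optimum every slice operates where its R-D slope
    equals a common multiplier lambda; we bisect on lambda to meet budget."""
    def choose(lmbda):
        # each slice independently minimizes its Lagrangian cost D + lambda*R
        return [min(curve, key=lambda p: p[1] + lmbda * p[0])
                for curve in slices]

    lo, hi = 0.0, 1e9
    for _ in range(100):                       # bisection on lambda
        mid = (lo + hi) / 2
        if sum(r for r, _ in choose(mid)) > budget:
            lo = mid                           # over budget: penalize rate more
        else:
            hi = mid
    return choose(hi)                          # hi side always meets the budget
```

Because only convex-hull operating points are reachable this way, a given budget may be met only approximately; the MM approach instead fits a two-region analytical model to each slice's R-D curve to avoid the exhaustive search.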

  16. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology.

    PubMed

    Markiewicz, Tomasz

    2011-03-30

The Matlab software is one of the most advanced development tools for applications in engineering practice. From our point of view the most important is the image processing toolbox, offering many built-in functions, including mathematical morphology, and implementations of many artificial neural networks. It is a very popular platform for creating specialized programs for image analysis, also in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on JavaServer Pages (JSP) with the Tomcat server as the servlet container. In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. A Matlab function must be declared with a set of input data, an output structure with numerical results, and a Matlab web figure. Any function prepared in that manner can be used as a Java function in JSP. The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the Matlab algorithm, with the help of the Matlab Database Toolbox, directly alongside the image processing. The complete JSP page can be run by the Tomcat server. The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters, etc.). When the analysis is initialized, the input data and image are sent to the servlet on Tomcat. When the analysis is done, the client obtains the graphical results as an image with the recognized cells marked, as well as the quantitative output.
Additionally, the results are stored in a server database. The internet platform was tested on a PC Intel Core2 Duo T9600 2.8 GHz 4 GB RAM server with 768x576-pixel, 1.28 MB tiff-format images referring to a meningioma tumour (x400, Ki-67/MIB-1). The time consumption was as follows: analysis by CAMI locally on the server - 3.5 seconds; remote analysis - 26 seconds, of which 22 seconds were used for data transfer via the internet connection. With a jpg-format image (102 KB) the time was reduced to 14 seconds. The results have confirmed that the designed remote platform can be useful for pathology image analysis. The time consumption depends mainly on the image size and the speed of the internet connection. The presented implementation can be used for many types of analysis with different staining, tissue, and morphometry approaches, etc. A significant open problem is the implementation of the JSP page in multithreaded form, so that it can be used in parallel by many users. The presented platform for image analysis in pathology can be especially useful for a small laboratory without its own image analysis system.

  17. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology

    PubMed Central

    2011-01-01

Background The Matlab software is one of the most advanced development tools for applications in engineering practice. From our point of view the most important is the image processing toolbox, offering many built-in functions, including mathematical morphology, and implementations of many artificial neural networks. It is a very popular platform for creating specialized programs for image analysis, also in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on JavaServer Pages (JSP) with the Tomcat server as the servlet container. Methods In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. A Matlab function must be declared with a set of input data, an output structure with numerical results, and a Matlab web figure. Any function prepared in that manner can be used as a Java function in JSP. The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the Matlab algorithm, with the help of the Matlab Database Toolbox, directly alongside the image processing. The complete JSP page can be run by the Tomcat server. Results The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters, etc.). When the analysis is initialized, the input data and image are sent to the servlet on Tomcat. When the analysis is done, the client obtains the graphical results as an image with the recognized cells marked, as well as the quantitative output.
Additionally, the results are stored in a server database. The internet platform was tested on a PC server with an Intel Core2 Duo T9600 2.8 GHz CPU and 4 GB RAM, using 768x576 pixel, 1.28 MB TIFF images referring to a meningioma tumour (x400, Ki-67/MIB-1). The time consumption was as follows: for analysis by CAMI locally on the server, 3.5 seconds; for remote analysis, 26 seconds, of which 22 seconds were used for data transfer via the internet connection. For a JPEG image (102 KB) the time was reduced to 14 seconds. Conclusions The results have confirmed that the designed remote platform can be useful for pathology image analysis. The time consumption depends mainly on the image size and the speed of the internet connection. The presented implementation can be used for many types of analysis with different stainings, tissues, morphometry approaches, etc. A significant remaining problem is the implementation of the JSP page in multithreaded form, so that it can be used by many users in parallel. The presented platform for image analysis in pathology can be especially useful for small laboratories without their own image analysis system. PMID:21489188
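
    The request/response contract this record describes (client uploads an image plus case data; the servlet invokes the compiled Matlab routine and returns quantitative results together with an annotated image) can be sketched in outline. The sketch below is a hypothetical Python stand-in for the JSP/Matlab stack, with an invented toy "analysis"; every name in it is an assumption, not code from the paper:

```python
# Schematic sketch of the remote-analysis round trip. analyze_image stands in
# for the compiled Matlab routine behind the servlet; its "analysis" is a toy.
from dataclasses import dataclass

@dataclass
class CaseInfo:
    diagnosis: str
    staining: str

def analyze_image(image_bytes: bytes, case: CaseInfo) -> dict:
    """Return the quantitative output plus an annotated image, mirroring
    the two kinds of results the client receives."""
    cell_count = image_bytes.count(0xFF)   # toy stand-in for cell recognition
    annotated = image_bytes                # real code would mark recognized cells
    return {
        "diagnosis": case.diagnosis,
        "staining": case.staining,
        "cell_count": cell_count,
        "annotated_image": annotated,
    }

# One "request": an image payload plus the case metadata the user entered.
result = analyze_image(b"\xff\xd8\xff\xe0fake-jpeg", CaseInfo("meningioma", "Ki-67/MIB-1"))
print(result["cell_count"])  # → 2
```

    The point of the shape is that the server returns both a machine-readable structure (for the database) and a displayable image, which matches the two result channels the abstract mentions.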

  18. High-fidelity data embedding for image annotation.

    PubMed

    He, Shan; Kirovski, Darko; Wu, Min

    2009-02-01

    High fidelity is a demanding requirement for data hiding, especially for images with artistic or medical value. This correspondence proposes a high-fidelity image watermarking scheme for annotation with robustness to moderate distortion. To achieve high fidelity in the embedded image, we introduce a visual perception model that aims at quantifying the local tolerance to noise for arbitrary imagery. Based on this model, we embed two kinds of watermarks: a pilot watermark that indicates the existence of the watermark and an information watermark that conveys a payload of several dozen bits. The objective is to embed 32 bits of metadata into a single image in such a way that the embedding is robust to JPEG compression and cropping. We demonstrate the effectiveness of the visual model and the application of the proposed annotation technology using a database of challenging photographic and medical images that contain large smooth regions.

  19. HUBBLE SHOWS EXPANSION OF ETA CARINAE DEBRIS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The furious expansion of a huge, billowing pair of gas and dust clouds is captured in this NASA Hubble Space Telescope comparison image of the supermassive star Eta Carinae. To create the picture, astronomers aligned and subtracted two images of Eta Carinae taken 17 months apart (April 1994, September 1995). Black represents where the material was located in the older image, and white represents its more recent location. (The light and dark streaks that make an 'X' pattern are instrumental artifacts caused by the extreme brightness of the central star. The bright white region at the center of the image results from the star and its immediate surroundings being 'saturated' in one of the images.) Photo Credit: Jon Morse (University of Colorado), Kris Davidson (University of Minnesota), and NASA. Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from oposite.stsci.edu in /pubinfo.

  20. Web-based Quality Control Tool used to validate CERES products on a cluster of Linux servers

    NASA Astrophysics Data System (ADS)

    Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Mlynczak, P.; Mitrescu, C.; Doelling, D.

    2014-12-01

    There have been a few popular desktop tools used in the Earth Science community to validate science data. Because of the limited capacity of desktop hardware, such as disk space and CPUs, those tools are not able to display large amounts of data from files. This poster presents in-house developed web-based software built on a cluster of Linux servers, which allows users to take advantage of several Linux servers working in parallel to generate hundreds of images in a short period of time. The poster will demonstrate: (1) the hardware and software architecture used to provide high throughput of images; (2) the software structure, which can incorporate new products and new requirements quickly; (3) the user interface, showing how users can manipulate the data and control how the images are displayed.
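
    The fan-out idea behind the cluster (many workers each rendering a share of the images in parallel) can be sketched with a worker pool. This is a minimal illustrative sketch, not the poster's code; `render_image` and the granule naming are invented stand-ins:

```python
# Minimal sketch of parallel image generation across a pool of workers,
# standing in for the cluster of Linux servers. render_image is hypothetical;
# a real worker would run a plotting job for one CERES product granule.
from concurrent.futures import ThreadPoolExecutor

def render_image(granule_id: int) -> str:
    # Placeholder for the CPU-heavy rendering of one image.
    return f"image_{granule_id:04d}.png"

def render_all(granule_ids, workers=4):
    # Each worker handles a share of the granules, so hundreds of images
    # can be produced in a fraction of the serial time.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_image, granule_ids))

names = render_all(range(8))
print(len(names), names[0])  # → 8 image_0000.png
```

    The same pattern scales to processes or to separate machines; only the executor (and a job-distribution layer) changes, not the per-image rendering function.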

  1. Development of a system for transferring images via a network: supporting a regional liaison.

    PubMed

    Mihara, Naoki; Manabe, Shiro; Takeda, Toshihiro; Kitamura, Shinichirou; Murakami, Junichi; Kiso, Kouji; Matsumura, Yasushi

    2013-01-01

    We developed a system that transfers images via a network and began using it in our hospital's PACS (Picture Archiving and Communication System) in 2006. The system has since been redeveloped and is now running so that it can support a regional liaison in the future. It has become possible to automatically transfer images simply by selecting a destination hospital that has been registered in advance at the relay server. The gateway of this system can send images to a multi-center relay management server, which receives the images and resends them. This system has the potential to be useful for image exchange and to serve as a regional medical liaison.

  2. Automatic Thermal Infrared Panoramic Imaging Sensor

    DTIC Science & Technology

    2006-11-01

    hibernation, in which power supply to the server computer, the wireless network hardware, the GPS receiver, and the electronic compass/tilt sensor... prototype. At the operator's command on the client laptop, the receiver wakeup device on the server side will switch on the ATX power supply at the... server, to resume the power supply to all the APTIS components. The embedded computer will resume all of the functions it was performing when put

  3. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms; thus, the framework helps decrease the simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation caused by the atmosphere, 2) degradation caused by the optical system, 3) degradation caused by the TDI-CCD electronics together with a re-sampling process, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution, and Lagrange interpolation, which require a powerful CPU. Even with an Intel Xeon X5550 processor, a conventional serial method takes more than 30 hours for a simulation whose result image size is 1500 x 1462. A literature study shows there is no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, based on WCF [1], which uses a Client/Server (C/S) architecture and invokes the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity. Ultimately we achieved HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced the simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide virtually unlimited computation capacity, provided that the network and the task-management server can support it. It is a brand-new HPC solution for TDI-CCD imaging simulation and similar applications.
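
    The strategy pattern the abstract mentions can be sketched as interchangeable degradation stages behind a single interface. The class and method names below are illustrative assumptions (the paper does not publish its code), and the "blur" is a toy scaling rather than a real FFT or convolution:

```python
# Strategy-pattern sketch: each simulation stage (atmosphere, optics,
# electronics) is a swappable strategy behind one interface, so the
# framework can be configured with different algorithms per stage.
from abc import ABC, abstractmethod

class DegradationStage(ABC):
    """One configurable simulation stage."""
    @abstractmethod
    def apply(self, image: list) -> list: ...

class AtmosphereDegradation(DegradationStage):
    def apply(self, image):
        # stand-in for an FFT/convolution-based atmospheric blur
        return [0.5 * v for v in image]

class OpticsDegradation(DegradationStage):
    def apply(self, image):
        return [0.5 * v for v in image]

class SimulationPipeline:
    """Runs the configured stages in order; a distributed scheduler could
    push each stage's work to free nodes on the LAN, as the paper does."""
    def __init__(self, stages):
        self.stages = stages

    def run(self, image):
        for stage in self.stages:
            image = stage.apply(image)
        return image

out = SimulationPipeline([AtmosphereDegradation(), OpticsDegradation()]).run([1.0, 2.0])
print(out)  # → [0.25, 0.5]
```

    Because every stage satisfies the same interface, swapping one algorithm for another (or dispatching a stage to a remote node) does not disturb the rest of the pipeline, which is the configurability the framework relies on.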

  4. Hunting the Southern Skies with SIMBA

    NASA Astrophysics Data System (ADS)

    2001-08-01

    First Images from the New "Millimetre Camera" on SEST at La Silla Summary A new instrument, SIMBA ("SEST IMaging Bolometer Array"), was installed at the Swedish-ESO Submillimetre Telescope (SEST) at the ESO La Silla Observatory in July 2001. It records astronomical images at a wavelength of 1.2 mm and is able to quickly map large sky areas. In order to achieve the best possible sensitivity, SIMBA is cooled to only 0.3 degrees above absolute zero. SIMBA is the first imaging millimetre instrument in the southern hemisphere. Radiation at this wavelength is mostly emitted by cold dust and ionized gas in a variety of objects in the Universe. Among others, SIMBA now opens exciting prospects for in-depth studies of the "hidden" sites of star formation, deep inside dense interstellar nebulae. While such clouds are impenetrable to optical light, they are transparent to millimetre radiation, and SIMBA can therefore observe the associated phenomena, in particular the dust around nascent stars. This sophisticated instrument can also search for disks of cold dust around nearby stars in which planets are being formed or which may be left-overs of this basic process. Equally important, SIMBA may observe extremely distant galaxies in the early universe, recording them while they were still in the formation stage. Various SIMBA images have been obtained during the first tests of the new instrument. The first observations confirm the great promise for unique astronomical studies of the southern sky in the millimetre wavelength region. These results also pave the way towards the Atacama Large Millimeter Array (ALMA), the giant, joint research project that is now under study in Europe, the USA and Japan. 
PR Photo 28a/01 : SIMBA image centered on the infrared source IRAS 17175-3544 PR Photo 28b/01 : SIMBA image centered on the infrared source IRAS 18434-0242 PR Photo 28c/01 : SIMBA image centered on the infrared source IRAS 17271-3439 PR Photo 28d/01 : View of the SIMBA instrument First observations with SIMBA SIMBA ("SEST IMaging Bolometer Array") was built and installed at the Swedish-ESO Submillimetre Telescope (SEST) at La Silla (Chile) within an international collaboration between the University of Bochum and the Max Planck Institute for Radio Astronomy in Germany, the Swedish National Facility for Radio Astronomy and ESO . The SIMBA ("Lion" in Swahili) instrument detects radiation at a wavelength of 1.2 mm . It has 37 "horns" and acts like a camera with 37 picture elements (pixels). By changing the pointing direction of the telescope, relatively large sky fields can be imaged. As the first and only imaging millimetre instrument in the southern hemisphere , SIMBA now looks up towards rich and virgin hunting grounds in the sky. Observations at millimetre wavelengths are particularly useful for studies of star formation , deep inside dense interstellar clouds that are impenetrable to optical light. Other objects for which SIMBA is especially suited include planet-forming disks of cold dust around nearby stars and extremely distant galaxies in the early universe , still in the stage of formation. During the first observations, SIMBA was used to study the gas and dust content of star-forming regions in our own Milky Way Galaxy, as well as in the Magellanic Clouds and more distant galaxies. It was also used to record emission from planetary nebulae , clouds of matter ejected by dying stars. Moreover, attempts were made to detect distant galaxies and quasars radiating at mm-wavelengths and located in two well-studied sky fields, the "Hubble Deep Field South" and the "Chandra Deep Field" [1]. 
Observations with SEST and SIMBA also serve to identify objects that can be observed at higher resolution and at shorter wavelengths with future southern submm telescopes and interferometers such as APEX (see MPG Press Release 07/01 of 6 July 2001) and ALMA. SIMBA images regions of high-mass star formation ESO PR Photo 28a/01 ESO PR Photo 28a/01 [Preview - JPEG: 400 x 568 pix - 61k] [Normal - JPEG: 800 x 1136 pix - 200k] Caption : This intensity-coded, false-colour SIMBA image is centered on the infrared source IRAS 17175-3544 and covers the well-known high-mass star formation complex NGC 6334 , at a distance of 5500 light-years. The southern bright source is an ultra-compact region of ionized hydrogen ("HII region") created by a star or several stars already formed. The northern bright source has not yet developed an HII region and may be a star or a cluster of stars that are presently forming. A remarkable, narrow, linear dust filament extends over the image; it was known to exist before, but the SIMBA image now shows it to a much larger extent and much more clearly. This and the following images cover an area of about 15 arcmin x 6 arcmin on the sky and have a pixel size of 8 arcsec. ESO PR Photo 28b/01 ESO PR Photo 28b/01 [Preview - JPEG: 532 x 400 pix - 52k] [Normal - JPEG: 1064 x 800 pix - 168k] Caption : This SIMBA image is centered on the object IRAS 18434-0242 . It includes many bright sources that are associated with dense cores and compact HII regions located deep inside the cloud. A much less detailed map was made several years ago with a single channel bolometer on SEST. The new SIMBA map is more extended and shows more sources. ESO PR Photo 28c/01 ESO PR Photo 28c/01 [Preview - JPEG: 400 x 505 pix - 59k] [Normal - JPEG: 800 x 1009 pix - 160k] Caption : Another SIMBA image is centered on IRAS 17271-3439 and includes an extended bright source that is associated with several compact HII regions as well as a cluster of weaker sources. 
Some of the recent SIMBA images are shown above; they were taken during test observations, and within a pilot survey of high-mass star-forming regions. Stars form in interstellar clouds that consist of gas and dust. The denser parts of these clouds can collapse into cold and dense cores which may form stars. Often many stars are formed in clusters, at about the same time. The newborn stars heat up the surrounding regions of the cloud. Radiation is emitted, first at mm-wavelengths and later at infrared wavelengths as the cloud core gets hotter. If very massive stars are formed, their UV radiation ionizes the immediately surrounding gas, and this ionized gas also emits at mm-wavelengths. These ionized regions are called ultra-compact HII regions. Because the stars form deep inside the interstellar clouds, the obscuration at visible wavelengths is very high and it is not possible to see these regions optically. The objects selected for the SIMBA survey are from a catalog of objects first detected at long infrared wavelengths with the IRAS satellite (launched in 1983), hence the designations indicated in Photos 28a-c/01. From 1995 to 1998, the ESA Infrared Space Observatory (ISO) gathered an enormous amount of valuable data, obtaining images and spectra in the broad infrared wavelength region from 2.5 to 240 µm (0.025 to 0.240 mm), i.e. just shortward of the millimetre region in which SIMBA operates. ISO produced mid-infrared images of field size and angular resolution (sharpness) comparable to those of SIMBA. It will obviously be most interesting to combine the images that will be made with SIMBA with imaging and spectral data from ISO and also with those obtained by large ground-based telescopes in the near- and mid-infrared spectral regions. 
Some technical details about the SIMBA instrument ESO PR Photo 28d/01 ESO PR Photo 28d/01 [Preview - JPEG: 509 x 400 pix - 83k] [Normal - JPEG: 1017 x 800 pix - 528k] Caption: The SIMBA instrument - with the cover removed - in the SEST electronics laboratory. The 37 antenna horns are to the right; each of them produces one picture element (pixel) of the combined image. The bolometer elements are located behind the horns. The cylindrical aluminium-foil-covered unit is the cooler that keeps SIMBA at extremely low temperature (-272.85 °C, or only 0.3 degrees above absolute zero) when it is mounted in the telescope. SIMBA is unique because of its ability to quickly map large sky areas thanks to its fast scanning mode. In order to achieve low noise and good sensitivity, the instrument is cooled to only 0.3 degrees above absolute zero, i.e., to -272.85 °C. SIMBA consists of 37 horns (each providing one pixel on the sky) arranged in a hexagonal pattern, cf. Photo 28d/01. To form images, the sky position of the telescope is changed according to a raster pattern - in this way all of a celestial object and the surrounding sky field may be "scanned" quickly, at speeds of typically 80 arcsec per second. This makes SIMBA a very efficient facility: for instance, a fully sampled image of good sensitivity with a field size of 15 arcmin x 6 arcmin can be taken in 15 minutes. If higher sensitivity is needed (to observe fainter sources), more images may be obtained of the same field and then added together. Large sky areas can be covered by combining many images taken at different positions. The image resolution (the "telescope beamsize") is 22 arcsec, corresponding to the angular resolution of this 15-m telescope at the indicated wavelength. Note [1]: Observations of the HDFS and CDFS fields in other wavebands with other telescopes at the ESO observatories have been reported earlier, e.g. within the ESO Imaging Survey Project (EIS) (the "EIS Deep-Survey"). 
It is ESO policy to make the data on these fields publicly available world-wide.

  5. HOT WHITE DWARF SHINES IN YOUNG STAR CLUSTER

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A dazzling 'jewel-box' collection of over 20,000 stars can be seen in crystal clarity in this NASA Hubble Space Telescope image, taken with the Wide Field and Planetary Camera 2. The young (40 million year old) cluster, called NGC 1818, is 164,000 light-years away in the Large Magellanic Cloud (LMC), a satellite galaxy of our Milky Way. The LMC, a site of vigorous current star formation, is an ideal nearby laboratory for studying stellar evolution. In the cluster, astronomers have found a young white dwarf star, which has only very recently formed following the burnout of a red giant. Based on this observation astronomers conclude that the red giant progenitor star was 7.6 times the mass of our Sun. Previously, astronomers have estimated that stars anywhere from 6 to 10 solar masses would not just quietly fade away as white dwarfs but abruptly self-destruct in torrential explosions. Hubble can easily resolve the star in the crowded cluster, and detect its intense blue-white glow from a sizzling surface temperature of 50,000 degrees Fahrenheit. IMAGE DATA Date taken: December 1995 Wavelength: natural color reconstruction from three filters (I,B,U) Field of view: 100 light-years, 2.2 arc minutes TARGET DATA Name: NGC 1818 Distance: 164,000 light-years Constellation: Dorado Age: 40 million years Class: Rich star cluster Apparent magnitude: 9.7 Apparent diameter: 7 arc minutes Credit: Rebecca Elson and Richard Sword, Cambridge UK, and NASA (Original WFPC2 image courtesy J. Westphal, Caltech) Image files are available electronically via the World Wide Web at: http://oposite.stsci.edu/pubinfo/1998/16 and via links in http://oposite.stsci.edu/pubinfo/latest.html or http://oposite.stsci.edu/pubinfo/pictures.html. GIF and JPEG images are available via anonymous ftp to oposite.stsci.edu in /pubinfo/GIF/9816.GIF and /pubinfo/JPEG/9816.jpg.

  6. Baseline coastal oblique aerial photographs collected from Calcasieu Lake, Louisiana, to Brownsville, Texas, September 9-10, 2008

    USGS Publications Warehouse

    Morgan, Karen L. M.; Westphal, Karen A.

    2016-04-28

    The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 9-10, 2008, the USGS conducted an oblique aerial photographic survey from Calcasieu Lake, Louisiana, to Brownsville, Texas, aboard a Cessna C-210 aircraft at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes of the beach and nearshore area, and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on a marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. The KML file can be found in the kml folder.
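
    A KML file of the kind described here (one clickable marker per photograph, built from the navigation data) can be approximated with the standard library alone. The tuple layout and file name below are assumptions for illustration, not the USGS navigation-file format:

```python
# Sketch of generating a KML file with one Placemark per photo location.
import xml.etree.ElementTree as ET

def build_kml(photos):
    """photos: iterable of (name, lat, lon) tuples -> KML document string."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for name, lat, lon in photos:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        # KML coordinates are written in longitude,latitude order.
        point = ET.SubElement(pm, "Point")
        ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

# Hypothetical photo record: file name plus aircraft position.
kml_text = build_kml([("DSC_0001.JPG", 29.77, -93.34)])
print("Placemark" in kml_text)  # → True
```

    A real version would also attach the thumbnail and photo link in a `description` element so that clicking a marker shows the image, as the report's KML does.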

  7. Post-Hurricane Sandy coastal oblique aerial photographs collected from Cape Lookout, North Carolina, to Montauk, New York, November 4-6, 2012

    USGS Publications Warehouse

    Morgan, Karen L.M.; Krohn, M. Dennis

    2014-01-01

    The U.S. Geological Survey (USGS) conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On November 4-6, 2012, approximately one week after the landfall of Hurricane Sandy, the USGS conducted an oblique aerial photographic survey from Cape Lookout, N.C., to Montauk, N.Y., aboard a Piper Navajo Chieftain aircraft at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect post-Hurricane Sandy data for assessing incremental changes in the beach and nearshore area since the last survey in 2009. The data can be used in the assessment of future coastal change. The photographs provided here are Joint Photographic Experts Group (JPEG) images. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of the features in the images. These photos document the configuration of the barrier islands and other coastal features at the time of the survey. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, image name, date, and time at which each of the 9,481 photographs was taken, along with links to each photograph. The photographs are organized in segments, also referred to as contact sheets, each representing approximately 5 minutes of flight time. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on a marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.

  8. Study on parallel and distributed management of RS data based on spatial database

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, RS image data storage, management, and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a tough burden is put on the background server. Second, there is no unique, standard, and rational organization of multi-sensor RS data for storage and management, and much information is lost or not included at storage time. Facing these two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data for different resolutions, areas, bands, and periods is achieved. For data storage, RS data is not divided into binary large objects stored in a conventional relational database; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. For the system architecture, this paper sets up a framework based on a parallel server of several commodity computers. Under this framework, the background process is divided into two parts: the common web process and the parallel process.
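
    The "Pyramid, Block, Layer, Epoch" solid index can be sketched as a composite tile key over resolution level, spatial block, spectral band, and acquisition period. The encoding below is a hedged illustration; the paper does not give a concrete key layout:

```python
# Hedged sketch of the four-dimensional solid index: every tile of a
# multi-sensor RS collection is addressed by (pyramid, block, layer, epoch).
from typing import NamedTuple

class TileKey(NamedTuple):
    pyramid: int   # resolution level, 0 = full resolution
    block: int     # spatial block id within the level
    layer: int     # spectral band
    epoch: int     # acquisition-period id

tiles = {}  # stands in for the logical image database

def put_tile(key: TileKey, data: bytes):
    tiles[key] = data

def get_tile(pyramid, block, layer, epoch):
    return tiles.get(TileKey(pyramid, block, layer, epoch))

put_tile(TileKey(2, 17, 3, 2009), b"tile-bytes")
print(get_tile(2, 17, 3, 2009))  # → b'tile-bytes'
```

    Because every tile is addressable by this one key, tiles can be spread across nodes (e.g. by hashing the key) while the background server still resolves any resolution/area/band/period request uniformly, which is the organizational point the paper makes.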

  9. Study on parallel and distributed management of RS data based on spatial data base

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Liu, Shijin

    2006-12-01

    With the rapid development of current earth-observing technology, RS image data storage, management, and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a tough burden is put on the background server. Second, there is no unique, standard, and rational organization of multi-sensor RS data for storage and management, and much information is lost or not included at storage time. Facing these two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data for different resolutions, areas, bands, and periods is achieved. For data storage, RS data is not divided into binary large objects stored in a conventional relational database; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. For the system architecture, this paper sets up a framework based on a parallel server of several commodity computers. Under this framework, the background process is divided into two parts: the common web process and the parallel process.

  10. Feeling the Heat

    NASA Astrophysics Data System (ADS)

    2004-05-01

    Successful "First Light" for the Mid-Infrared VISIR Instrument on the VLT Summary Close to midnight on April 30, 2004, intriguing thermal infrared images of dust and gas heated by invisible stars in a distant region of our Milky Way appeared on a computer screen in the control room of the ESO Very Large Telescope (VLT). These images mark the successful "First Light" of the VLT Imager and Spectrometer in the InfraRed (VISIR), the latest instrument to be installed on this powerful telescope facility at the ESO Paranal Observatory in Chile. The event was greeted with a mixture of delight, satisfaction and some relief by the team of astronomers and engineers from the consortium of French and Dutch Institutes and ESO who have worked on the development of VISIR for around 10 years [1]. Pierre-Olivier Lagage (CEA, France), the Principal Investigator, is content : "This is a wonderful day! A result of many years of dedication by a team of engineers and technicians, who can today be proud of their work. With VISIR, astronomers will have at their disposal a great instrument on a marvellous telescope. And the gain is enormous; 20 minutes of observing with VISIR is equivalent to a whole night of observing on a 3-4m class telescope." Dutch astronomer and co-PI Jan-Willem Pel (Groningen, The Netherlands) adds: "What's more, VISIR features a unique observing mode in the mid-infrared: spectroscopy at a very high spectral resolution. This will open up new possibilities such as the study of warm molecular hydrogen most likely to be an important component of our galaxy." 
PR Photo 16a/04: VISIR under the Cassegrain focus of the Melipal telescope PR Photo 16b/04: VISIR mounted behind the mirror of the Melipal telescope PR Photo 16c/04: Colour composite of the star forming region G333.6-0.2 PR Photo 16d/04: Colour composite of the Galactic Centre PR Photo 16e/04: The Ant Planetary Nebula at 12.8 μm PR Photo 16f/04: The starburst galaxy He2-10 at 11.3μm PR Photo 16g/04: High-resolution spectrum of G333.6-0.2 around 12.8μm PR Photo 16h/04: High-resolution spectrum of the Ant Planetary Nebula around 12.8μm From cometary tails to centres of galaxies The mid-infrared spectral region extends from a few to a few tens of microns in wavelength and provides a unique view of our Universe. Optical astronomy, that is astronomy at wavelengths to which our eyes are sensitive, is mostly directed towards light emitted by gas, be it in stars, nebulae or galaxies. Mid-Infrared astronomy, however, allows us to also detect solid dust particles at temperatures of -200 to +300 °C. Dust is very abundant in the universe in many different environments, ranging from cometary tails to the centres of galaxies. This dust also often totally absorbs and hence blocks the visible light reaching us from such objects. Red light, and especially infrared light, can propagate much better in dust clouds. Many important astrophysical processes occur in regions of high obscuration by dust, most notably star formation and the late stages of their evolution, when stars that have burnt nearly all their fuel shed much of their outer layers and dust grains form in their "stellar wind". Stars are born in so-called molecular clouds. The proto-stars feed from these clouds and are shielded from the outside by them. Infrared is a tool - very much as ultrasound is for medical inspections - for looking into those otherwise hidden regions to study the stellar "embryos". It is thus crucial to also observe the Universe in the infrared and mid-infrared. 
Unfortunately, there are also infrared-emitting molecules in the Earth's atmosphere, e.g. water vapour, nitric oxides, ozone and methane. Because of these gases, the atmosphere is completely opaque at certain wavelengths, except in a few "windows" where the Earth's atmosphere is transparent. Even in these windows, however, the sky and telescope emit so much infrared radiation that observing in the mid-infrared at night is comparable to trying to do optical astronomy in daytime. Ground-based infrared astronomers have thus become extremely adept at developing special techniques called "chopping" and "nodding" for detecting the extremely faint astronomical signals against this unwanted bright background [3]. VISIR: an extremely complex instrument VISIR - the VLT Imager and Spectrometer in the InfraRed - is a complex multi-mode instrument designed to operate in the 10 and 20 μm atmospheric windows, i.e. at wavelengths up to about 40 times longer than visible light, and to provide images as well as spectra at a wide range of resolving powers up to ~30,000. It can sample images down to the diffraction limit of the 8.2-m Melipal telescope (0.27 arcsec at 10 μm wavelength, corresponding to a resolution of 500 m on the Moon), which is expected to be reached routinely thanks to the excellent seeing conditions experienced for a large fraction of the time at the VLT [2]. Because at room temperature the metal and glass of VISIR would emit strongly at exactly the same wavelengths and would swamp any faint mid-infrared astronomical signals, the whole VISIR instrument is cooled to a temperature close to -250 °C and its two panoramic 256x256 pixel array detectors to even lower temperatures, only a few degrees above absolute zero. It is also kept in a vacuum tank to avoid the condensation of water and icing which would otherwise occur. 
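
    The chopping and nodding mentioned above can be illustrated numerically: chopping (rapidly switching between the source and nearby sky) removes the bright sky emission, and nodding (moving the telescope so the source sits in the other chop beam) removes the residual beam-dependent telescope offset. A toy double-difference calculation, with invented numbers rather than VISIR data:

```python
# Toy chop/nod double difference. The large sky term and the beam-dependent
# telescope offset both cancel, leaving only the astronomical signal.
S, SKY, OFF_A, OFF_B = 5.0, 1000.0, 2.0, -3.0   # source, sky, beam offsets

# Nod position 1: the source sits in chop beam A.
chop1 = (S + SKY + OFF_A) - (SKY + OFF_B)        # = S + (OFF_A - OFF_B)
# Nod position 2: telescope nodded, the source now sits in beam B.
chop2 = (SKY + OFF_A) - (S + SKY + OFF_B)        # = (OFF_A - OFF_B) - S

signal = (chop1 - chop2) / 2
print(signal)  # → 5.0
```

    Note that each chop difference alone still carries the offset term; only the second difference between the two nod positions isolates the source, which is why both techniques are used together.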
The complete instrument is mounted on the telescope and must remain rigid to within a few thousandths of a millimetre as the telescope moves to acquire and then track objects anywhere in the sky. Needless to say, this makes for an extremely complex instrument and explains the many years needed to develop it and bring it to the telescope on the top of Paranal. VISIR also includes a number of important technological innovations, most notably its unique cryogenic motor drive systems comprising integrated stepper motors, gears and clutches whose shape is similar to that of the box of the famous French Camembert cheese. VISIR is mounted on Melipal ESO PR Photo 16a/04 VISIR under the Cassegrain focus of the Melipal telescope [Preview - JPEG: 400 x 476 pix - 271k] [Normal - JPEG: 800 x 951 pix - 600k] ESO PR Photo 16b/04 VISIR mounted behind the mirror of the Melipal telescope [Preview - JPEG: 400 x 603 pix - 366k] [Normal - JPEG: 800 x 1206 pix - 945k] Caption: ESO PR Photo 16a/04 shows VISIR about to be attached at the Cassegrain focus of the Melipal telescope. On ESO PR Photo 16b/04, VISIR appears much smaller once mounted behind the enormous 8.2-m diameter mirror of the Melipal telescope. The fully integrated VISIR plus all the associated equipment (amounting to a total of around 8 tons) was air-freighted from Paris to Santiago de Chile and, after a further 1500 km journey by road, arrived at the Paranal Observatory on 25th March. Following tests to confirm that nothing had been damaged, VISIR was mounted on the third VLT telescope "Melipal" on April 27th. PR Photos 16a/04 and 16b/04 show the approximately 1.6 tons of VISIR being mounted at the Cassegrain focus, below the 8.2-m main mirror. First technical light on a star was achieved on April 29th, shortly after VISIR had been cooled down to its operating temperature. This made it possible to proceed with the first basic operations and tests, including focusing the telescope. 
While telescope focusing was one of the difficult and frequent tasks faced by astronomers in the past, this is no longer so with the active optics feature of the VLT telescopes, which in principle need to be focused only once and are thereafter kept in perfect focus automatically. First images and spectra from VISIR ESO PR Photo 16c/04 Colour composite of the star forming region G333.6-0.2 [Preview - JPEG: 400 x 477 pix - 78k] [Normal - JPEG: 800 x 954 pix - 191k] ESO PR Photo 16d/04 Colour composite of the Galactic Centre [Preview - JPEG: 400 x 478 pix - 159k] [Normal - JPEG: 800 x 955 pix - 348k] Caption: ESO PR Photo 16c/04 is a colour composite image of the visually obscured G333.6-0.2 star-forming region at a distance of nearly 10,000 light-years in our Milky Way galaxy. This image was made by combining three digital images of the intensity of the infrared emission at wavelengths of 11.3 μm (one of the Polycyclic Aromatic Hydrocarbon features, coded blue), 12.8 μm (an emission line of [NeII], coded green) and 19 μm (warm dust emission, coded red). Each pixel subtends 0.127 arcsec and the total field is ~33 x 33 arcsec with North at the top and East to the left. The total integration times were 13 seconds at the shortest and 35 seconds at the longer wavelengths. The brighter spots locate regions where the dust, which obscures all the visible light, has been heated by recently formed stars. ESO PR Photo 16d/04 shows another colour composite, this time of the Galactic Centre at a distance of about 30,000 light-years. It was made by combining images in filters centred at 8.6 μm (Polycyclic Aromatic Hydrocarbon molecular feature - coded blue), 12.8 μm ([NeII] - coded green) and 19.5 μm (coded red). Each pixel subtends 0.127 arcsec and the total field is ~33 x 33 arcsec with North at the top and East to the left. Total integration times were 300, 160 and 300 s for the 3 filters, respectively. 
This region is very rich, full of stars, dust, and ionised and molecular gas. One of the scientific goals will be to detect and monitor the signal from the black hole at the centre of our galaxy. ESO PR Photo 16e/04 The Ant Planetary Nebula at 12.8 μm [Preview - JPEG: 400 x 477 pix - 77k] [Normal - JPEG: 800 x 954 pix - 182k] Caption: ESO PR Photo 16e/04 is an image of the "Ant" Planetary Nebula (Mz3) in the narrow-band filter centred at wavelength 12.8 μm. The scale is 0.127 arcsec/pixel and the total field-of-view is 33 x 33 arcsec, with North at the top and East to the left. The total integration time was 200 seconds. Note the diffraction rings around the central star, which confirm that the maximum spatial resolution possible with the 8.2-m telescope is being achieved. ESO PR Photo 16f/04 The starburst galaxy He2-10 at 11.3 μm [Preview - JPEG: 400 x 477 pix - 69k] [Normal - JPEG: 800 x 954 pix - 172k] Caption: ESO PR Photo 16f/04 is an image at wavelength 11.3 μm of the "nearby" (distance about 30 million light-years) blue compact galaxy He2-10, which is actively forming stars. The scale is 0.127 arcsec per pixel and the full field covers 15 x 15 arcsec with North at the top and East on the left. The total integration time for this observation was one hour. Several star-forming regions are detected, as well as diffuse emission that was unknown until these VISIR observations. The star-forming regions on the left of the image are not visible in optical images. ESO PR Photo 16g/04 High-resolution spectrum of G333.6-0.2 around 12.8 μm [Preview - JPEG: 652 x 400 pix - 123k] [Normal - JPEG: 1303 x 800 pix - 277k] Caption: ESO PR Photo 16g/04 is a reproduction of a high-resolution spectrum of the Ne II line (ionised Neon) at 12.8135 μm of the star-forming region G333.6-0.2 shown in ESO PR Photo 16c/04. This spectrum reveals the complex motions of the ionised gas in this region. 
The images are 256 x 256 frames of 50 x 50 micron pixels. The "field" direction is horizontal, with a total slit length of 32.5 arcsec; North is left and South is to the right. The dispersion direction is vertical, with the wavelength increasing downward. The total integration time was 80 sec. ESO PR Photo 16h/04 High-resolution spectrum of the Ant nebula around 12.8 μm [Preview - JPEG: 610 x 400 pix - 354k] [Normal - JPEG: 1219 x 800 pix - 901k] Caption: ESO PR Photo 16h/04 is a reproduction of a high-resolution spectrum of the Ne II line (ionised Neon) at 12.8135 microns of the Ant Planetary Nebula, also known as Mz-3, shown in ESO PR Photo 16e/04. The technical details are similar to those of ESO PR Photo 16g/04. The total integration time was 120 sec. The photos above resulted from some of the first observational tests with VISIR. PR Photo 16c/04 shows the scientific "First Light" image, obtained one day later on April 30th, of a visually obscured star forming region nearly 10,000 light-years away in our galaxy, the Milky Way. The picture shown here is a false-colour image made by combining three digital images of the intensity of the infrared emission from this region at wavelengths of 11.3 μm (one of the Polycyclic Aromatic Hydrocarbon - PAH - features), 12.8 μm (an emission line of ionised neon) and 19 μm (cool dust emission). Ten times sharper Until now, an elegant way to avoid the problems caused by the emission and absorption of the atmosphere was to fly infrared telescopes on satellites, as was done in the highly successful IRAS and ISO missions and currently the Spitzer observatory. For both technical and cost reasons, however, such telescopes have so far been limited to only 60-85 cm in diameter. While very sensitive, therefore, the spatial resolution (sharpness) delivered by these telescopes is 10 times worse than that of the 8.2-m diameter VLT telescopes. 
They have also not been equipped with the very high spectral resolution capability that is a feature of the VISIR instrument, which is thus expected to remain the instrument of choice for a wide range of studies for many years to come despite the competition from space. More information [1]: The consortium of institutes responsible for building the VISIR instrument under contract to ESO comprises the CEA/DSM/DAPNIA (Saclay, France) - led by the Principal Investigator (PI), Pierre-Olivier Lagage - and the Netherlands Foundation for Research in Astronomy/ASTRON (Dwingeloo, The Netherlands), with Jan-Willem Pel from Groningen University as Co-PI for the spectrometer. [2]: Stellar radiation on its way to the observer is also affected by the turbulence of the Earth's atmosphere. This is the effect which makes the stars twinkle to the human eye. While the general public enjoys this phenomenon as something that makes the night sky interesting and entertaining, the twinkling is a major concern for amateur and professional astronomers, as it smears out the optical images. Infrared radiation is less affected by this effect. Therefore an instrument like VISIR can make full use of the extremely high optical quality of modern telescopes like the VLT. [3]: Observations from the ground at wavelengths of 10 to 20 μm are particularly difficult because this is the wavelength region in which both the telescope and the atmosphere emit most strongly. In order to minimize this effect, the images shown here were made by tilting the telescope secondary mirror every few seconds (chopping) and offsetting the whole telescope every minute (nodding), so that the unwanted telescope and sky background emission could be measured and subtracted from the science images faster than it varies.
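The chop-nod "double difference" described in note [3] can be sketched numerically. This is an illustrative toy (not ESO pipeline code): each chop pair subtracts the bright sky-plus-telescope background, and in practice the pair is repeated at a second nod position so that chop-dependent optical offsets (not simulated here) cancel as well.

```python
import numpy as np

# Toy chop-nod simulation: a faint source sitting on a background
# ten thousand times brighter, recovered by differencing frames.
rng = np.random.default_rng(0)

def make_frame(with_source, background=1.0e4, noise=1.0e-3):
    frame = background * np.ones((8, 8)) + noise * rng.standard_normal((8, 8))
    if with_source:
        frame[4, 4] += 1.0   # faint point source, 10^4 x fainter than the sky
    return frame

# Nod position A: source in the "on" chop beam; nod position B: likewise.
a_on, a_off = make_frame(True), make_frame(False)
b_on, b_off = make_frame(True), make_frame(False)

# Average of the two background-subtracted chop pairs.
signal = ((a_on - a_off) + (b_on - b_off)) / 2.0
```

The recovered `signal` array is near zero everywhere except the source pixel, despite the overwhelming background in every raw frame.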

  11. Smart cloud system with image processing server in diagnosing brain diseases dedicated for hospitals with limited resources.

    PubMed

    Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny

    2017-01-01

    The use of medical imaging in diagnosing brain disease is growing. The challenges relate to the large size of the data and the complexity of the image processing. High standards of hardware and software are demanded, which can typically be provided only in large hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases in hospitals with limited infrastructure. The expertise of neurologists was first embedded in the cloud server to conduct an automatic diagnosis in real time, using an image processing technique developed with the ITK library and a web service. Users upload images through a website and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm² and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.

  12. Automatic rice crop height measurement using a field server and digital image processing.

    PubMed

    Sritarapipat, Tanakorn; Rakwatin, Preesan; Kasetkasem, Teerasit

    2014-01-07

    Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height based on images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be indirectly measured from the images by measuring the height of the marker bar compared to the height of the initial marker bar. Four digital image processing steps are employed to automatically measure the rice crop height: band selection, filtering, thresholding, and height measurement. Band selection is used to remove redundant features. Filtering extracts significant features of the marker bar. The thresholding method is applied to separate objects and boundaries of the marker bar versus other areas. The marker bar is detected and compared with the initial marker bar to measure the rice crop height. Our experiment used a field server with a digital camera to continuously monitor a rice field located in Suphanburi Province, Thailand. The experimental results show that the proposed method measures rice crop height effectively, with no human intervention required.
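    The four steps above (band selection, filtering, thresholding, height measurement) can be sketched on a synthetic scene. All thresholds, geometry, and names here are illustrative assumptions, not the paper's implementation; the idea is that the crop height equals the part of the marker bar the canopy now hides.

```python
import numpy as np

def visible_marker_rows(rgb: np.ndarray, col: slice, thresh: float) -> int:
    band = rgb[..., 2].astype(float)          # 1) band selection (blue bar)
    strip = band[:, col]
    k = np.ones(3) / 3.0                      # 2) simple vertical smoothing
    smooth = np.apply_along_axis(lambda v: np.convolve(v, k, 'same'), 0, strip)
    mask = smooth > thresh                    # 3) thresholding
    return int(mask.any(axis=1).sum())        # 4) rows where the bar shows

def crop_height_cm(rgb, col, thresh, full_marker_rows, cm_per_row):
    """Crop height = initial marker extent minus what is still visible."""
    hidden = full_marker_rows - visible_marker_rows(rgb, col, thresh)
    return hidden * cm_per_row

# Synthetic 200-row image: marker bar in columns 10..12, top 80 rows
# visible, the lower 120 rows hidden by the rice canopy.
img = np.zeros((200, 64, 3))
img[0:80, 10:13, 2] = 1.0
print(crop_height_cm(img, slice(10, 13), 0.5,
                     full_marker_rows=200, cm_per_row=0.5))  # -> 60.0
```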

  13. Fully automated rodent brain MR image processing pipeline on a Midas server: from acquired images to region-based statistics.

    PubMed

    Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K; Oguz, Ipek

    2013-01-01

    Magnetic resonance imaging (MRI) of rodent brains enables study of the development and the integrity of the brain under certain conditions (alcohol, drugs etc.). However, these images are difficult to analyze for biomedical researchers with limited image processing experience. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a data set using this pipeline and demonstrate how this pipeline can be used to find differences between populations.
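    The final pipeline stage, region-based statistics, has a natural minimal form (an assumed sketch, not the Midas server code): given a registered image and an integer label map propagated from the parcellated average, aggregate intensities per region.

```python
import numpy as np

def region_stats(image: np.ndarray, labels: np.ndarray) -> dict:
    """Per-region mean, standard deviation, and voxel count; label 0 is
    treated as background by convention here."""
    stats = {}
    for region in np.unique(labels):
        if region == 0:
            continue
        voxels = image[labels == region]
        stats[int(region)] = {"mean": float(voxels.mean()),
                              "std": float(voxels.std()),
                              "volume_voxels": int(voxels.size)}
    return stats

# Tiny worked example: two labelled regions plus background.
img = np.array([[1.0, 2.0], [3.0, 10.0]])
lab = np.array([[1, 1], [2, 0]])
print(region_stats(img, lab))
```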

  14. An FPGA-Based People Detection System

    NASA Astrophysics Data System (ADS)

    Nair, Vinod; Laprise, Pierre-Olivier; Clark, James J.

    2005-12-01

    This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about [InlineEquation not available: see fulltext.] frames per second on a Virtex-II 2V1000 using a MicroBlaze processor running at [InlineEquation not available: see fulltext.], communicating with dedicated hardware over FSL links.
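    The partial inverse DCT idea can be illustrated numerically. The sketch below is my own construction, not the paper's FPGA implementation: instead of a full 8-point 2D IDCT per JPEG block, a 2-point IDCT of the 2x2 lowest-frequency coefficients yields a 4x-downscaled thumbnail that is far cheaper to compute yet can still feed a detector.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II transform matrix of size n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    t = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    t[0, :] = np.sqrt(1.0 / n)
    return t

T8 = dct_matrix(8)

def partial_idct(coeffs8: np.ndarray, k: int = 2) -> np.ndarray:
    """k x k thumbnail of an 8x8 block from its k x k low-frequency DCT
    coefficients; the k/8 scale preserves the block's mean (DC) level."""
    tk = dct_matrix(k)
    return tk.T @ ((k / 8.0) * coeffs8[:k, :k]) @ tk

block = np.outer(np.arange(8.0), np.ones(8))   # smooth synthetic 8x8 block
coeffs = T8 @ block @ T8.T                     # forward 2D DCT, as in JPEG
thumb = partial_idct(coeffs, k=2)              # 2x2 approximation
```

Because only 4 of the 64 coefficients are touched, both the arithmetic and the on-chip storage shrink accordingly, which matches the space/throughput argument made in the abstract.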

  15. Development of an electronic medical report delivery system to 3G GSM mobile (cellular) phones for a medical imaging department.

    PubMed

    Lim, Eugene Y; Lee, Chiang; Cai, Weidong; Feng, Dagan; Fulham, Michael

    2007-01-01

    Medical practice is characterized by a high degree of heterogeneity in collaborative and cooperative patient care. Fast and effective communication between medical practitioners can improve patient care. In medical imaging, the fast delivery of medical reports to referring medical practitioners is a major component of cooperative patient care. Recently, mobile phones have been actively deployed in telemedicine applications. The mobile phone is an ideal medium for achieving faster delivery of reports to the referring medical practitioners. In this study, we developed an electronic medical report delivery system from a medical imaging department to the mobile phones of the referring doctors. The system extracts a text summary of the medical report and a screen capture of the diagnostic medical image in JPEG format, which are transmitted to 3G GSM mobile phones.

  16. [Development of an original computer program FISHMet: use for molecular cytogenetic diagnosis and genome mapping by fluorescent in situ hybridization (FISH)].

    PubMed

    Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G

    2000-08-01

    The original software FISHMet has been developed and tested to improve the efficiency of diagnosis of hereditary diseases caused by chromosome aberrations and of chromosome mapping by the fluorescent in situ hybridization (FISH) method. The program supports creation and analysis of pseudocolor chromosome images and hybridization signals under Windows 95, and computer analysis and editing of the results of pseudocolor hybridization in situ, including successive superposition of the initial black-and-white images acquired through fluorescent filters (blue, green, and red), and editing of each image individually or of a combined pseudocolor image in BMP, TIFF, and JPEG formats. Components of a computer image analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescence microscopes; COHU 4910 and Sanyo VCB-3512P CCD cameras; Miro-Video, Scion LG-3, and VG-5 image capture boards; and Pentium 100 and Pentium 200 computers) and specialized software for image capture and visualization (Scion Image PC and Video-Cup) were used with good results in the study.

  17. Image acquisition unit for the Mayo/IBM PACS project

    NASA Astrophysics Data System (ADS)

    Reardon, Frank J.; Salutz, James R.

    1991-07-01

    The Mayo Clinic and IBM Rochester, Minnesota, have jointly developed a picture archiving, distribution and viewing system for use with Mayo's CT and MRI imaging modalities. Images are retrieved from the modalities and sent over the Mayo city-wide token ring network to optical storage subsystems for archiving, and to server subsystems for viewing on image review stations. Images may also be retrieved from archive and transmitted back to the modalities. The subsystems that interface to the modalities and communicate with the other components of the system are termed Image Acquisition Units (IAUs). The IAUs are IBM Personal System/2 (PS/2) computers with specially developed software. They operate independently in a network of cooperative subsystems and communicate with the modalities, archive subsystems, image review server subsystems, and a central subsystem that maintains information about the content and location of images. This paper provides a detailed description of the function and design of the Image Acquisition Units.

  18. High-Performance Tiled WMS and KML Web Server

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
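    The reason such a server is fast is that requests must comply with a fixed tile grid, so serving a tile reduces to an index computation plus a read from the tile dataset. A hypothetical sketch of that mapping (the grid constant and function names are my assumptions, not the module's API):

```python
# Map a WMS BBOX to tile-grid indices, rejecting off-grid requests.
TILE_DEG = 0.3515625   # e.g. 512-pixel tiles at ~0.000687 deg/pixel

def tile_index(bbox, tile_deg=TILE_DEG):
    """bbox = (west, south, east, north) in degrees. Returns (col, row)
    on the published grid, or raises if the request does not align."""
    west, south, east, north = bbox
    col, row = west / tile_deg, south / tile_deg
    on_grid = (col == int(col) and row == int(row)
               and abs((east - west) - tile_deg) < 1e-9
               and abs((north - south) - tile_deg) < 1e-9)
    if not on_grid:
        raise ValueError("BBOX does not match the tile grid")
    return int(col), int(row)

print(tile_index((0.703125, 0.3515625, 1.0546875, 0.703125)))  # -> (2, 1)
```

Arbitrary viewports are then assembled client-side from several such tiles, which is also what the generated KML super-overlay describes.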

  19. Server-based enterprise collaboration software improves safety and quality in high-volume PET/CT practice.

    PubMed

    McDonald, James E; Kessler, Marcus M; Hightower, Jeremy L; Henry, Susan D; Deloney, Linda A

    2013-12-01

    With increasing volumes of complex imaging cases and rising economic pressure on physician staffing, timely reporting will become progressively challenging. Current and planned iterations of PACS and electronic medical record systems do not offer workflow management tools to coordinate delivery of imaging interpretations with the needs of the patient and ordering physician. The adoption of a server-based enterprise collaboration software system by our Division of Nuclear Medicine has significantly improved our efficiency and quality of service.

  20. Black Hole in Search of a Home

    NASA Astrophysics Data System (ADS)

    2005-09-01

    Astronomers Discover Bright Quasar Without Massive Host Galaxy An international team of astronomers [1] used two of the most powerful astronomical facilities available, the ESO Very Large Telescope (VLT) at Cerro Paranal and the Hubble Space Telescope (HST), to conduct a detailed study of 20 low-redshift quasars. For 19 of them, they found, as expected, that these supermassive black holes are surrounded by a host galaxy. But when they studied the bright quasar HE0450-2958, located some 5 billion light-years away, they could not find evidence for an encircling galaxy. This, the astronomers suggest, may indicate a rare case of collision between a seemingly normal spiral galaxy and a much more exotic object harbouring a very massive black hole. With masses up to hundreds of millions of times that of the Sun, "supermassive" black holes are among the most tantalizing objects known. Hiding in the centre of most large galaxies, including our own Milky Way (see ESO PR 26/03), they sometimes manifest themselves by devouring matter engulfed from their surroundings. Shining out to the largest distances, they are then called "quasars" or "QSOs" (for "quasi-stellar objects"), as they were initially confused with stars. Decades of observations of quasars have suggested that they are always associated with massive host galaxies. However, observing the host galaxy of a quasar is challenging work, because the quasar radiates so energetically that its host galaxy is hard to detect in the glare. ESO PR Photo 28a/05 Two Quasars with their Host Galaxy [Preview - JPEG: 400 x 760 pix - 82k] [Normal - JPEG: 800 x 1520 pix - 395k] [Full Res - JPEG: 1722 x 3271 pix - 4.0M] Caption: ESO PR Photo 28a/05 shows two examples of quasars from the sample studied by the astronomers, where the host galaxy is obvious. In each case, the quasar is the bright central spot. 
The host of HE1239-2426 (left), a z=0.082 quasar, displays large spiral arms, while the host of HE1503+0228 (right), at a redshift of 0.135, is fuzzier and shows only hints of spiral arms. Although these particular objects are rather close to us and therefore constitute easy targets, their hosts would still be perfectly visible at much higher redshift, including at distances as large as that of HE0450-2958 (z=0.285). The observations were done with the ACS camera on the HST. ESO PR Photo 28b/05 The Quasar without a Home: HE0450-2958 [Preview - JPEG: 400 x 760 pix - 53k] [Normal - JPEG: 800 x 1520 pix - 197k] [Full Res - JPEG: 1718 x 3265 pix - 1.5M] Caption: (Left) HST image of the z=0.285 quasar HE0450-2958. No obvious host galaxy centred on the quasar is seen; only a strongly disturbed, star-forming companion galaxy is seen near the top of the image. (Right) The same image after applying an efficient image sharpening method known as MCS deconvolution. In contrast to the usual cases, such as the ones shown in ESO PR Photo 28a/05, the quasar is not situated at the centre of an extended host galaxy, but on the edge of a compact structure, whose spectra (see ESO PR Photo 28c/05) show it to be composed of gas ionised by the quasar radiation. This gas may have been captured through a collision with the star-forming galaxy. The star indicated in the figure is a nearby Galactic star seen by chance in the field of view. To overcome this problem, the astronomers devised a new and highly efficient strategy. Using ESO's VLT for spectroscopy and the HST for imaging, they observed their quasars at the same time as a reference star. Simultaneous observation of a star allowed them to measure as accurately as possible the shape of the quasar point source in spectra and images, and then to separate the quasar light from the other contribution, i.e. from the underlying galaxy itself. 
This very powerful image and spectrum sharpening method ("MCS deconvolution") was applied to these data in order to detect the finest details of the host galaxy (see e.g. ESO PR 19/03). Using this efficient technique, the astronomers could detect a host galaxy for all but one of the quasars they studied. No stellar environment was found for HE0450-2958, suggesting that if any host galaxy exists, it must either have a luminosity at least six times fainter than expected a priori from the quasar's observed luminosity, or a radius smaller than about 300 light-years. Typical radii for quasar host galaxies range between 6,000 and 50,000 light-years, i.e. they are at least 20 to 170 times larger. "With the data we managed to secure with the VLT and the HST, we would have been able to detect a normal host galaxy", says Pierre Magain (Université de Liège, Belgium), lead author of the paper reporting the study. "We must therefore conclude that, contrary to our expectations, this bright quasar is not surrounded by a massive galaxy." Instead, the astronomers detected just beside the quasar a bright cloud about 2,500 light-years in size, which they baptized "the blob". The VLT observations show this cloud to be composed only of gas ionised by the intense radiation coming from the quasar. It is probably the gas of this cloud that is feeding the supermassive black hole, allowing it to become a quasar. ESO PR Photo 28c/05 Spectrum of Quasar HE0450-2958, the Blob and the Companion Galaxy (FORS/VLT) [Preview - JPEG: 400 x 561 pix - 112k] [Normal - JPEG: 800 x 1121 pix - 257k] [HiRes - JPEG: 2332 x 3268 pix - 1.1M] Caption: ESO PR Photo 28c/05 presents the spectra of the three objects indicated in ESO PR Photo 28b/05, as obtained with FORS1 on ESO's Very Large Telescope. The spectrum of the companion galaxy, shown in the top panel, reveals strong star formation. 
Thanks to the image sharpening process, it has been possible to separate very well the spectrum of the quasar (centre) from that of the blob (bottom). The spectrum of the blob shows exclusively strong narrow emission lines with properties indicative of ionisation by the quasar light. There is no trace of stellar light, down to very faint levels, in the surroundings of the quasar. A strongly perturbed galaxy, showing all the signs of a recent collision, is also seen on the HST images 2 arcseconds away (corresponding to about 50,000 light-years), with the VLT spectra showing it to be presently forming stars at a frantic rate. "The absence of a massive host galaxy, combined with the existence of the blob and the star-forming galaxy, leads us to believe that we have uncovered a really exotic quasar," says team member Frédéric Courbin (Ecole Polytechnique Fédérale de Lausanne, Switzerland). "There is little doubt that a burst in the formation of stars in the companion galaxy and the quasar itself have been ignited by a collision that must have taken place about 100 million years ago. What happened to the putative quasar host remains unknown." HE0450-2958 constitutes a challenging case of interpretation. The astronomers propose several possible explanations that will need to be investigated further and confronted with new observations. Has the host galaxy been completely disrupted as a result of the collision? It is hard to imagine how that could happen. Has an isolated black hole captured gas while crossing the disc of a spiral galaxy? This would require very special conditions and would probably not have caused such a tremendous perturbation as is observed in the neighbouring galaxy. Another intriguing hypothesis is that the galaxy harbouring the black hole was almost exclusively made of dark matter. 
"Whatever the solution of this riddle, the strong observable fact is that the quasar host galaxy, if any, is much too faint", says team member Knud Jahnke (Astrophysikalisches Institut Potsdam, Germany). The report on HE0450-2958 is published in the September 15, 2005 issue of the journal Nature ("Discovery of a bright quasar without a massive host galaxy" by Pierre Magain et al.).

  1. Compression strategies for LiDAR waveform cube

    NASA Astrophysics Data System (ADS)

    Jóźków, Grzegorz; Toth, Charles; Quirk, Mihaela; Grejner-Brzezinska, Dorota

    2015-01-01

    Full-waveform LiDAR data (FWD) provide a wealth of information about the shape and materials of the surveyed areas. Unlike discrete-return data, which retain only a few strong returns, FWD generally keep the whole signal at all times, regardless of signal intensity. Hence, FWD will play an increasingly important role in mapping and beyond, notably in the much-desired classification of data in raw format. Full-waveform systems currently perform only the recording of the waveform data at the acquisition stage; return extraction is mostly deferred to post-processing. Although the full waveform preserves most of the details of the real data, it presents a serious practical challenge to wide use: much larger datasets than those from classical discrete-return systems. Besides requiring more storage space, the data rate of FWD may also limit the pulse rate on systems that cannot store data fast enough, and thus reduce the perceived system performance. This work introduces a waveform cube model to compress waveforms in selected subsets of the cube, aimed at decreasing storage while maintaining the maximum pulse rate of FWD systems. In our experiments, the waveform cube is compressed using classical methods for 2D imagery, which are tested to assess the feasibility of the proposed solution. The spatial distribution of airborne waveform data is irregular; however, the manner of FWD acquisition allows the organization of the waveforms in a regular 3D structure similar to familiar multi-component imagery, such as hyperspectral cubes or 3D volumetric tomography scans. This study presents a performance analysis of several lossy compression methods applied to the LiDAR waveform cube, including JPEG-1, JPEG-2000, and PCA-based techniques. A wide range of tests performed on real airborne datasets has demonstrated the benefits of the JPEG-2000 standard, where high compression rates incur fairly small data degradation. 
In addition, JPEG-2000-compliant compression can be fast and thus usable in real-time systems, as compressed data sequences can be formed progressively during waveform data collection. We conclude from our experiments that 2D image compression strategies are feasible and efficient, and might therefore be applied during acquisition by FWD sensors.
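    The PCA-based strategy mentioned above can be sketched with a toy example (my own construction, not the authors' code): treat each recorded waveform as a vector, keep only the leading principal components, and store the small per-waveform score matrix plus the shared basis instead of the raw samples.

```python
import numpy as np

def pca_compress(waveforms, n_components):
    """waveforms: (samples_per_waveform, n_waveforms) array."""
    mean = waveforms.mean(axis=1, keepdims=True)
    centered = waveforms - mean
    u, _, _ = np.linalg.svd(centered, full_matrices=False)
    basis = u[:, :n_components]      # shared principal directions
    scores = basis.T @ centered      # per-waveform compressed codes
    return mean, basis, scores

def pca_decompress(mean, basis, scores):
    return mean + basis @ scores

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 128)[:, None]
# 500 synthetic returns: pulses at a few fixed ranges with random
# amplitudes, so the cube is intrinsically low-rank and compresses well.
pos = rng.choice([0.3, 0.5, 0.7], size=500)
amp = rng.uniform(0.5, 2.0, size=500)
cube = amp * np.exp(-((t - pos) ** 2) / (2 * 0.03 ** 2))

mean, basis, scores = pca_compress(cube, n_components=5)
recon = pca_decompress(mean, basis, scores)
rel_err = np.linalg.norm(cube - recon) / np.linalg.norm(cube)
stored = mean.size + basis.size + scores.size   # floats actually kept
ratio = cube.size / stored                      # ~20x on this toy data
```

Real waveform cubes are far less redundant than this toy, which is why the study weighs PCA against transform codecs such as JPEG-2000.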

  2. Network Consumption and Storage Needs when Working in a Full-Time Routine Digital Environment in a Large Nonacademic Training Hospital.

    PubMed

    Nap, Marius

    2016-01-01

    Digital pathology is indisputably connected with high demands on data traffic and storage. As a consequence, control of the logistic process and insight into the management of both traffic and storage are essential. We monitored data traffic from scanners to server and from server to workstation, and registered storage needs for diagnostic images and additional projects. The results showed that data traffic inside the hospital network (1 Gbps) never exceeded 80 Mbps for scanner-to-server activity, and activity from the server to the workstation took at most 5 Mbps. Data storage per image increased from 300 MB to an average of 600 MB as a result of camera and software updates, and, due to the increased scanning speed, the scanning time was reduced by almost 8 h/day. Introduction of a storage policy of only 12 months for diagnostic images, with rescanning if needed, resulted in a manageable storage window of 45 TB for the period of 1 year. Simple registration tools turned digital pathology into a concise package that allows planning and control. Incorporating retrieval of such information from scanning and storage devices will reduce management's fear of losing control when introducing digital pathology into the daily routine. © 2016 S. Karger AG, Basel.
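    The quoted figures admit a quick consistency check. Assuming the 45 TB yearly window is filled with diagnostic slides at the stated 600 MB average (decimal units and 250 working days per year are my assumptions):

```python
# Capacity implied by a 45 TB / 12-month retention window at 600 MB/slide.
avg_slide_mb = 600
window_tb = 45
slides_per_year = window_tb * 1_000_000 / avg_slide_mb   # 1 TB = 10^6 MB
slides_per_day = slides_per_year / 250                   # working days
print(round(slides_per_year), round(slides_per_day))     # -> 75000 300
```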

  3. JPEG2000-coded image error concealment exploiting convex sets projections.

    PubMed

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: the LL subband, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of previously proposed techniques; the second is less annoying but more difficult to address; the third is often imperceptible. In this paper, we address the problem of concealing the second class of errors, when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering introduced undesired side effects that offset the advantages. This problem was overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.

  4. Web-based system for surgical planning and simulation

    NASA Astrophysics Data System (ADS)

    Eldeib, Ayman M.; Ahmed, Mohamed N.; Farag, Aly A.; Sites, C. B.

    1998-10-01

    Growing scientific knowledge and rapid progress in medical imaging techniques have led to an increasing demand for better and more efficient methods of remote access to high-performance computer facilities. This paper introduces a web-based telemedicine project that provides interactive tools for surgical simulation and planning. The presented approach uses a client-server architecture based on internet technology, in which clients use an ordinary web browser to view, send, receive, and manipulate patients' medical records, while the server uses a supercomputer facility to perform online semi-automatic segmentation, 3D visualization, surgical simulation/planning, and navigation for neuroendoscopic procedures. The supercomputer (SGI ONYX 1000) is located at the Computer Vision and Image Processing Lab, University of Louisville, Kentucky. The system is under development in cooperation with the Department of Neurological Surgery, Alliant Health Systems, Louisville, Kentucky. The server is connected via a network to the Picture Archiving and Communication System at Alliant Health Systems through a DICOM standard interface that enables authorized clients to access patients' images from different medical modalities.

  5. First-Ever Census of Variable Mira-Type Stars in Galaxy Outside the Local Group

    NASA Astrophysics Data System (ADS)

    2003-05-01

    First-Ever Census of Variable Mira-Type Stars in Galaxy Outside the Local Group Summary An international team led by ESO astronomer Marina Rejkuba [1] has discovered more than 1000 luminous red variable stars in the nearby elliptical galaxy Centaurus A (NGC 5128). Brightness changes and periods of these stars were measured accurately and reveal that they are mostly cool long-period variable stars of the so-called "Mira-type". The observed variability is caused by stellar pulsation. This is the first time a detailed census of variable stars has been accomplished for a galaxy outside the Local Group of Galaxies (of which the Milky Way galaxy in which we live is a member). It also opens an entirely new window towards the detailed study of the stellar content and evolution of giant elliptical galaxies. These massive objects are presumed to play a major role in the gravitational assembly of galaxy clusters in the Universe (especially during the early phases). This unprecedented research project is based on near-infrared observations obtained over more than three years with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal Observatory. PR Photo 14a/03: Colour image of the peculiar galaxy Centaurus A. PR Photo 14b/03: Location of the fields in Centaurus A, now studied. PR Photo 14c/03: "Field 1" in Centaurus A (visual light; FORS1). PR Photo 14d/03: "Field 2" in Centaurus A (visual light; FORS1). PR Photo 14e/03: "Field 1" in Centaurus A (near-infrared; ISAAC). PR Photo 14f/03: "Field 2" in Centaurus A (near-infrared; ISAAC). PR Photo 14g/03: Light variation of six variable stars in Centaurus A. PR Photo 14h/03: Light variation of stars in Centaurus A (animated GIF). PR Photo 14i/03: Light curves of four variable stars in Centaurus A. 
Mira-type variable stars Among the stars that are visible in the sky to the unaided eye, roughly one out of three hundred (0.3%) displays brightness variations and is referred to by astronomers as a "variable star". The percentage is much higher among large, cool stars ("red giants"); in fact, almost all luminous stars of that type are variable. Such stars are known as Mira-variables; the name comes from the most prominent member of this class, Omicron Ceti in the constellation Cetus (The Whale), also known as "Stella Mira" (The Wonderful Star). Its brightness changes with a period of 332 days and it is about 1500 times brighter at maximum (visible magnitude 2, one of the fifty brightest stars in the sky) than at minimum (magnitude 10, visible only in small telescopes) [2]. Stars like Omicron Ceti are nearing the end of their life. They are very large, with sizes from a few hundred to about a thousand times that of the Sun. The brightness variation is due to pulsations during which the star's temperature and size change dramatically. In the following evolutionary phase, Mira-variables will shed their outer layers into surrounding space and become visible as planetary nebulae with a hot and compact star (a "white dwarf") at the centre of a nebula of gas and dust (cf. the "Dumbbell Nebula" - ESO PR Photo 38a-b/98). Several thousand Mira-type stars are currently known in the Milky Way galaxy and a few hundred have been found in other nearby galaxies, including the Magellanic Clouds. 
The peculiar galaxy Centaurus A ESO PR Photo 14a/03 [Preview - JPEG: 400 x 451 pix - 53k] [Normal - JPEG: 800 x 903 pix - 528k] [Hi-Res - JPEG: 3612 x 4075 pix - 8.4M] ESO PR Photo 14b/03 [Preview - JPEG: 570 x 400 pix - 52k] [Normal - JPEG: 1140 x 800 pix - 392k] ESO PR Photo 14c/03 [Preview - JPEG: 400 x 451 pix - 61k] [Normal - JPEG: 800 x 903 pix - 768k] ESO PR Photo 14d/03 [Preview - JPEG: 400 x 451 pix - 56k] [Normal - JPEG: 800 x 903 pix - 760k] Captions: PR Photo 14a/03 is a colour composite photo of the peculiar galaxy Centaurus A (NGC 5128), obtained with the Wide-Field Imager (WFI) camera at the ESO/MPG 2.2-m telescope on La Silla. It is based on a total of nine 3-min exposures made on March 25, 1999, through different broad-band optical filters (B(lue) - total exposure time 9 min - central wavelength 456 nm - here rendered as blue; V(isual) - 540 nm - 9 min - green; I(nfrared) - 784 nm - 9 min - red); it was prepared from files in the ESO Science Data Archive by ESO astronomer Benoît Vandame. The elliptical shape and the central dust band, the imprint of a galaxy collision, are well visible. PR Photo 14b/03 identifies the two regions of Centaurus A (the rectangles in the upper left and lower right inserts) in which a search for variable stars was made during the present research project: "Field 1" is located in an area north-east of the centre in which many young stars are present. This is also the direction in which an outflow ("jet") is seen on deep optical and radio images. "Field 2" is positioned in the galaxy's halo, south of the centre. High-resolution, very deep colour photos of these two fields and their immediate surroundings are shown in PR Photos 14c-d/03. They were produced by means of CCD frames obtained in July 1999 through U- and V-band optical filters with the VLT FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope on Paranal. 
Note the great variety of object types and colours, including many background galaxies which are seen through these less dense regions of Centaurus A. The total exposure time was 30 min in each filter and the seeing was excellent, 0.5 arcsec. The original pixel size is 0.196 arcsec and the fields measure 6.7 x 6.7 arcmin² (2048 x 2048 pix²). North is up and East is left on all photos. Centaurus A (NGC 5128) is the nearest giant galaxy, at a distance of about 13 million light-years. It is located outside the Local Group of Galaxies to which our own galaxy, the Milky Way, and its satellite galaxies, the Magellanic Clouds, belong. Centaurus A is seen in the direction of the southern constellation Centaurus. It is of elliptical shape and is currently merging with a companion galaxy, making it one of the most spectacular objects in the sky, cf. PR Photo 14a/03. It possesses a very heavy black hole at its centre (see ESO PR 04/01) and is a source of strong radio and X-ray emission. During the present research programme, two regions in Centaurus A were searched for stars of variable brightness; they are located in the periphery of this peculiar galaxy, cf. PR Photos 14b-d/03. An outer field ("Field 1") coincides with a stellar shell with many blue and luminous stars produced by the on-going galaxy merger; it lies at a distance of 57,000 light-years from the centre. The inner field ("Field 2") is more crowded and is situated at a projected distance of about 30,000 light-years from the centre. 
Three years of VLT observations ESO PR Photo 14e/03 [Preview - JPEG: 400 x 447 pix - 120k] [Normal - JPEG: 800 x 894 pix - 992k] ESO PR Photo 14f/03 [Preview - JPEG: 400 x 450 pix - 96k] [Normal - JPEG: 800 x 899 pix - 912k] Caption: PR Photos 14e-f/03 are colour composites of two small fields ("Field 1" and "Field 2") in the peculiar galaxy Centaurus A (NGC 5128), based on exposures through three near-infrared filters (the J-, H- and K-bands at wavelengths 1.2, 1.6 and 2.2 µm, respectively) with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal Observatory. The corresponding areas are outlined within the two inserts in PR Photo 14b/03 and may be compared with the visual images from FORS1 (PR Photos 14c-d/03). These ISAAC photos are the deepest near-infrared images ever obtained of this galaxy and show thousands of its stars of different colours. In the present colour-coding, the redder an image, the cooler is the star. The original pixel size is 0.15 arcsec and both fields measure 2.5 x 2.5 arcmin². North is up and East is left. Under normal circumstances, any team of professional astronomers will have access to the largest telescopes in the world for only a very limited number of consecutive nights each year. However, extensive searches for variable stars like the present one require repeated observations lasting minutes-to-hours over periods of months-to-years. It is thus not feasible to perform such observations in the classical way, in which the astronomers travel to the telescope each time. Fortunately, the operational system of the VLT at the ESO Paranal Observatory (Chile) is also geared to encompass this kind of long-term programme. Between April 1999 and July 2002, the 8.2-m VLT ANTU telescope on Cerro Paranal in Chile was operated in service mode on many occasions to obtain K-band images of the two fields in Centaurus A by means of the near-infrared ISAAC multi-mode instrument. 
Each field was observed over 20 times in the course of this three-year period; some of the images were obtained during exceptional seeing conditions of 0.30 arcsec. One set of complementary optical images was obtained with the FORS1 multi-mode instrument (also on VLT ANTU) in July 1999. Each image from the ISAAC instrument covers a sky field measuring 2.5 x 2.5 arcmin². The combined images, encompassing a total exposure of 20 hours, are indeed the deepest infrared images ever made of the halo of any galaxy as distant as Centaurus A, about 13 million light-years. Discovering one thousand Mira variables ESO PR Photo 14g/03 [Preview - JPEG: 400 x 480 pix - 61k] [Normal - JPEG: 800 x 961 pix - 808k] ESO PR Photo 14h/03 [Animated GIF: 263 x 267 pix - 56k] ESO PR Photo 14i/03 [Preview - JPEG: 480 x 400 pix - 33k] [Normal - JPEG: 959 x 800 pix - 152k] Captions: PR Photo 14g/03 shows a zoomed-in area within "Field 2" in Centaurus A, from the ISAAC colour image shown in PR Photo 14e/03. Nearly all red stars in this area are of the variable Mira-type. The brightness variation of some stars (labelled A-D) is demonstrated in the animated-GIF image PR Photo 14h/03. The corresponding light curves (brightness over the pulsation period) are shown in PR Photo 14i/03. Here the abscissa indicates the pulsation phase (one full period corresponds to the interval from 0 to 1) and the ordinate unit is the near-infrared Ks-magnitude. One magnitude corresponds to a difference in brightness of a factor of 2.5. Once the lengthy observations were completed, two further steps were needed to identify the variable stars in Centaurus A. First, each ISAAC frame was individually processed to identify the thousands and thousands of faint point-like images (stars) visible in these fields. 
Next, all images were compared using a special software package ("DAOPHOT") to measure the brightness of all these stars in the different frames, i.e., as a function of time. While most stars in these fields were, as expected, found to have constant brightness, more than 1000 stars displayed variations in brightness with time; this is by far the largest number of variable stars ever discovered in a galaxy outside the Local Group of Galaxies. The detailed analysis of this enormous dataset took more than a year. Most of the variable stars were found to be of the Mira-type and their light curves (brightness over the pulsation period) were measured, cf. PR Photo 14i/03. For each of them, the characterising parameters, the period (days) and brightness amplitude (magnitudes), were determined. A catalogue of the newly discovered variable stars in Centaurus A has now been made available to the astronomical community via the European research journal Astronomy & Astrophysics. Marina Rejkuba is pleased and thankful: "We are really very fortunate to have carried out this ambitious project so successfully. It all depended critically on different factors: the repeated granting of crucial observing time by the ESO Observing Programmes Committee over different observing periods in the face of rigorous international competition, the stability and reliability of the telescope and the ISAAC instrument over a period of more than three years and, not least, the excellent quality of the service mode observations, so efficiently performed by the staff at the Paranal Observatory." What have we learned about Centaurus A? The present study of variable stars in this giant elliptical galaxy is the first-ever of its kind. Although the evaluation of the very large observational dataset is still not finished, it has already led to a number of very useful scientific results. 
Confirmation of the presence of an intermediate-age population Based on earlier research (optical and near-IR colour-magnitude diagrams of the stars in the fields), the present team of astronomers had previously detected the presence of intermediate-age and young stellar populations in the halo of this galaxy. The youngest stars appear to be aligned with the powerful jet produced by the massive black hole at the centre. Some of the very luminous red variable stars now discovered confirm the presence of a population of intermediate-age stars in the halo of this galaxy. This also contributes to our understanding of how giant elliptical galaxies form. New measurement of the distance to Centaurus A The pulsation of Mira-type variable stars obeys a period-luminosity relation: the longer its period, the more luminous a Mira-type star is. This fact makes it possible to use Mira-type stars as "standard candles" (objects of known intrinsic luminosity) for distance determinations. They have in fact often been used in this way to measure accurate distances to more nearby objects, e.g., to individual clusters of stars and to the centre of our Milky Way galaxy, and also to galaxies in the Local Group, in particular the Magellanic Clouds. This method works particularly well with infrared measurements, and the astronomers were now able to measure the distance to Centaurus A in this new way. They found 13.7 ± 1.9 million light-years, in general agreement with and thus confirming other methods. Study of stellar population gradients in the halo of a giant elliptical galaxy The two fields studied here contain different populations of stars. A clear dependence on the location (a "gradient") within the galaxy is observed, which can be due to differences in chemical composition or age, or to a combination of both. Understanding the cause of this gradient will provide additional clues to how Centaurus A - and indeed all giant elliptical galaxies - was formed and has since evolved. 
Comparison with other well-known nearby galaxies Past searches have discovered Mira-type variable stars throughout the Milky Way, our home galaxy, and in other nearby galaxies in the Local Group. However, there are no giant elliptical galaxies like Centaurus A in the Local Group, and this is the first time it has been possible to identify this kind of star in that type of galaxy. The present investigation now opens a new window towards studies of the stellar constituents of such galaxies.
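The "standard candle" logic described above is a short chain of arithmetic: a period-luminosity relation gives the absolute magnitude, the distance modulus m - M follows, and the modulus converts to a distance. The sketch below uses a hypothetical K-band calibration M_K = a·log10(P) + b; the coefficients and the helper name are placeholders, not the calibration used by the study.

```python
import math

# Hypothetical P-L calibration (illustrative slope/zero-point only).
A, B = -3.5, 1.0

def mira_distance_ly(period_days, apparent_k_mag):
    """Distance from a Mira's period and apparent K magnitude."""
    M_K = A * math.log10(period_days) + B       # absolute mag from P-L relation
    mu = apparent_k_mag - M_K                   # distance modulus m - M
    d_pc = 10 ** ((mu + 5.0) / 5.0)             # mu = 5*log10(d_pc) - 5
    return d_pc * 3.2616                        # parsecs -> light-years

d = mira_distance_ly(400.0, 20.0)               # mega-light-year scale
```

Averaging such per-star distances over many Miras is what beats down the scatter and yields a result like the quoted 13.7 ± 1.9 million light-years.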

  6. And Then There Were Three...!

    NASA Astrophysics Data System (ADS)

    2000-01-01

    VLT MELIPAL Achieves Successful "First Light" in Record Time This was a night to remember at the ESO Paranal Observatory! For the first time, three 8.2-m VLT telescopes were observing in parallel, with a combined mirror surface of nearly 160 m². In the evening of January 26, the third 8.2-m Unit Telescope, MELIPAL ("The Southern Cross" in the Mapuche language), was pointed to the sky for the first time and successfully achieved "First Light". During this night, a number of astronomical exposures were made that served to provisionally evaluate the performance of the new telescope. The ESO staff expressed great satisfaction with MELIPAL and there were broad smiles all over the mountain. The first images ESO PR Photo 04a/00 [Preview - JPEG: 400 x 352 pix - 95k] [Normal - JPEG: 800 x 688 pix - 110k] Caption: ESO PR Photo 04a/00 shows the "very first light" image for MELIPAL. It is that of a relatively bright star, as recorded by the Guide Probe at about 21:50 hrs local time on January 26, 2000. It is a 0.1-sec exposure, obtained after preliminary adjustment of the optics during a few iterations with the computer-controlled "active optics" system. The image quality is measured as 0.46 arcsec FWHM (Full-Width at Half-Maximum). ESO PR Photo 04b/00 [Preview - JPEG: 400 x 429 pix - 39k] [Normal - JPEG: 885 x 949 pix - 766k] Caption: ESO PR Photo 04b/00 shows the central region of the Crab Nebula, the famous supernova remnant in the constellation Taurus (The Bull). It was obtained early in the night of "First Light" with the third 8.2-m VLT Unit Telescope, MELIPAL. It is a composite of several 30-sec exposures with the VLT Test Camera in three broad-band filters, B (here rendered as blue; mostly synchrotron emission), V (green) and R (red; mostly emission from hydrogen atoms). The Crab Pulsar is visible to the left; it is the lower of the two brightest stars near each other. 
The image quality is about 0.9 arcsec, and is completely determined by the external seeing caused by the atmospheric turbulence above the telescope at the time of the observation. The coloured, vertical lines to the left are artifacts of a "bad column" of the CCD. The field measures about 1.3 x 1.3 arcmin². This image may be compared with that of the same area recently obtained with the FORS2 instrument at KUEYEN (PR Photo 40g/99). Following two days of preliminary adjustments after the installation of the secondary mirror, cf. ESO PR Photos 03a-n/00, MELIPAL was pointed to the sky above Paranal for the first time, soon after sunset in the evening of January 26. The light of a bright star was directed towards the Guide Probe camera, and the VLT Commissioning Team, headed by Dr. Jason Spyromilio, initiated the active optics procedure. This adjusts the 150 computer-controlled supports under the main 8.2-m Zerodur mirror as well as the position of the secondary 1.1-m Beryllium mirror. After just a few iterations, the optical quality of the recorded stellar image was measured as 0.46 arcsec (PR Photo 04a/00), a truly excellent value, especially at this stage! Immediately thereafter, at 22:16 hrs local time (i.e., at 01:16 hrs UT on January 27), the shutter of the VLT Test Camera at the Cassegrain focus was opened. A 1-min exposure was made through a R(ed) optical filter of a distant star cluster in the constellation Eridanus (The River). The light from its faint stars was recorded by the CCD at the focal plane and the resulting frame was read into the computer. Despite the comparatively short exposure time, myriads of stars were seen when this "first frame" was displayed on the computer screen. Moreover, the sizes of these images were found to be virtually identical to the 0.6 arcsec seeing measured simultaneously with a monitor telescope outside the telescope enclosure. This confirmed that MELIPAL was in very good shape. 
Nevertheless, these very first images were still slightly elongated, and further optical adjustments and tests were therefore made to eliminate this unwanted effect. It is a tribute to the extensive experience and fine skills of the ESO staff that within only 1 hour, a 30-sec exposure of the central region of the Crab Nebula in Taurus with round images was obtained, cf. PR Photo 04b/00. The ESO Director General, Dr. Catherine Cesarsky, who assumed her function in September 1999, was present in the Control Room during these operations. She expressed great satisfaction with the excellent result and warmly congratulated the ESO staff on this achievement. She was particularly impressed with the apparent ease with which a completely new telescope of this size could be adjusted in such a short time. A part of her statement on this occasion was recorded on ESO PR Video Clip 02/00, which accompanies this Press Release. Three telescopes now in operation at Paranal At 02:30 UT on January 27, 2000, three VLT Unit Telescopes were observing in parallel, with measured seeing values of 0.6 arcsec (ANTU - "The Sun"), 0.7 arcsec (KUEYEN - "The Moon") and 0.7 arcsec (MELIPAL). MELIPAL has now joined ANTU and KUEYEN, which had "First Light" in May 1998 and March 1999, respectively. The fourth VLT Unit Telescope, YEPUN ("Sirius"), will become operational later this year. While normal scientific observations continue with ANTU, the UVES and FORS2 astronomical instruments are now being commissioned at KUEYEN, before the telescope is handed over to the astronomers on April 1, 2000. The telescope commissioning period will now start for MELIPAL, after which its first instrument, VIMOS, will be installed later this year. 
Impressions from the MELIPAL "First Light" event ESO PR Video Clip 02/00 "First Light for MELIPAL" (3350 frames/2:14 min) [MPEG Video+Audio; 160x120 pix; 3.1Mb] [MPEG Video+Audio; 320x240 pix; 9.4 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV camera on January 27 at 03:00 UT, soon after the moment of "First Light" with the third 8.2-m VLT Unit Telescope (MELIPAL). The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the clip. It begins with a statement by the Manager of the VLT Project, Dr. Massimo Tarenghi, as exposures of the Crab Nebula are obtained with the telescope and the raw frames are successively displayed on the monitor screen. In a following sequence, ESO's Director General, Dr. Catherine Cesarsky, briefly relates the moment of "First Light" for MELIPAL as she experienced it at the telescope controls. ESO Press Photo 04c/00 [Preview; JPEG: 400 x 300; 44k] [Full size; JPEG: 1600 x 1200; 241k] The computer screen with the image of a bright star, as recorded by the Guide Probe in the early evening of January 26; see also PR Photo 04a/00. This image was used for the initial adjustments by means of the active optics system. (Digital Photo). ESO Press Photo 04d/00 [Preview; JPEG: 400 x 314; 49k] [Full size; JPEG: 1528 x 1200; 189k] ESO staff at the moment of "First Light" for MELIPAL in the evening of January 26. The photo was made in the wooden hut on the telescope observing floor from where the telescope was controlled during the first hours. (Digital Photo). ESO PR Photos may be reproduced, if credit is given to the European Southern Observatory. 
The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 01/00 with aerial sequences from Paranal (12 January 2000). Information is also available on the web about other ESO videos.

  7. Feeding People's Curiosity: Leveraging the Cloud for Automatic Dissemination of Mars Images

    NASA Technical Reports Server (NTRS)

    Knight, David; Powell, Mark

    2013-01-01

    Smartphones and tablets have made wireless computing ubiquitous, and users expect instant, on-demand access to information. The Mars Science Laboratory (MSL) operations software suite, MSL InterfaCE (MSLICE), employs a different back-end image processing architecture compared to that of the Mars Exploration Rovers (MER) in order to better satisfy modern consumer-driven usage patterns and to offer greater server-side flexibility. Cloud services are a centerpiece of the server-side architecture that allows new image data to be delivered automatically to both scientists using MSLICE and the general public through the MSL website (http://mars.jpl.nasa.gov/msl/).

  8. Video streaming technologies using ActiveX and LabVIEW

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2015-06-01

    The goal of this paper is to present the possibilities of remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to the process of controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server. LabVIEW can be either client or server. Both programs (client and server) exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server exposes to allow the sharing of information [7]. In the case of video streaming [1] [2], most ActiveX controls can only display the data and are incapable of transforming it into a data type that LabVIEW can process. This becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few, if any, possibilities for video streaming, and the methods it does offer are usually not high performance; however, it possesses high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. Therefore, we chose to use existing software specialized in video streaming alongside LabVIEW, capturing the data it provides for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that utilize streaming technology) provides high-quality data and a very small transmission delay, ensuring the reliability of the image processing results.

  9. A system for diagnosis of wheat leaf diseases based on Android smartphone

    NASA Astrophysics Data System (ADS)

    Xie, Xinhua; Zhang, Xiangqian; He, Bing; Liang, Dong; Zhang, Dongyang; Huang, Linsheng

    2016-10-01

    Conventional devices for recognizing wheat leaf diseases are inconvenient, expensive, and demand a high level of professional expertise, so they do not satisfy the requirements of timely uploading and releasing of survey data in large-scale fields, which may reduce the effectiveness of wheat disease prevention and control. In this study, a fast, accurate, and robust system for diagnosing wheat leaf diseases based on an Android smartphone was developed, comprising two parts: the client and the server. The functions of the client include image acquisition, GPS positioning, communication, and a knowledge base of disease prevention and control. The server performs image processing, feature extraction and selection, and classifier construction. The recognition process of the system goes as follows: disease images are collected in the field and sent to the server by the Android smartphone, and image processing of the disease spots is then carried out by the server. The eighteen features with the largest weights were selected by the relief-F algorithm and used as the input of a Relevance Vector Machine (RVM), realizing automatic identification of wheat stripe rust and powdery mildew. The experimental results showed that the average recognition rate and prediction speed of the RVM model were 5.56% and 7.41 times higher, respectively, than those of a Support Vector Machine (SVM). Field application showed that it takes about 1 minute to obtain an identification result. It can therefore be concluded that the system can be used to recognize wheat diseases and to support real-time field investigation.
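The feature-selection step can be illustrated with a minimal binary-class Relief weighting in the spirit of relief-F: for random samples, features that differ from the nearest same-class neighbour are penalized and features that differ from the nearest other-class neighbour are rewarded. This is a simplified sketch (the multi-class, k-neighbour refinements of full relief-F are omitted), and the toy data below are invented, not the paper's disease-spot features.

```python
import numpy as np

def relief_weights(X, y, n_iter=None):
    """Basic Relief feature weighting (binary classes)."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    # Scale features to [0, 1] so per-feature differences are comparable.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xs = (X - X.min(axis=0)) / span
    w = np.zeros(d)
    for i in rng.integers(0, n, size=n_iter or n):
        diffs = np.abs(Xs - Xs[i])                 # per-feature distances
        dist = diffs.sum(axis=1)
        dist[i] = np.inf                           # exclude the sample itself
        same = y == y[i]
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest same class
        miss = np.argmin(np.where(~same, dist, np.inf))  # nearest other class
        w += diffs[miss] - diffs[hit]              # reward separating features
    return w / (n_iter or n)

# Toy demo: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100)])
w = relief_weights(X, y)
```

Ranking features by `w` and keeping the top eighteen would mirror the selection stage the paper describes before training the classifier.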

  10. Virtual network computing: cross-platform remote display and collaboration software.

    PubMed

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC server can be configured to allow more than one client to connect at a time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.
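The client/server exchange described above runs over the RFB ("remote framebuffer") protocol, whose published specification opens with the server sending a fixed 12-byte version banner such as b"RFB 003.008\n". As a small illustration of the wire format, the helper below (an illustrative sketch, not part of VNC itself) parses that banner:

```python
def parse_rfb_version(banner: bytes):
    """Return (major, minor) from an RFB ProtocolVersion message,
    e.g. b'RFB 003.008\\n' -> (3, 8)."""
    if len(banner) != 12 or not banner.startswith(b"RFB ") \
            or banner[-1:] != b"\n":
        raise ValueError("not an RFB ProtocolVersion banner")
    major, minor = banner[4:11].split(b".")   # b'003' and b'008'
    return int(major), int(minor)
```

A real client would read these 12 bytes from the socket, reply with the version it supports, and then proceed to security negotiation and framebuffer updates.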

  11. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect-reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.

  12. Baseline coastal oblique aerial photographs collected from Breton Island, Louisiana, to the Alabama-Florida border, July 13, 2013

    USGS Publications Warehouse

    Morgan, Karen L.M.; Westphal, Karen A.

    2014-01-01

The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On July 13, 2013, the USGS conducted an oblique aerial photographic survey from Breton Island, Louisiana, to the Alabama-Florida border, aboard a Cessna 172 flying at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The images provided here are Joint Photographic Experts Group (JPEG) images. Exiftool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, name, date, and time of each of the 1242 photographs taken, along with links to each photograph. The photography is organized into segments, also referred to as contact sheets, each representing approximately 5 minutes of flight time. (Also see the Photos and Maps page.)
In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.
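The records above describe a KML file whose placemarks carry the estimated aircraft position for each photograph. A minimal sketch of reading those positions with Python's standard library (the photo names and coordinates below are hypothetical, not taken from the survey):

```python
import xml.etree.ElementTree as ET

# Minimal KML with two photo placemarks (hypothetical data, not from the survey).
KML = """<?xml version="1.0"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark><name>photo_0001.jpg</name>
      <Point><coordinates>-88.95,30.05,0</coordinates></Point></Placemark>
    <Placemark><name>photo_0002.jpg</name>
      <Point><coordinates>-88.10,30.25,0</coordinates></Point></Placemark>
  </Document>
</kml>"""

NS = {"kml": "http://www.opengis.net/kml/2.2"}

def photo_positions(kml_text):
    """Return {photo name: (lon, lat)} for every Placemark in the KML."""
    root = ET.fromstring(kml_text)
    positions = {}
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.find("kml:name", NS).text
        coords = pm.find(".//kml:coordinates", NS).text.strip()
        lon, lat = map(float, coords.split(",")[:2])
        positions[name] = (lon, lat)
    return positions

assert photo_positions(KML)["photo_0001.jpg"] == (-88.95, 30.05)
```

KML stores coordinates as longitude,latitude,altitude triples, so the longitude comes first when splitting.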

  13. Baseline coastal oblique aerial photographs collected from Dauphin Island, Alabama, to Breton Island, Louisiana, August 8, 2012

    USGS Publications Warehouse

    Morgan, Karen L.M.; Westphal, Karen A.

    2014-01-01

The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On August 8, 2012, the USGS conducted an oblique aerial photographic survey from Dauphin Island, Alabama, to Breton Island, Louisiana, aboard a Cessna 172 at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The images provided here are Joint Photographic Experts Group (JPEG) images. Exiftool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, name, date, and time of each of the 1241 photographs taken, along with links to each photograph. The photography is organized into segments, also referred to as contact sheets, each representing approximately 5 minutes of flight time. (Also see the Photos and Maps page.)
In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.

  14. Building a Steganography Program Including How to Load, Process, and Save JPEG and PNG Files in Java

    ERIC Educational Resources Information Center

    Courtney, Mary F.; Stix, Allen

    2006-01-01

Instructors teaching beginning programming classes are often interested in exercises that involve processing photographs (i.e., files stored as .jpeg). They may wish to offer activities such as color inversion, the color manipulation effects achieved with pixel thresholding, or steganography, all of which Stevenson et al. [4] assert are sought by…
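The article above lists steganography among classroom exercises on pixel data. A minimal least-significant-bit sketch on a raw pixel list, independent of the article's own Java code (this survives only lossless formats such as PNG; JPEG's lossy recompression would destroy the hidden bits):

```python
def embed(pixels, message):
    """Hide message bytes in the least-significant bits of 8-bit pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, length):
    """Recover `length` hidden bytes from the pixel LSBs."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for px in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (px & 1)
        data.append(byte)
    return bytes(data)

pixels = list(range(256)) * 4          # stand-in for grayscale pixel data
assert extract(embed(pixels, b"hi"), 2) == b"hi"
```

Changing only the lowest bit alters each pixel value by at most one grey level, which is why the hidden message is visually imperceptible.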

  15. Hybrid Rendering with Scheduling under Uncertainty

    PubMed Central

    Tamm, Georg; Krüger, Jens

    2014-01-01

As scientific data of increasing size is generated by today’s simulations and measurements, utilizing dedicated server resources to process the visualization pipeline becomes necessary. In a purely server-based approach, requirements on the client-side are minimal as the client only displays results received from the server. However, the client may have a considerable amount of hardware available, which is left idle. Further, the visualization is left at the mercy of possibly unreliable server and network conditions. Server load, bandwidth and latency may substantially affect the response time on the client. In this paper, we describe a hybrid method, where visualization workload is assigned to server and client. A capable client can produce images independently. The goal is to determine a workload schedule that enables a synergy between the two sides to provide rendering results to the user as fast as possible. The schedule is determined based on processing and transfer timings obtained at runtime. Our probabilistic scheduler adapts to changing conditions by shifting workload between server and client, and accounts for the performance variability in the dynamic system. PMID:25309115
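The scheduling idea above can be caricatured as choosing, per task, the side with the lower estimated response time, with occasional probing so stale estimates get refreshed. This is a simplified stand-in for the paper's probabilistic model, not its actual algorithm:

```python
import random

class HybridScheduler:
    """Assign each rendering task to 'server' or 'client' based on running
    mean response times (a toy stand-in for a probabilistic scheduler)."""

    def __init__(self, explore=0.1):
        self.times = {"server": [], "client": []}
        self.explore = explore

    def choose(self):
        # Probe randomly at first, and occasionally thereafter, so that a
        # side whose conditions improved is eventually re-measured.
        if random.random() < self.explore or not all(self.times.values()):
            return random.choice(["server", "client"])
        return min(self.times,
                   key=lambda s: sum(self.times[s]) / len(self.times[s]))

    def record(self, side, seconds):
        """Feed back the measured render + transfer time for one task."""
        self.times[side].append(seconds)

sched = HybridScheduler(explore=0.0)
sched.record("server", 0.50)   # seconds to render + transfer from the server
sched.record("client", 0.10)   # seconds to render locally
assert sched.choose() == "client"
```

A real scheduler would also age out old measurements and model variance, which is where the paper's treatment of uncertainty comes in.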

  16. Transcontinental communication and quantitative digital histopathology via the Internet; with special reference to prostate neoplasia

    PubMed Central

    Montironi, R; Thompson, D; Scarpelli, M; Bartels, H G; Hamilton, P W; Da Silva, V D; Sakr, W A; Weyn, B; Van Daele, A; Bartels, P H

    2002-01-01

    Objective: To describe practical experiences in the sharing of very large digital data bases of histopathological imagery via the Internet, by investigators working in Europe, North America, and South America. Materials: Experiences derived from medium power (sampling density 2.4 pixels/μm) and high power (6 pixels/μm) imagery of prostatic tissues, skin shave biopsies, breast lesions, endometrial sections, and colonic lesions. Most of the data included in this paper were from prostate. In particular, 1168 histological images of normal prostate, high grade prostatic intraepithelial neoplasia (PIN), and prostate cancer (PCa) were recorded, archived in an image format developed at the Optical Sciences Center (OSC), University of Arizona, and transmitted to Ancona, Italy, as JPEG (joint photographic experts group) files. Images were downloaded for review using the Internet application FTP (file transfer protocol). The images were then sent from Ancona to other laboratories for additional histopathological review and quantitative analyses. They were viewed using Adobe Photoshop, Paint Shop Pro, and Imaging for Windows. For karyometric analysis full resolution imagery was used, whereas histometric analyses were carried out on JPEG imagery also. Results: The three applications of the telecommunication system were remote histopathological assessment, remote data acquisition, and selection of material. Typical data volumes for each project ranged from 120 megabytes to one gigabyte, and transmission times were usually less than one hour. There were only negligible transmission errors, and no problem in efficient communication, although real time communication was an exception, because of the time zone differences. As far as the remote histopathological assessment of the prostate was concerned, agreement between the pathologist's electronic diagnosis and the diagnostic label applied to the images by the recording scientist was present in 96.6% of instances. 
When these images were forwarded to two pathologists, the level of concordance with the reviewing pathologist who originally downloaded the files from Tucson was as high as 97.2% and 98.0%. Initial results of studies made by researchers belonging to our group but located in other laboratories showed the feasibility of performing quantitative analyses on the same images. Conclusions: These experiences show that diagnostic teleconsultation and quantitative image analyses via the Internet are not only feasible, but practical, and allow a close collaboration between researchers widely separated by geographical distance and analytical resources. PMID:12037030

  17. VLT Images the Horsehead Nebula

    NASA Astrophysics Data System (ADS)

    2002-01-01

    Summary A new, high-resolution colour image of one of the most photographed celestial objects, the famous "Horsehead Nebula" (IC 434) in Orion, has been produced from data stored in the VLT Science Archive. The original CCD frames were obtained in February 2000 with the FORS2 multi-mode instrument at the 8.2-m VLT KUEYEN telescope on Paranal (Chile). The comparatively large field-of-view of the FORS2 camera is optimally suited to show this extended object and its immediate surroundings in impressive detail. PR Photo 02a/02 : View of the full field around the Horsehead Nebula. PR Photo 02b/02 : Enlargement of a smaller area around the Horse's "mouth" A spectacular object ESO PR Photo 02a/02 ESO PR Photo 02a/02 [Preview - JPEG: 400 x 485 pix - 63k] [Normal - JPEG: 800 x 970 pix - 896k] [Full-Res - JPEG: 1951 x 2366 pix - 4.7M] ESO PR Photo 02b/02 ESO PR Photo 02b/02 [Preview - JPEG: 400 x 501 pix - 91k] [Normal - JPEG: 800 x 1002 pix - 888k] [Full-Res - JPEG: 1139 x 1427 pix - 1.9M] Caption : PR Photo 02a/02 is a reproduction of a composite colour image of the Horsehead Nebula and its immediate surroundings. It is based on three exposures in the visual part of the spectrum with the FORS2 multi-mode instrument at the 8.2-m KUEYEN telescope at Paranal. PR Photo 02b/02 is an enlargement of a smaller area. Technical information about these photos is available below. PR Photo 02a/02 shows the famous "Horsehead Nebula" , which is situated in the Orion molecular cloud complex. Its official name is Barnard 33 and it is a dust protrusion in the southern region of the dense dust cloud Lynds 1630 , on the edge of the HII region IC 434 . The distance to the region is about 1400 light-years (430 pc). This beautiful colour image was produced from three images obtained with the multi-mode FORS2 instrument at the second VLT Unit Telescope ( KUEYEN ), some months after it had "First Light", cf. PR 17/99. 
The image files were extracted from the VLT Science Archive Facility and the photo constitutes a fine example of the subsequent use of such valuable data. Details about how the photo was made and some weblinks to other pictures are available below. The comparatively large field-of-view of the FORS2 camera (nearly 7 x 7 arcmin 2 ) and the detector resolution (0.2 arcsec/pixel) make this instrument optimally suited for imaging of this extended object and its immediate surroundings. There is obviously a wealth of detail, and scientific information can be derived from the colours shown in this photo. Three predominant colours are seen in the image: red from the hydrogen (H-alpha) emission from the HII region; brown for the foreground obscuring dust; and blue-green for scattered starlight. The blue-green regions of the Horsehead Nebula correspond to regions not shadowed from the light from the stars in the H II region to the top of the picture and scatter stellar radiation towards the observer; these are thus `mountains' of dust . The Horse's `mane' is an area in which there is less dust along the line-of-sight and the background (H-alpha) emission from ionized hydrogen atoms can be seen through the foreground dust. A chaotic area At the high resolution of this image the Horsehead appears very chaotic with many wisps and filaments and diffuse dust . At the top of the figure there is a bright rim separating the dust from the HII region. This is an `ionization front' where the ionizing photons from the HII region are moving into the cloud, destroying the dust and the molecules and heating and ionizing the gas. Dust and molecules can exist in cold regions of interstellar space which are shielded from starlight by very large layers of gas and dust. Astronomers refer to elongated structures, such as the Horsehead, as `elephant trunks' (never mind the zoological confusion!) which are common on the boundaries of HII regions. 
They can also be seen elsewhere in Orion - another well-known example is the pillars of M16 (the "Eagle Nebula") made famous by the fine HST image - a new infrared view by VLT and ISAAC of this area was published last month, cf. PR 25/01. Such structures are only temporary as they are being constantly eroded by the expanding region of ionized gas and are destroyed on timescales of typically a few thousand years. The Horsehead as we see it today will therefore not last forever and minute changes will become observable as time passes. The surroundings To the east of the Horsehead (at the bottom of this image) there is ample evidence for star formation in the Lynds 1630 dark cloud . Here, the reflection nebula NGC 2023 surrounds the hot B-type star HD 37903 and some Herbig Haro objects are found which represent high-speed gas outflows from very young stars with masses of around a solar mass. The HII region to the west (top of picture) is ionized by the strong radiation from the bright star Sigma Orionis , located just below the southernmost star in Orion's Belt. The chain of dust and molecular clouds is part of the Orion A and B regions (also known as Orion's `sword' ). Other images of the Horsehead Nebula The Horsehead Nebula is a favourite object for amateur astrophotographers and large numbers of images are available on the WWW. Due to its significant extension and the limited field-of-view of some professional telescopes, fewer photographs are available from today's front-line facilities, except from specialized wide-field instruments like Schmidt telescopes, etc. The links below point to a number of prominent photos obtained elsewhere and some contain further useful links to other sites with more information about this splendid sky area. 
"Astronomy Picture of the Day" : http://antwrp.gsfc.nasa.gov/apod/ap971025.html Hubble Heritage image : http://hubble.stsci.edu/news_.and._views/pr.cgi?2001%2B12 INT Wide-Field image : http://www.ing.iac.es/PR/science/horsehead.htm NOT image : http://www.not.iac.es/new/general/photos/astronomical/ NOAO Wide-Field image : http://www.noao.edu/outreach/press/pr01/ir0101.html Bill Arnett's site : http://www.seds.org/billa/twn/b33x.html Technical information about the photos PR Photo 02a/02 was produced from three images, obtained on February 1, 2000, with the FORS2 multi-mode instrument at the 8.2-m KUEYEN Unit Telescope and extracted from the VLT Science Archive Facility. The frames were obtained in the B-band (600 sec exposure; wavelength 429 nm; FWHM 88 nm; here rendered as blue), V-band (300 sec; 554 nm; 112 nm; green) and R-band (120 sec; 655 nm; 165 nm; red) The original pixel size is 0.2 arcsec. The photo shows the full field recorded in all three colours, approximately 6.5 x 6.7 arcmin 2. The seeing was about 0.75 arcsec. PR Photo 02b/02 is an enlargement of a smaller area, measuring 3.8 x 4.1 arcmin 2. North is to the left and east is down (the usual orientation for showing this object). The frames were recorded with a TK2048 SITe CCD and the ESO-FIERA Controller, built by the Optical Detector Team (ODT). The images were prepared by Cyril Cavadore (ESO-ODT) , by means of Prism software. ESO PR Photos 02a-b/02 may be reproduced, if credit is given the European Southern Observatory (ESO).

  18. NASA/IPAC Infrared Archive's General Image Cutouts Service

    NASA Astrophysics Data System (ADS)

    Alexov, A.; Good, J. C.

    2006-07-01

The NASA/IPAC Infrared Archive (IRSA) ``Cutouts" Service (http://irsa.ipac.caltech.edu/applications/Cutouts) is a general tool for creating small ``cutout" FITS images and JPEGs from collections of data archived at IRSA. This service is a companion to IRSA's Atlas tool (http://irsa.ipac.caltech.edu/applications/Atlas/), which currently serves over 25 different data collections of various sizes and complexity and returns entire images for a user-defined region of the sky. The Cutouts Service sits on top of Atlas and extends the Atlas functionality by generating subimages at locations and sizes requested by the user from images already identified by Atlas. These results can be downloaded individually, in batch mode (using the program wget), or as a tar file. Cutouts re-uses IRSA's software architecture along with the publicly available Montage mosaicking tools. The advantages and disadvantages of this approach to generic cutout serving will be discussed.
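Generating a cutout amounts to mapping the requested sky position and size into a pixel range of the archived image. A much-simplified sketch, assuming a purely linear world-to-pixel mapping (real FITS data need full WCS handling; the function name and parameters here are illustrative, not IRSA's API):

```python
def cutout_slice(center, size, ref_world, ref_pix, scale):
    """Pixel x/y bounds of a square cutout of `size` degrees around `center`,
    assuming a purely linear world->pixel mapping with `scale` deg/pixel."""
    cx = ref_pix[0] + (center[0] - ref_world[0]) / scale
    cy = ref_pix[1] + (center[1] - ref_world[1]) / scale
    half = size / (2 * scale)
    return (int(round(cx - half)), int(round(cx + half)),
            int(round(cy - half)), int(round(cy + half)))

# 0.001 deg/pixel, reference pixel (500, 500) at world position (180.0, 0.0):
assert cutout_slice((180.05, 0.0), 0.02, (180.0, 0.0), (500, 500), 0.001) == \
    (540, 560, 490, 510)
```

The server would then slice the image array with these bounds and write the subimage out as FITS or JPEG.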

  19. Integrated test system of infrared and laser data based on USB 3.0

    NASA Astrophysics Data System (ADS)

    Fu, Hui Quan; Tang, Lin Bo; Zhang, Chao; Zhao, Bao Jun; Li, Mao Wen

    2017-07-01

Based on USB3.0, this paper presents the design method of an integrated test system for both an infrared image data and a laser signal data processing module. The core of the design is FPGA logic control. The design uses dual-chip DDR3 SDRAM as a high-speed laser data cache, receives parallel LVDS image data through a serial-to-parallel conversion chip, and achieves high-speed data communication between the system and the host computer through the USB3.0 bus. The experimental results show that the developed PC software realizes the real-time display of the 14-bit LVDS original image after 14-to-8 bit conversion and of the JPEG2000 compressed image after decompression in software, and can display the acquired laser signal data in real time. The correctness of the test system design is verified, indicating that the interface link is normal.
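The 14-to-8 bit conversion mentioned above can be as simple as discarding the six least-significant bits of each sample; the actual module may instead use windowing or a histogram stretch, so this is only one plausible mapping:

```python
def to8bit(samples, bits=14):
    """Map raw `bits`-wide sensor values to 8-bit display values by dropping
    the low-order bits (one simple form of 14-to-8 conversion)."""
    shift = bits - 8
    return [s >> shift for s in samples]

# 14-bit full scale (16383) maps to 255; values below 64 collapse to 0.
assert to8bit([0, 63, 64, 16383]) == [0, 0, 1, 255]
```

The trade-off is that fine low-contrast detail is lost, which is why medical and infrared viewers often prefer an adjustable window/level mapping instead.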

  20. Improved photo response non-uniformity (PRNU) based source camera identification.

    PubMed

    Cooper, Alan J

    2013-03-10

    The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
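The paper replaces wavelet de-noising with spatial-domain adaptive and median filtering before correlating residuals. A toy one-dimensional illustration of the residual-correlation idea (the data, window size, and plain sliding median here are simplified stand-ins, not the paper's method):

```python
import random, statistics

def residual(signal, w=3):
    """Noise residual: the signal minus a sliding-median estimate of its
    content (a 1-D stand-in for spatial-domain median filtering)."""
    half = w // 2
    denoised = [statistics.median(signal[max(0, i - half):i + half + 1])
                for i in range(len(signal))]
    return [s - d for s, d in zip(signal, denoised)]

def ncc(a, b):
    """Normalised cross-correlation between two residuals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

# Two images sharing the same (toy) sensor pattern correlate far better
# than images carrying different patterns:
random.seed(1)
pat_a = [random.choice((-2, 2)) for _ in range(64)]
pat_b = [random.choice((-2, 2)) for _ in range(64)]
same = ncc(residual([100 + p for p in pat_a]),
           residual([50 + p for p in pat_a]))
other = ncc(residual([100 + p for p in pat_a]),
            residual([100 + p for p in pat_b]))
assert same > other
```

The subtraction removes scene content while leaving the multiplicative sensor pattern in the residual, which is what makes source-camera matching possible.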

  1. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions respectively. For the major-interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
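The approximate-then-entropy-code-the-residuals idea above can be illustrated with an ordinary least-squares line standing in for the paper's approximating functions (the fitting model is a deliberate simplification):

```python
def linear_fit(ys):
    """Least-squares line through the points (i, ys[i]); returns (slope, intercept)."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (y - my) for x, y in zip(range(n), ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def fit_residuals(ys):
    """Residuals after subtracting the fitted line -- the small numbers that
    a scheme like the one above would hand to the entropy coder."""
    slope, intercept = linear_fit(ys)
    return [y - (intercept + slope * x) for x, y in enumerate(ys)]

# A nearly linear curve leaves near-zero residuals, which code cheaply:
assert max(abs(r) for r in fit_residuals([3 * i + 1 for i in range(10)])) < 1e-9
```

The better the approximating function matches the interferential curves, the narrower the residual distribution and the lower the entropy-coded bit-rate.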

  2. OXYGEN-RICH SUPERNOVA REMNANT IN THE LARGE MAGELLANIC CLOUD

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This is a NASA Hubble Space Telescope image of the tattered debris of a star that exploded 3,000 years ago as a supernova. This supernova remnant, called N132D, lies 169,000 light-years away in the satellite galaxy, the Large Magellanic Cloud. A Hubble Wide Field Planetary Camera 2 image of the inner regions of the supernova remnant shows the complex collisions that take place as fast moving ejecta slam into cool, dense interstellar clouds. This level of detail in the expanding filaments could only be seen previously in much closer supernova remnants. Now, Hubble's capabilities extend the detailed study of supernovae out to the distance of a neighboring galaxy. Material thrown out from the interior of the exploded star at velocities of more than four million miles per hour (2,000 kilometers per second) plows into neighboring clouds to create luminescent shock fronts. The blue-green filaments in the image correspond to oxygen-rich gas ejected from the core of the star. The oxygen-rich filaments glow as they pass through a network of shock fronts reflected off dense interstellar clouds that surrounded the exploded star. These dense clouds, which appear as reddish filaments, also glow as the shock wave from the supernova crushes and heats the clouds. Supernova remnants provide a rare opportunity to observe directly the interiors of stars far more massive than our Sun. The precursor star to this remnant, which was located slightly below and left of center in the image, is estimated to have been 25 times the mass of our Sun. These stars 'cook' heavier elements through nuclear fusion, including oxygen, nitrogen, carbon, iron etc., and the titanic supernova explosions scatter this material back into space where it is used to create new generations of stars. This is the mechanism by which the gas and dust that formed our solar system became enriched with the elements that sustain life on this planet. 
Hubble spectroscopic observations will be used to determine the exact chemical composition of this nuclear-processed material, and thereby test theories of stellar evolution. The image shows a region of the remnant 50 light-years across. The supernova explosion should have been visible from Earth's southern hemisphere around 1,000 B.C., but there are no known historical records that chronicle what would have appeared as a 'new star' in the heavens. This 'true color' picture was made by superposing images taken on 9-10 August 1994 in three of the strongest optical emission lines: singly ionized sulfur (red), doubly ionized oxygen (green), and singly ionized oxygen (blue). Photo credit: Jon A. Morse (STScI) and NASA Investigating team: William P. Blair (PI; JHU), Michael A. Dopita (MSSSO), Robert P. Kirshner (Harvard), Knox S. Long (STScI), Jon A. Morse (STScI), John C. Raymond (SAO), Ralph S. Sutherland (UC-Boulder), and P. Frank Winkler (Middlebury). Image files in GIF and JPEG format may be accessed via anonymous ftp from oposite.stsci.edu in /pubinfo: GIF: /pubinfo/GIF/N132D.GIF JPEG: /pubinfo/JPEG/N132D.jpg The same images are available via World Wide Web from links in URL http://www.stsci.edu/public.html.

  3. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

Extracted fragments from the report's abbreviation list and text: RCA: Ripple-Carry Adder; RF: Radio Frequency; RMS: Root-Mean-Square; SEU: Single Event Upset; SIPI: Signal and Image Processing Institute; SNR: ... Each probabilistic gate has a probability p of correctness, where 0.5 < p < 1, and a probability (1 − p) of error. Errors could be caused by noise, radio frequency (RF) interference, crosstalk ... The only logic element utilized in the Apollo Guidance Computer is the three-input NOR gate. At the time that the decision was made to use integrated circuits, the ...
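The fragments above describe gates that produce the correct output with probability p, where 0.5 < p < 1. A small Monte-Carlo sketch of such a probabilistic gate (the choice of a NAND gate and the parameter values are illustrative, not from the report):

```python
import random

def pnand(a, b, p, rng):
    """NAND gate with probability p of a correct output, 1 - p of a flipped one."""
    correct = 1 - (a & b)
    return correct if rng.random() < p else 1 - correct

rng = random.Random(42)
p = 0.95
trials = 10_000
# For inputs (1, 1) the correct NAND output is 0; count how often it flips.
errors = sum(pnand(1, 1, p, rng) != 0 for _ in range(trials))
assert abs(errors / trials - (1 - p)) < 0.02  # empirical rate near 1 - p
```

Composing many such gates (as in a ripple-carry adder inside a JPEG pipeline) compounds these per-gate error probabilities, which is exactly the trade-off inexact computing studies.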

  5. Retrieving high-resolution images over the Internet from an anatomical image database

    NASA Astrophysics Data System (ADS)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100KB for browser viewable rendered images, to 1GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures, and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch-mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system, and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch-mode.

  6. VRML and Collaborative Environments: New Tools for Networked Visualization

    NASA Astrophysics Data System (ADS)

    Crutcher, R. M.; Plante, R. L.; Rajlich, P.

    We present two new applications that engage the network as a tool for astronomical research and/or education. The first is a VRML server which allows users over the Web to interactively create three-dimensional visualizations of FITS images contained in the NCSA Astronomy Digital Image Library (ADIL). The server's Web interface allows users to select images from the ADIL, fill in processing parameters, and create renderings featuring isosurfaces, slices, contours, and annotations; the often extensive computations are carried out on an NCSA SGI supercomputer server without the user having an individual account on the system. The user can then download the 3D visualizations as VRML files, which may be rotated and manipulated locally on virtually any class of computer. The second application is the ADILBrowser, a part of the NCSA Horizon Image Data Browser Java package. ADILBrowser allows a group of participants to browse images from the ADIL within a collaborative session. The collaborative environment is provided by the NCSA Habanero package which includes text and audio chat tools and a white board. The ADILBrowser is just an example of a collaborative tool that can be built with the Horizon and Habanero packages. The classes provided by these packages can be assembled to create custom collaborative applications that visualize data either from local disk or from anywhere on the network.

  7. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical system that can get medical images from the picture archiving and communication system on the mobile device over the wireless network. In the proposed application, the mobile device got patient information and medical images through a proxy server connecting to the PACS server. Meanwhile, the proxy server integrated a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction, and direct volume rendering, to provide shape, brightness, depth, and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of the medical images over the wireless network of the proposed application were also discussed. The results demonstrated that this proposed medical application could provide a smooth interactive experience over WLAN and 3G networks.
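The abstract mentions an algorithm that adapts remote render parameters to network status but does not specify it. One plausible sketch trades image quality against measured bandwidth so each interactive frame stays under a latency budget (all sizes, quality levels, and thresholds below are assumed, not from the paper):

```python
def choose_quality(bandwidth_kbps, frame_kb_at_q100=200):
    """Pick a compression quality level so one rendered frame transfers
    within roughly 200 ms at the measured bandwidth (hypothetical model:
    compressed size assumed proportional to the quality setting)."""
    budget_kbits = bandwidth_kbps * 0.2          # 200 ms worth of link time
    for q in (90, 70, 50, 30):
        est_kbits = frame_kb_at_q100 * q / 100 * 8
        if est_kbits <= budget_kbits:
            return q
    return 10                                     # degraded fallback quality

assert choose_quality(10_000) == 90   # fast WLAN: keep quality high
assert choose_quality(1_000) == 10    # slow 3G: drop to the fallback
```

A production system would measure bandwidth continuously and smooth the estimate to avoid oscillating between quality levels.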

  8. HRSCview: a web-based data exploration system for the Mars Express HRSC instrument

    NASA Astrophysics Data System (ADS)

    Michael, G.; Walter, S.; Neukum, G.

    2007-08-01

The High Resolution Stereo Camera (HRSC) on the ESA Mars Express spacecraft has been orbiting Mars since January 2004. By spring 2007 it had returned around 2 terabytes of image data, covering around 35% of the Martian surface in stereo and colour at a resolution of 10-20 m/pixel. HRSCview provides a rapid means to explore these images up to their full resolution with the data-subsetting, sub-sampling, stretching and compositing being carried out on-the-fly by the image server. It is a joint website of the Free University of Berlin and the German Aerospace Center (DLR). The system operates by on-the-fly processing of the six HRSC level-4 image products: the map-projected ortho-rectified nadir panchromatic and four colour channels, and the stereo-derived DTM (digital terrain model). The user generates a request via the web-page for an image with several parameters: the centre of the view in surface coordinates, the image resolution in metres/pixel, the image dimensions, and one of several colour modes. If there is HRSC coverage at the given location, the necessary segments are extracted from the full orbit images, resampled to the required resolution, and composited according to the user's choice. In all modes the nadir channel, which has the highest resolution, is included in the composite so that the maximum detail is always retained. The images are stretched according to the current view: this applies to the elevation colour scale, as well as the nadir brightness and the colour channels. There are modes for raw colour, stretched colour, enhanced colour (exaggerated colour differences), and a synthetic 'Mars-like' colour stretch. A colour ratio mode is given as an alternative way to examine colour differences (R=IR/R, G=R/G and B=G/B). The final image is packaged as a JPEG file and returned to the user over the web. Each request requires approximately 1 second to process. 
A link is provided from each view to a data product page, where header items describing the full map-projected science data product are displayed, and a direct link to the archived data products on the ESA Planetary Science Archive (PSA) is provided. At present the majority of the elevation composites are derived from the HRSC Preliminary 200m DTMs generated at the German Aerospace Center (DLR), which will not be available as separately downloadable data products. These DTMs are being progressively superseded by systematically generated higher resolution archival DTMs, also from DLR, which will become available for download through the PSA, and be similarly accessible via HRSCview. At the time of writing this abstract (May 2007), four such high resolution DTMs are available for download via the HRSCview data product pages (for images from orbits 0572, 0905, 1004, and 2039).

  9. The Hazards Data Distribution System update

    USGS Publications Warehouse

    Jones, Brenda K.; Lamb, Rynn M.

    2010-01-01

    After a major disaster, a satellite image or a collection of aerial photographs of the event is frequently the fastest, most effective way to determine its scope and severity. The U.S. Geological Survey (USGS) Emergency Operations Portal provides emergency first responders and support personnel with easy access to imagery and geospatial data, geospatial Web services, and a digital library focused on emergency operations. Imagery and geospatial data are accessed through the Hazards Data Distribution System (HDDS). HDDS historically provided data access and delivery services through nongraphical interfaces that allow emergency response personnel to select and obtain pre-event baseline data and (or) event/disaster response data. First responders are able to access full-resolution GeoTIFF images or JPEG images at medium- and low-quality compressions through FTP downloads. USGS HDDS home page: http://hdds.usgs.gov/hdds2/

  10. Joint reconstruction of multiview compressed images.

    PubMed

    Thirumalai, Vijayaraghavan; Frossard, Pascal

    2013-05-01

    Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem where the distributively compressed images are decoded together in order to benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images, which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.
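The total-variation ingredient of such a reconstruction can be illustrated with a generic smoothed-TV gradient descent. This is not the paper's constrained convex programme (which also encodes the inter-view correlation model); it is a minimal stand-in showing how a TV term pulls a decoded image toward a piecewise-smooth estimate while a data term keeps it close to its compressed version.

```python
import numpy as np

def tv_smooth_reconstruct(y, lam=0.15, step=0.05, iters=400, eps=0.05):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps^2).
    'y' plays the role of a decoded (compressed) image; the TV term
    favours piecewise-smooth reconstructions close to it."""
    x = y.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps**2)
        px, py = dx / mag, dy / mag
        # divergence of the normalised gradient (periodic-boundary shortcut)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * ((x - y) - lam * div)
    return x

# Piecewise-constant test image plus noise standing in for coding artifacts
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
rng = np.random.default_rng(0)
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
recon = tv_smooth_reconstruct(noisy)
```

With a suitable weight `lam`, the reconstruction is closer to the clean image than the degraded input, which is the behaviour the joint decoder exploits.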

  11. [A study of the transport of three dimensional medical images to remote institutions for telediagnosis].

    PubMed

    Hayashi, Takashi; Iwai, Mitsuhiro; Takahashi, Katsuhiko; Takeda, Satoshi; Tateishi, Toshiki; Kaneko, Rumi; Ogasawara, Yoko; Yonezawa, Kazuya; Hanada, Akiko

    2011-01-01

    Using a 3D-imaging-create-function server and network services by IP-VPN, we began to deliver 3D images to a remote institution. A display trial of the primary image, a rotation trial of a 3D image, and a reproducibility trial were conducted in order to examine the practicality of using the system in a real network between Hakodate and Sapporo (a communication distance of about 150 km). In these trials, basic data (time and received data volume) were measured for every variation of QF (quality factor) or monitor resolution. Analyzing the results of the system using the 3D image delivery server of our hospital with variations in the setting of QF and monitor resolutions, we concluded that this system is practical for remote radiogram interpretation work, even if the access point of the region has a line speed of 6 Mbps.

  12. Detection of Copy-Rotate-Move Forgery Using Zernike Moments

    NASA Astrophysics Data System (ADS)

    Ryu, Seung-Jin; Lee, Min-Jeong; Lee, Heung-Kyu

    As image forgeries have become more common, the importance of forgery detection has greatly increased. Copy-move forgery, one of the most commonly used methods, copies a part of the image and pastes it into another part of the same image. In this paper, we propose a detection method for copy-move forgery that localizes duplicated regions using Zernike moments. Since the magnitude of Zernike moments is algebraically invariant against rotation, the proposed method can detect a forged region even though it is rotated. Our scheme is also resilient to intentional distortions such as additive white Gaussian noise, JPEG compression, and blurring. Experimental results demonstrate that the proposed scheme is appropriate for identifying regions forged by copy-rotate-move forgery.
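The rotation invariance that such a detector relies on can be checked directly: the magnitude of a Zernike moment is unchanged when an image patch is rotated. The implementation below is a minimal illustration (a single moment, unit-disk sampling), not the authors' code.

```python
import math
import numpy as np

def zernike_moment(img, n, m):
    """Zernike moment Z_{n,m} of a square patch sampled on the unit disk.
    |Z_{n,m}| is invariant to rotation of the patch."""
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    x = (2 * xs - N + 1) / (N - 1)          # map pixel grid to [-1, 1]
    y = (2 * ys - N + 1) / (N - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    # Radial polynomial R_{n,|m|}(rho)
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + abs(m)) // 2 - s)
                * math.factorial((n - abs(m)) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    V_conj = R * np.exp(-1j * m * theta)    # conjugate basis function
    return (n + 1) / math.pi * np.sum(img[inside] * V_conj[inside])

rng = np.random.default_rng(1)
block = rng.uniform(size=(32, 32))          # a copied patch
rotated = np.rot90(block)                   # the same patch, rotated 90 degrees
z1 = zernike_moment(block, 4, 2)
z2 = zernike_moment(rotated, 4, 2)
```

For a 90-degree rotation the sampling grid maps exactly onto itself, so the two magnitudes agree to machine precision; only the phase of the moment changes.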

  13. A Portrait of One Hundred Thousand and One Galaxies

    NASA Astrophysics Data System (ADS)

    2002-08-01

    Rich and Inspiring Experience with NGC 300 Images from the ESO Science Data Archive Summary A series of wide-field images centred on the nearby spiral galaxy NGC 300 , obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory , has been combined into a magnificent colour photo. These images have been used by different groups of astronomers for various kinds of scientific investigations, ranging from individual stars and nebulae in NGC 300, to distant galaxies and other objects in the background. This material provides an interesting demonstration of the multiple use of astronomical data, now facilitated by the establishment of extensively documented data archives, such as the ESO Science Data Archive, which is growing rapidly and already contains over 15 terabytes. Based on the concept of Astronomical Virtual Observatories (AVOs) , the use of archival data sets is on the rise and provides a large number of scientists with excellent opportunities for front-line investigations without having to wait for precious observing time. In addition to presenting a magnificent astronomical photo, the present account also illustrates this important new tool of the modern science of astronomy and astrophysics. PR Photo 18a/02 : WFI colour image of spiral galaxy NGC 300 (full field) . 
PR Photo 18b/02 : Cepheid stars in NGC 300 PR Photo 18c/02 : H-alpha image of NGC 300 PR Photo 18d/02 : Distant cluster of galaxies CL0053-37 in the NGC 300 field PR Photo 18e/02 : Dark matter distribution in CL0053-37 PR Photo 18f/02 : Distant, reddened cluster of galaxies in the NGC 300 field PR Photo 18g/02 : Distant galaxies, seen through the outskirts of NGC 300 PR Photo 18h/02 : "The View Beyond" ESO PR Photo 18a/02 ESO PR Photo 18a/02 [Preview - JPEG: 400 x 412 pix - 112k] [Normal - JPEG: 1200 x 1237 pix - 1.7M] [Hi-Res - JPEG: 4000 x 4123 pix - 20.3M] Caption : PR Photo 18a/02 is a reproduction of a colour-composite image of the nearby spiral galaxy NGC 300 and the surrounding sky field, obtained in 1999 and 2000 with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. See the text for details about the many different uses of this photo. Smaller areas in this large field are shown in Photos 18b-h/02 , cf. below. The High-Res version of this image has been compressed by a factor of 4 (2 x 2 pixel rebinning) to reduce it to a reasonably transportable size. Technical information about this and the other photos is available at the end of this communication. Located some 7 million light-years away, the spiral galaxy NGC 300 [1] is a beautiful representative of its class, a Milky-Way-like member of the prominent Sculptor group of galaxies in the southern constellation of that name. NGC 300 is a big object in the sky - being so close, it extends over an angle of almost 25 arcmin, only slightly less than the size of the full moon. It is also relatively bright; even a small pair of binoculars will reveal this magnificent spiral galaxy as a hazy glowing patch on a dark sky background. The comparatively small distance of NGC 300 and its face-on orientation provide astronomers with a wonderful opportunity to study in great detail its structure as well as its various stellar populations and interstellar medium. 
It was exactly for this purpose that some images of NGC 300 were obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. This advanced 67-million pixel digital camera has already produced many impressive pictures, some of which are displayed in the WFI Photo Gallery [2]. With its large field of view, 34 x 34 arcmin^2 , the WFI is optimally suited to show the full extent of the spiral galaxy NGC 300 and its immediate surroundings in the sky, cf. PR Photo 18a/02 . NGC 300 and "Virtual Astronomy" In addition to being a beautiful sight in its own right, the present WFI-image of NGC 300 is also a most instructive showcase of how astronomers with very different research projects nowadays can make effective use of the same observations for their programmes . The idea of exploiting one and the same data set is not new, but thanks to rapid technological developments it has recently developed into a very powerful tool for the astronomers in their continued quest to understand the Universe. This kind of work has now become very efficient with the advent of a fully searchable data archive from which observational data can then - after the expiry of a nominal one-year proprietary period for the observers - be made available to other astronomers. The ESO Science Data Archive was established some years ago and now encompasses more than 15 terabytes [3]. Normally, the identification of specific data sets in such a large archive would be a very difficult and time-consuming task. However, effective projects and software "tools" like ASTROVIRTEL and Querator now allow users to quickly "filter" large amounts of data and extract those of their specific interest. Indeed, "Archival Astronomy" has already led to many important discoveries, cf. the ASTROVIRTEL list of publications. There is no doubt that "Virtual Astronomical Observatories" will play an increasingly important role in the future, cf. ESO PR 26/01. 
The present wide-field images of NGC 300 provide an impressive demonstration of the enormous potential of this innovative approach. Some of the ways they were used are explained below. Cepheids in NGC 300 and the cosmic distance scale ESO PR Photo 18b/02 ESO PR Photo 18b/02 [Preview - JPEG: 468 x 400 pix - 112k] [Full-Res - JPEG: 1258 x 1083 pix - 1.6M] Caption : PR Photo 18b/02 shows some of the Cepheid type stars in the spiral galaxy NGC 300 (at the centre of the markers), as they were identified by Wolfgang Gieren and collaborators during the research programme for which the WFI images of NGC 300 were first obtained. In this area of NGC 300, there is also a huge cloud of ionized hydrogen (a "HII shell"). It measures about 2000 light-years in diameter, thus dwarfing even the enormous Tarantula Nebula in the LMC, also photographed with the WFI (cf. ESO PR Photos 14a-g/02 ). The largest versions ("normal" or "full-res") of this and the following photos are shown with their original pixel size, demonstrating the incredible amount of detail visible on one WFI image. Technical information about this photo is available below. In 1999, Wolfgang Gieren (Universidad de Concepcion, Chile) and his colleagues started a search for Cepheid-type variable stars in NGC 300. These stars constitute a key element in the measurement of distances in the Universe. It has been known for many years that the pulsation period of a Cepheid-type star depends on its intrinsic brightness (its "luminosity"). Thus, once its period has been measured, the astronomers can calculate its luminosity. By comparing this to the star's apparent brightness in the sky, and applying the well-known diminution of light with the second power of the distance, they can obtain the distance to the star. This fundamental method has allowed some of the most reliable measurements of distances in the Universe and has been essential for all kinds of astrophysics, from the closest stars to the remotest galaxies. 
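The reasoning above amounts to the Leavitt period-luminosity relation combined with the distance modulus. The sketch below uses illustrative coefficients and observation values that are assumptions for this example, not the calibration derived in this project.

```python
import math

# Illustrative period-luminosity coefficients; A and B are assumptions
# for this sketch, not the calibration pursued by Gieren's group.
A, B = -2.76, -1.46

def cepheid_distance_mpc(period_days, apparent_mag):
    """Distance from a Cepheid: luminosity from the pulsation period via
    M = A*log10(P) + B, then the distance modulus m - M = 5*log10(d_pc) - 5."""
    M = A * math.log10(period_days) + B       # absolute magnitude
    mu = apparent_mag - M                     # distance modulus
    d_pc = 10 ** (mu / 5 + 1)                 # distance in parsecs
    return d_pc / 1e6                         # parsecs -> megaparsecs

# A 30-day Cepheid observed at m = 21.0 (illustrative values) lands near
# the ~2 Mpc (about 7 million light-years) quoted for NGC 300.
d_mpc = cepheid_distance_mpc(30.0, 21.0)
```

Longer periods imply intrinsically brighter stars, so for a fixed apparent brightness a longer-period Cepheid is inferred to be farther away.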
Prior to Gieren's new project, only about a dozen Cepheids were known in NGC 300. However, by regularly obtaining wide-field WFI exposures of NGC 300 from July 1999 through January 2000 and carefully monitoring the apparent brightness of its brighter stars during that period, the astronomers detected more than 100 additional Cepheids . The brightness variations (in astronomical terminology: "light curves") could be determined with excellent precision from the WFI data. They showed that the pulsation periods of these Cepheids range from about 5 to 115 days. Some of these Cepheids are identified on PR Photo 18b/02 , in the middle of a very crowded field in NGC 300. When fully studied, these unique observational data will yield a new and very accurate distance to NGC 300, making this galaxy a future cornerstone in the calibration of the cosmic distance scale . Moreover, they will also make it possible to understand in more detail how the brightness of a Cepheid-type star depends on its chemical composition, currently a major uncertainty in the application of the Cepheid method to the calibration of the extragalactic distance scale. Indeed, the effect of the abundance of different elements on the luminosity of a Cepheid can be especially well measured in NGC 300 due to the existence of large variations of these abundances in the stars located in the disk of this galaxy. Gieren and his group, in collaboration with astronomers Fabio Bresolin and Rolf Kudritzki (Institute of Astronomy, Hawaii, USA) are currently measuring the variations of these chemical abundances in stars in the disk of NGC 300, by means of spectra of about 60 blue supergiant stars, obtained with the FORS multi-mode instruments at the ESO Very Large Telescope (VLT) on Paranal. These stars, which are among the optically brightest in NGC 300, were first identified in the WFI images of this galaxy obtained in different colours - the same that were used to produce PR Photo 18a/02 . 
The nature of those stars was later spectroscopically confirmed at the VLT. As an important byproduct of these measurements, the luminosities of the blue supergiant stars in NGC 300 will themselves be calibrated (as a new cosmic "standard candle"), taking advantage of their stellar wind properties that can be measured from the VLT spectra. The WFI Cepheid observations in NGC 300, as well as the VLT blue supergiant star observations, form part of a large research project recently initiated by Gieren and his group that is concerned with the improvement of various stellar distance indicators in nearby galaxies (the "ARAUCARIA" project ). Clues on star formation history in NGC 300 ESO PR Photo 18c/02 ESO PR Photo 18c/02 [Preview - JPEG: 440 x 400 pix - 63k] [Normal - JPEG: 1200 x 1091 pix - 664k] [Full-Res - JPEG: 5515 x 5014 pix - 14.3M] Caption : PR Photo 18c/02 displays NGC 300, as seen through a narrow optical filter (H-alpha) in the red light of hydrogen atoms. A population of intrinsically bright and young stars turned "on" just a few million years ago. Their radiation and strong stellar winds have shaped many of the clouds of ionized hydrogen gas ("HII shells") seen in this photo. The "rings" near some of the bright stars are caused by internal reflections in the telescope. Technical information about this photo is available below. But there is much more to discover on these WFI images of NGC 300! The WFI images obtained in several broad and narrow band filters from the ultraviolet to the near-infrared spectral region (U, B, V, R, I and H-alpha) allow a detailed study of groups of massive, hot stars (known as "OB associations") and a large number of huge clouds of ionized hydrogen ("HII shells") in this galaxy. Corresponding studies have been carried out by Gieren's group, resulting in the discovery of an amazing number of OB associations, including a number of giant associations. 
These investigations, taken together with the observed distribution of the pulsation periods of the Cepheids, allow a better understanding of the history of star formation in NGC 300. For example, three distinct peaks in the number distribution of the pulsation periods of the Cepheids seem to indicate that there have been at least three different bursts of star formation within the past 100 million years. The large number of OB associations and HII shells ( PR Photo 18c/02 ) furthermore indicate the presence of a numerous, very young stellar population in NGC 300, aged only a few million years. Dark matter and the observed shapes of distant galaxies In early 2002, Thomas Erben and Mischa Schirmer from the "Institut für Astrophysik und extraterrestrische Forschung" ( IAEF , Universität Bonn, Germany), in the course of their ASTROVIRTEL programme, identified and retrieved all available broad-band and H-alpha images of NGC 300 available in the ESO Science Data Archive. Most of these had been observed for the project by Gieren and his colleagues, described above. However, the scientific interest of the German astronomers was very different from that of their colleagues and they were not at all concerned with the main object in the field, NGC 300. In a very different approach, they instead wanted to study those images to measure the amount of dark matter in the Universe, by means of the weak gravitational lensing effect produced by distant galaxy clusters. Various observations, ranging from the measurement of internal motions ("rotation curves") in spiral galaxies to the presence of hot X-ray gas in clusters of galaxies and the motion of galaxies in those clusters, indicate that there is about ten times more matter in the Universe than what is observed in the form of stars, gas and galaxies ("luminous matter"). As this additional matter does not emit light at any wavelengths, it is commonly referred to as "dark" matter - its true nature is as yet entirely unclear. 
Insight into the distribution of dark matter in the Universe can be gained by looking at the shapes of images of very remote galaxies, billions of light-years away, cf. ESO PR 24/00. Light from such distant objects travels vast distances through space before arriving here on Earth, and whenever it passes heavy clusters of galaxies, it is bent a little due to the associated gravitational field. Thus, in long-exposure, high-quality images, this "weak lensing" effect can be perceived as a coherent pattern of distortion of the images of background galaxies. Gravitational lensing in the NGC 300 field ESO PR Photo 18d/02 ESO PR Photo 18d/02 [Preview - JPEG: 400 x 495 pix - 82k] [Full-Res - JPEG: 1304 x 1615 pix - 3.2M] Caption : PR Photo 18d/02 shows the distant cluster of galaxies CL0053-37 , as imaged on the WFI photo of the NGC 300 sky field. The elongated distribution of the cluster galaxies, as well as the presence of two large, early-type elliptical galaxies indicate that this cluster is still in the process of formation. Some of the galaxies appear to be merging. From the measured redshift ( z = 0.1625), a distance of about 2.1 billion light-years is deduced. Technical information about this photo is available below. ESO PR Photo 18e/02 ESO PR Photo 18e/02 [Preview - JPEG: 400 x 567 pix - 89k] [Normal - JPEG: 723 x 1024 pix - 424k] Caption : PR Photo 18e/02 is a "map" of the dark matter distribution (black contours) in the cluster of galaxies CL0053-37 (shown in PR Photo 18d/02 ), as obtained from the weak lensing effects detected in the WFI images, and the X-ray flux (green contours) taken from the All-Sky Survey carried out by the ROSAT satellite observatory. The distribution of galaxies resembles the elongated, dark-matter profile. 
Because of ROSAT's limited image sharpness (low "angular resolution"), it cannot be entirely ruled out that the observed X-ray emission is due to an active nucleus of a galaxy in CL0053-37, or even a foreground stellar binary system in NGC 300. The WFI NGC 300 images appeared promising for gravitational lensing research because of the exceptionally long total exposure time. Although the large foreground galaxy NGC 300 would block the light of tens of thousands of galaxies in the background, a huge number of others would still be visible in the outskirts of this sky field, making a search for clusters of galaxies and associated lensing effects quite feasible. To ensure the best possible image sharpness in the combined image, and thus to obtain the most reliable measurements of the shapes of the background objects, only red (R-band) images obtained under the best seeing conditions were combined. In order to provide additional information about the colours of these faint objects, a similar approach was adopted for images in the other bands as well. The German astronomers indeed measured a significant lensing effect for one of the galaxy clusters in the field ( CL0053-37 , see PR Photo 18d/02 ); the images of background galaxies around this cluster were noticeably distorted in the direction tangential to the cluster center. Based on the measured degree of distortion, a map of the distribution of (dark) matter in this direction was constructed ( PR Photo 18e/02 ). The separation of unlensed foreground (bluer) and lensed background galaxies (redder) greatly profited from the photometric measurements done by Gieren's group in the course of their work on the Cepheids in NGC 300. Assuming that the lensed background galaxies lie at a mean redshift of 1.0, i.e. a distance of 8 billion light-years, a mass of about 2 x 10^14 solar masses was obtained for the CL0053-37 cluster. 
This lensing analysis in the NGC 300 field is part of the Garching-Bonn Deep Survey (GaBoDS) , a weak gravitational lensing survey led by Peter Schneider (IAEF). GaBoDS is based on exposures made with the WFI and until now a sky area of more than 12 square degrees has been imaged during very good seeing conditions. Once complete, this investigation will allow more insight into the distribution and cosmological evolution of galaxy cluster masses, which in turn provide very useful information about the structure and history of the Universe. One hundred thousand galaxies ESO PR Photo 18f/02 ESO PR Photo 18f/02 [Preview - JPEG: 400 x 526 pix - 93k] [Full-Res - JPEG: 756 x 994 pix - 1.0M] Caption : PR Photo 18f/02 shows a group of galaxies , seen on the NGC 300 images. They are all quite red and their similar colours indicate that they must be about equally distant. They probably constitute a distant cluster, now in the stage of formation. Technical information about this photo is available below. ESO PR Photo 18g/02 ESO PR Photo 18g/02 [Preview - JPEG: 469 x 400 pix - xxk] [Full-Res - JPEG: 1055 x 899 pix - 968k] Caption : PR Photo 18g/02 shows an area in the outer regions of NGC 300. Disks of spiral galaxies are usually quite "thin" (some hundred light-years), as compared to their radial extent (tens of thousands of light-years across). In areas where only small amounts of dust are present, it is possible to see much more distant galaxies right through the disk of NGC 300 , as demonstrated by this image. Technical information about this photo is available below. ESO PR Photo 18h/02 ESO PR Photo 18h/02 [Preview - JPEG: 451 x 400 pix - 89k] [Normal - JPEG: 902 x 800 pix - 856k] [Full-Res - JPEG: 2439 x 2163 pix - 6.0M] Caption : PR Photo 18h/02 is an astronomers' joy ride to infinity. Such a rarely seen view of our universe imparts a feeling of the vast distances in space. 
In the upper half of the image, the outer region of NGC 300 is resolved into innumerable stars, while in the lower half, myriads of galaxies - a thousand times more distant - catch the eye. In reality, many of them are very similar to NGC 300; they are just much more remote. In addition to allowing a detailed investigation of dark matter and lensing effects in this field, the present, very "deep" colour image of NGC 300 invites a closer inspection of the background galaxy population itself . No less than about 100,000 galaxies of all types are visible in this amazing image. Three known quasars ([ICS96] 005342.1-375947, [ICS96] 005236.1-374352, [ICS96] 005336.9-380354) with redshifts 2.25, 2.35 and 2.75, respectively, happen to lie inside this sky field, together with many interacting galaxies, some of which feature tidal tails. There are also several groups of highly reddened galaxies - probably distant clusters in formation, cf. PR Photo 18f/02 . Others are seen right through the outer regions of NGC 300, cf. PR Photo 18g/02 . More detailed investigations of the numerous galaxies in this field are now underway. From the nearby spiral galaxy NGC 300 to objects in the young Universe, it is all there, truly an astronomical treasure trove, cf. PR Photo 18h/02 ! Notes [1]: "NGC" means "New General Catalogue" (of nebulae and clusters), which was published in 1888 by J.L.E. Dreyer in the "Memoirs of the Royal Astronomical Society". [2]: Other colour composite images from the Wide-Field Imager at the MPG/ESO 2.2-m telescope at the La Silla Observatory are available at the ESO Outreach website; see also the Tarantula Nebula in the LMC, cf. ESO PR Photos 14a-g/02. [3]: 1 Terabyte = 10^12 byte = 1000 Gigabyte = 1 million million byte. 
Technical information about the photos PR Photo 18a/02 and all cutouts were made from 110 WFI images obtained in the B-band (total exposure time 11.0 hours, rendered as blue), 105 images in the V-band (10.4 hours, green), 42 images in the R-band (4.2 hours, red) and 21 images through a H-alpha filter (5.1 hours, red). In total, 278 images of NGC 300 have been assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). 150 GB of hard disk space were needed to store all uncompressed raw data, and about 1 TB of temporary files was produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation. The final colour image was assembled in Adobe Photoshop. To better show all details, the overall brightness of NGC 300 was reduced as compared to the outskirts of the field. The (red) "rings" near some of the bright stars originate from the H-alpha frames - they are caused by internal reflections in the telescope. The images were prepared by Mischa Schirmer at the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) by means of a software pipeline specialised for reduction of multiple CCD wide-field imaging camera data. The raw data were extracted from the public sector of the ESO Science Data Archive. The extensive observations were performed at the ESO La Silla Observatory by Wolfgang Gieren, Pascal Fouque, Frederic Pont, Hermann Boehnhardt and La Silla staff, during 34 nights between July 1999 and January 2000. Some additional observations taken during the second half of 2000 were retrieved by Mischa Schirmer and Thomas Erben from the ESO archive. CD-ROM with full-scale NGC 300 image soon available PR Photo 18a/02 has been compressed by a factor 4 (2 x 2 rebinning). For PR Photos 18b-h/02 , the largest-size versions of the images are shown at the original scale (1 pixel = 0.238 arcsec). 
A full-resolution TIFF-version (approx. 8000 x 8000 pix; 200 Mb) of PR Photo 18a/02 will shortly be made available by ESO on a special CD-ROM, together with some other WFI images of the same size. An announcement will follow in due time.

  14. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    ERIC Educational Resources Information Center

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  15. Adaptive intercolor error prediction coder for lossless color (RGB) picture compression

    NASA Astrophysics Data System (ADS)

    Mann, Y.; Peretz, Y.; Mitchell, Harvey B.

    2001-09-01

    Most of the current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes in an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified to nearly double the reduction in bit rate, to 15%.
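The idea behind intercolor error prediction can be sketched as follows: each plane is first predicted with the JPEG-LS median edge detector (MED), and the red-plane residual then serves as a predictor for the green and blue residuals, leaving smaller values for the entropy coder. This is a simplified, illustrative reading of the method, not the authors' coder.

```python
import numpy as np

def med_residuals(plane):
    """JPEG-LS median-edge-detector (MED) prediction residuals
    (wrap-around borders used for brevity)."""
    p = plane.astype(np.int32)
    a = np.roll(p, 1, axis=1)   # left neighbour
    b = np.roll(p, 1, axis=0)   # upper neighbour
    c = np.roll(a, 1, axis=0)   # upper-left neighbour
    pred = np.where(c >= np.maximum(a, b), np.minimum(a, b),
           np.where(c <= np.minimum(a, b), np.maximum(a, b), a + b - c))
    return p - pred

def iep_residuals(r, g, b):
    """Intercolor error prediction (sketch): the red-plane residual
    predicts the green and blue residuals."""
    er, eg, eb = med_residuals(r), med_residuals(g), med_residuals(b)
    return er, eg - er, eb - er

# Synthetic planes with strong interspectral correlation
rng = np.random.default_rng(2)
r = rng.integers(0, 256, (64, 64), dtype=np.uint8)
g = np.clip(r.astype(np.int32) + rng.integers(-2, 3, (64, 64)), 0, 255).astype(np.uint8)
b = np.clip(r.astype(np.int32) + rng.integers(-2, 3, (64, 64)), 0, 255).astype(np.uint8)
er, dg, db = iep_residuals(r, g, b)
```

When the planes are correlated, the intercolor differences `dg` and `db` have much smaller mean magnitude than the plain MED residuals, which is what lowers the entropy-coded bit rate.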

  16. Hypertext-based computer vision teaching packages

    NASA Astrophysics Data System (ADS)

    Marshall, A. David

    1994-10-01

    The World Wide Web Initiative has provided a means for providing hypertext and multimedia based information across the whole INTERNET. Many applications have been developed on such http servers. At Cardiff we have developed a http hypertext based multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information ranging from the provision of teaching modules, on-line documentation and timetables for departmental activities to more light-hearted hobby interests. One important and novel development of the server has been the development of courseware facilities. This ranges from the provision of on-line lecture notes, exercises and their solutions to more interactive teaching packages. A variety of disciplines have benefitted, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics and Parallel Computing. This paper will address the issues of the implementation of the Computer Vision and Image Processing packages, the advantages gained from using a hypertext based system, and practical experiences of using the packages in a class environment. The paper addresses issues of how best to provide information in such a hypertext based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed facilitates a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper will also detail many future developments we see possible. One of the key points raised in the paper is that Mosaic's hypertext language (HTML) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed. This provides a powerful suite of utilities that can be exploited to develop many packages.

  17. Daily Planet Imagery: GIBS MODIS Products on ArcGIS Online

    NASA Astrophysics Data System (ADS)

    Plesea, L.

    2015-12-01

The NASA EOSDIS Global Imagery Browse Services (GIBS) is rapidly becoming an invaluable GIS resource for the science community and for the public at large. Reliable, fast access to historical as well as near-real-time georeferenced images forms a solid basis on which many innovative applications and projects can be built. Esri has recognized the value of this effort and is a GIBS user and collaborator. To enable the use of GIBS services within the ArcGIS ecosystem, Esri has built a GIBS reflector server at http://modis.arcgis.com, which offers the facilities of a time-enabled Mosaic Service on top of the GIBS-provided images. Currently the MODIS reflectance products are supported by this mosaic service; possibilities for handling other GIBS products are being explored. The reflector service is deployed on the Amazon Elastic Compute Cloud platform and is freely available to end users. Because of the excellent response time from GIBS, image tiles do not have to be stored by the Esri mosaic server; all needed data are retrieved directly from GIBS on demand, continuously reflecting the state of GIBS and greatly simplifying the maintenance of the service. Remote data access is achieved using the Geospatial Data Abstraction Library (GDAL) Tiled Web Map Server (TWMS) driver. Response latency is usually under one second, making it easy to interact with the data. The MODIS imagery has proven to be among the most popular on the ArcGIS Online platform, where it is frequently used to provide temporal context to maps or, by itself, to tell a compelling story.
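The tiled access pattern that such a reflector relies on can be illustrated with the standard slippy-map addressing arithmetic: a lon/lat position maps deterministically to a tile index at each zoom level, so tiles can be fetched on demand rather than stored. This is a generic sketch of the tiling idea, not GIBS's or the reflector's actual API; the URL template below is hypothetical.

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Convert a WGS84 lon/lat to Web Mercator tile indices (x, y) at a zoom
    level, using the common slippy-map scheme. Shown only to illustrate how a
    tiled web map client addresses individual image tiles."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tile_url(template, zoom, x, y):
    # Hypothetical URL template; real GIBS/TWMS endpoints differ.
    return template.format(z=zoom, x=x, y=y)

if __name__ == "__main__":
    x, y = lonlat_to_tile(0.0, 0.0, 2)
    print(tile_url("https://example.com/{z}/{x}/{y}.jpg", 2, x, y))
```

Because the mapping is a pure function of position and zoom, a reflector can translate any incoming tile request into an upstream fetch without keeping local state, which is exactly what makes the stateless, cache-free design described above workable.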

  18. Lossless data embedding for all image formats

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2002-04-01

Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This can be a very important property whenever serious concerns over image quality and artifact visibility arise, such as for medical images (due to legal reasons) or for military images and images used as evidence in court that may be viewed after enhancement and zooming. We formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy. We use the general principles as guidelines for designing efficient, simple, and high-capacity lossless embedding methods for the three most common image format paradigms: raw, uncompressed formats (BMP); lossy or transform-based formats (JPEG); and palette formats (GIF, PNG). We close the paper with examples of how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of non-trivial tasks, including elegant lossless authentication using fragile watermarks. A note on terminology: some authors have coined the terms erasable, removable, reversible, invertible, and distortion-free for the same concept.
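The core principle behind one family of lossless-embedding methods — losslessly compress a redundant subset of the image and store the payload in the space saved, so the original can be restored bit-exactly — can be sketched as a toy. The byte array below stands in for an extracted LSB plane; this is an illustrative sketch of the general idea, not the paper's exact algorithm.

```python
import zlib

def embed_lossless(lsb_bytes: bytes, payload: bytes) -> bytes:
    """Compress the redundant subset, then pack [lengths][compressed][payload]
    back into the same number of bytes. Fully invertible by construction."""
    packed = zlib.compress(lsb_bytes, 9)
    if len(packed) + len(payload) + 8 > len(lsb_bytes):
        raise ValueError("subset not redundant enough to hold the payload")
    header = len(packed).to_bytes(4, "big") + len(payload).to_bytes(4, "big")
    body = header + packed + payload
    return body + bytes(len(lsb_bytes) - len(body))  # pad to original size

def extract_and_restore(stego: bytes):
    """Recover both the payload and the exact original subset."""
    n_packed = int.from_bytes(stego[:4], "big")
    n_payload = int.from_bytes(stego[4:8], "big")
    packed = stego[8:8 + n_packed]
    payload = stego[8 + n_packed:8 + n_packed + n_payload]
    return zlib.decompress(packed), payload

original = bytes(256)  # a highly redundant stand-in for an LSB plane
stego = embed_lossless(original, b"watermark")
restored, msg = extract_and_restore(stego)
assert restored == original and msg == b"watermark"
```

The capacity of such a scheme is exactly the compression gain on the chosen subset, which is why it only works where the cover object has redundancy — the property the abstract emphasizes.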

  19. Peer-to-peer architecture for multi-departmental distributed PACS

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Heuberger, Joris; Pysher, Lance; Ratib, Osman

    2006-03-01

We have elected to explore peer-to-peer technology as an alternative to a centralized PACS architecture, given the increasing requirement for wide access to images inside and outside the radiology department. The goal is to allow users across the enterprise to access any study at any time without the need for prefetching or routing of images from a central archive. Images can be accessed between different workstations and local storage nodes. We implemented "Bonjour," a remote file access technology developed by Apple that allows applications to share data and files remotely with optimized data access and transfer. Our open-source image display platform, OsiriX, was adapted so that the local DICOM images indexed in each workstation's local SQL database can be accessed directly from any other OsiriX workstation over the network. A server version of the OsiriX Core Data database also allows distributed archive servers to be accessed in the same way. The infrastructure implemented allows fast and efficient access to any image, anywhere, at any time, independently of the actual physical location of the data. It also benefits from the performance of distributed low-cost, high-capacity storage servers, which can provide efficient caching of PACS data that was found to be 10 to 20 times faster than accessing the same data from the central PACS archive. It is particularly suitable for large hospitals and academic environments where clinical conferences, interdisciplinary discussions, and successive sessions of image processing are often part of complex workflows in patient management and decision making.

  20. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

Aiming at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparsity, and the real-time decoder linearly reconstructs each image block through a projection matrix learned under the Minimum Mean Square Error (MMSE) criterion. Both the encoder and the decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. The proposed compressive image coding is therefore a potential energy-efficient scheme for the Green IoT.
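The encode/decode shape of block-based CS — a linear measurement y = Φx at the sensor and a linear reconstruction x̂ = Py at the decoder — can be sketched on a toy block. The measurement matrix Φ and decoder P below are purely illustrative (the paper's encoder chooses the number of measurements adaptively per block, and its P is learned under the MMSE criterion over training data); here P is simply Φ's transpose to keep the sketch self-contained.

```python
def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Toy block of N = 4 pixels, M = 2 measurements. Phi is a hypothetical,
# fixed measurement matrix used only to show the data flow.
Phi = [[1, 0, 0, 0],
       [0, 0, 1, 0]]
P = transpose(Phi)  # simplest possible linear decoder; the paper learns P

x = [10, 10, 20, 20]   # one flattened image block
y = matvec(Phi, x)     # low-complexity encoding: y = Phi x
x_hat = matvec(P, y)   # low-complexity linear decoding: x_hat = P y
print(y, x_hat)
```

The point of the sketch is the complexity argument in the abstract: both sides reduce to a single matrix-vector product per block, which is what keeps the energy cost low on constrained IoT nodes.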

  1. A semi-blind logo watermarking scheme for color images by comparison and modification of DFT coefficients

    NASA Astrophysics Data System (ADS)

    Kusyk, Janusz; Eskicioglu, Ahmet M.

    2005-10-01

Digital watermarking is considered to be a major technology for the protection of multimedia data. Some of the important applications are broadcast monitoring, copyright protection, and access control. In this paper, we present a semi-blind watermarking scheme for embedding a logo in color images using the DFT domain. After computing the DFT of the luminance layer of the cover image, the magnitudes of the DFT coefficients are compared and modified. A given watermark is embedded in three frequency bands: low, middle, and high. Our experiments show that the watermarks extracted from the lower frequencies have the best visual quality under low-pass filtering, added Gaussian noise, JPEG compression, resizing, rotation, and scaling, while the watermarks extracted from the higher frequencies have the best visual quality under cropping, intensity adjustment, histogram equalization, and gamma correction. Extractions from the fragmented and translated image are identical to extractions from the unattacked watermarked image. Collusion and rewatermarking attacks do not provide the attacker with useful tools.
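The comparison-and-modification idea can be sketched on a 1-D signal standing in for one row of the luminance layer: a bit is encoded by forcing an ordering between the magnitudes of a chosen pair of DFT coefficients while keeping their phases. The pair indices, margin, and helper names are illustrative assumptions, not the paper's parameters.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def embed_bit(X, i, j, bit, margin=2.0):
    """Encode one bit in the ordering |X[i]| > |X[j]| (bit 1) or the reverse
    (bit 0), keeping phases; conjugate-symmetric partners are updated too so
    the inverse DFT stays real-valued."""
    N = len(X)
    mi, mj = abs(X[i]), abs(X[j])
    hi = max(mi, mj) + margin
    lo = max(min(mi, mj) - margin, 0.0)
    mag_i, mag_j = (hi, lo) if bit == 1 else (lo, hi)
    X[i] = cmath.rect(mag_i, cmath.phase(X[i]))
    X[j] = cmath.rect(mag_j, cmath.phase(X[j]))
    X[N - i] = X[i].conjugate()
    X[N - j] = X[j].conjugate()

def extract_bit(X, i, j):
    return 1 if abs(X[i]) > abs(X[j]) else 0

signal = [50.0, 52.0, 49.0, 51.0, 50.0, 48.0, 53.0, 50.0]  # stand-in luminance row
X = dft(signal)
embed_bit(X, 1, 2, 1)
watermarked = idft(X)
assert extract_bit(dft(watermarked), 1, 2) == 1
```

Extraction needs no side information beyond the coefficient pair, which is what makes magnitude-ordering schemes semi-blind; robustness then depends on which frequency band the pair sits in, as the experiments above describe.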

  2. SEMG signal compression based on two-dimensional techniques.

    PubMed

    de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino

    2016-04-18

Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors tuned for SEMG data, or employ preprocessing techniques before the two-dimensional encoding procedure in order to provide a suitable data organization whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework that directly tackles SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The encoder was modified to work efficiently with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the High Efficiency Video Coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference [Formula: see text] compression factor figures, for low and high compression factors, respectively. Regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors; the combination of SbS and HEVC proved competitive for high compression factors; and JPEG2000 combined with PDS provided good performance allied with low computational complexity, all in terms of percent root-mean-square difference [Formula: see text] compression factor. The proposed schemes are effective, and the modified MMP algorithm in particular is an interesting alternative to traditional SEMG encoders for isometric signals. Moreover, the approach based on off-the-shelf image encoders has the potential for fast implementation and dissemination, given that many embedded systems may already provide such encoders in the underlying hardware/software architecture.
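The reassembly step shared by these two-dimensional schemes — cutting the 1-D record into segments and stacking them as columns of a matrix so an image encoder can exploit both intra-segment (vertical) and inter-segment (horizontal) correlation — can be sketched directly. The segment length and layout below are illustrative; the paper's SbS preprocessing would additionally reorder the segments by similarity before encoding.

```python
def signal_to_matrix(signal, seg_len):
    """Reassemble a 1-D SEMG record into a 2-D matrix, one segment per
    column: row r holds sample r of every segment, so neighbouring columns
    carry correlated waveforms for a 2-D encoder to exploit."""
    n_seg = len(signal) // seg_len
    cols = [signal[c * seg_len:(c + 1) * seg_len] for c in range(n_seg)]
    return [[cols[c][r] for c in range(n_seg)] for r in range(seg_len)]

sig = list(range(12))         # stand-in for an SEMG record
m = signal_to_matrix(sig, 4)  # 4 samples per segment -> a 4x3 matrix
print(m)
```

The inverse mapping (reading the matrix back column by column) restores the record exactly, so all the loss in such a pipeline comes from the image or video codec itself, never from the reshaping.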

  3. Post-Hurricane Ike coastal oblique aerial photographs collected along the Alabama, Mississippi, and Louisiana barrier islands and the north Texas coast, September 14-15, 2008

    USGS Publications Warehouse

    Morgan, Karen L. M.; Krohn, M. Dennis; Guy, Kristy K.

    2016-04-28

The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 14-15, 2008, the USGS conducted an oblique aerial photographic survey along the Alabama, Mississippi, and Louisiana barrier islands and the north Texas coast, aboard a Beechcraft Super King Air 200 (aircraft) at an altitude of 500 feet (ft) and approximately 1,200 ft offshore. This mission was flown to collect post-Hurricane Ike data for assessing incremental changes in the beach and nearshore area since the last survey, flown on September 9-10, 2008, and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. The KML file can be found in the kml folder.
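ExifTool writes the GPS latitude and longitude into each photo header in the EXIF convention of degrees, minutes, and seconds plus an N/S/E/W reference; converting that to signed decimal degrees (for example, to plot the photo locations against the KML markers) is a small, mechanical step. The coordinate below is hypothetical, chosen only to resemble a position near the Mississippi barrier islands.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style GPS coordinate (degrees/minutes/seconds plus an
    N/S/E/W reference) to signed decimal degrees; south and west are negative."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Hypothetical aircraft position (not taken from the report's navigation data):
lat = dms_to_decimal(30, 14, 24.0, "N")
lon = dms_to_decimal(88, 58, 48.0, "W")
print(round(lat, 4), round(lon, 4))
```

As the report notes, such coordinates locate the aircraft at the moment of exposure, not any feature visible in the oblique photograph itself.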

  4. Post-Hurricane Isaac coastal oblique aerial photographs collected along the Alabama, Mississippi, and Louisiana barrier islands, September 2–3, 2012

    USGS Publications Warehouse

Morgan, Karen L. M.; Westphal, Karen A.

    2016-04-21

The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 2-3, 2012, the USGS conducted an oblique aerial photographic survey along the Alabama, Mississippi, and Louisiana barrier islands aboard a Cessna 172 (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect post-Hurricane Isaac data for assessing incremental changes in the beach and nearshore area since the last survey, flown in September 2008 (central Louisiana barrier islands) and June 2011 (Dauphin Island, Alabama, to Breton Island, Louisiana), and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files and can be found in the kml folder.

  5. ESO and NSF Sign Agreement on ALMA

    NASA Astrophysics Data System (ADS)

    2003-02-01

Green Light for World's Most Powerful Radio Observatory On February 25, 2003, the European Southern Observatory (ESO) and the US National Science Foundation (NSF) are signing a historic agreement to construct and operate the world's largest and most powerful radio telescope, operating at millimeter and sub-millimeter wavelengths. The Director General of ESO, Dr. Catherine Cesarsky, and the Director of the NSF, Dr. Rita Colwell, act for their respective organizations. Known as the Atacama Large Millimeter Array (ALMA), the future facility will encompass sixty-four interconnected 12-meter antennae at a unique, high-altitude site at Chajnantor in the Atacama region of northern Chile. ALMA is a joint project between Europe and North America. In Europe, ESO is leading on behalf of its ten member countries and Spain. In North America, the NSF also acts for the National Research Council of Canada and executes the project through the National Radio Astronomy Observatory (NRAO), operated by Associated Universities, Inc. (AUI). The conclusion of the ESO-NSF Agreement now gives the final green light for the ALMA project. The total cost of approximately 650 million Euro (or US Dollars) is shared equally between the two partners. Dr. Cesarsky is excited: "This agreement signifies the start of a great project of contemporary astronomy and astrophysics. Representing Europe, and in collaboration with many laboratories and institutes on this continent, we together look forward towards wonderful research projects. With ALMA we may learn what the earliest galaxies in the Universe really looked like, to mention but one of the many eagerly awaited opportunities with this marvellous facility". "With this agreement, we usher in a new age of research in astronomy" says Dr. Colwell.
"By working together in this truly global partnership, the international astronomy community will be able to ensure the research capabilities needed to meet the long-term demands of our scientific enterprise, and that we will be able to study and understand our universe in ways that have previously been beyond our vision". The recent Presidential decree from Chile for AUI and the agreement signed in late 2002 between ESO and the Government of the Republic of Chile (cf. ESO PR 18/02) recognize the interest that the ALMA Project has for Chile, as it will deepen and strengthen the cooperation in scientific and technological matters between the parties. A joint ALMA Board has been established which oversees the realisation of the ALMA project via the management structure. This Board meets for the first time on February 24-25, 2003, at NSF in Washington and will witness this historic event. ALMA: Imaging the Light from Cosmic Dawn. ESO PR Photos 06a-d/03 accompany this release (each available as Preview, Normal, and Hi-Res JPEG). Captions: PR Photo 06a/03 shows an artist's view of the Atacama Large Millimeter Array (ALMA), with 64 12-m antennae. PR Photo 06b/03 is another such view, with the array arranged in a compact configuration at the high-altitude Chajnantor site. The ALMA VertexRSI prototype antenna is shown in PR Photo 06c/03 at the Antenna Test Facility (ATF) at the NRAO Very Large Array (VLA) site near Socorro (New Mexico, USA).
The future ALMA site at Llano de Chajnantor, at 5,000 metres altitude some 40 km east of the village of San Pedro de Atacama (Chile), is seen in PR Photo 06d/03 - this view was obtained at 11 hrs in the morning on a crisp and clear autumn day (more views of this site are available at the Chajnantor Photo Gallery). The Atacama Large Millimeter Array (ALMA) will be one of astronomy's most powerful telescopes - providing unprecedented imaging capabilities and sensitivity in the corresponding wavelength range, many orders of magnitude greater than anything of its kind today. ALMA will be an array of 64 antennae that will work together as one telescope to study millimeter and sub-millimeter wavelength radiation from space. This radiation crosses the critical boundary between infrared and microwave radiation and holds the key to understanding such processes as planet and star formation, the formation of early galaxies and galaxy clusters, and the formation of organic and other molecules in space. "ALMA will be one of astronomy's premier tools for studying the universe" says Nobel Laureate Riccardo Giacconi, President of AUI (and former ESO Director General, 1993-1999). "The entire astronomical community is anxious to have the unprecedented power and resolution that ALMA will provide". The President of the ESO Council, Professor Piet van der Kruit, agrees: "ALMA heralds a break-through in sub-millimeter and millimeter astronomy, allowing some of the most penetrating studies of the Universe ever made. It is safe to predict that there will be exciting scientific surprises when ALMA enters into operation". What is millimeter and sub-millimeter wavelength astronomy? Astronomers learn about objects in space by studying the energy emitted by those objects. Our Sun and the other stars throughout the Universe emit visible light. But these objects also emit other kinds of light waves, such as X-rays, infrared radiation, and radio waves.
Some objects emit very little or no visible light, yet are strong sources at other wavelengths in the electromagnetic spectrum. Much of the energy in the Universe is present in the sub-millimeter and millimeter portion of the spectrum. This energy comes from the cold dust mixed with gas in interstellar space. It also comes from distant galaxies that formed many billions of years ago at the edges of the known universe. With ALMA, astronomers will have a uniquely powerful facility with access to this remarkable portion of the spectrum and hence, new and wonderful opportunities to learn more about those objects. Current observatories simply do not have anywhere near the necessary sensitivity and resolution to unlock the secrets that abundant sub-millimeter and millimeter wavelength radiation can reveal. It will take the unparalleled power of ALMA to fully study the cosmic emission at this wavelength and better understand the nature of the universe. Scientists from all over the world will use ALMA. They will compete for observing time by submitting proposals, which will be judged by a group of their peers on the basis of scientific merit. ALMA's unique capabilities ALMA's ability to detect remarkably faint sub-millimeter and millimeter wavelength emission and to create high-resolution images of the source of that emission gives it capabilities not found in any other astronomical instruments. ALMA will therefore be able to study phenomena previously out of reach to astronomers and astrophysicists, such as: * Very young galaxies forming stars at the earliest times in cosmic history; * New planets forming around young stars in our galaxy, the Milky Way; * The birth of new stars in spinning clouds of gas and dust; and * Interstellar clouds of gas and dust that are the nurseries of complex molecules and even organic chemicals that form the building blocks of life. How will ALMA work? 
All of ALMA's 64 antennae will work in concert, taking quick "snapshots" or long-term exposures of astronomical objects. Cosmic radiation from these objects will be reflected from the surface of each antenna and focussed onto highly sensitive receivers cooled to just a few degrees above absolute zero in order to suppress undesired "noise" from the surroundings. There the signals will be amplified many times, digitized, and then sent along underground fiber-optic cables to a large signal processor in the central control building. This specialized computer, called a correlator - running at 16,000 million-million operations per second - will combine all of the data from the 64 antennae to make images of remarkable quality. The extraordinary ALMA site Since atmospheric water vapor absorbs millimeter and (especially) sub-millimeter waves, ALMA must be constructed at a very high altitude in a very dry region of the earth. Extensive tests showed that the sky above the Atacama Desert of Chile has the excellent clarity and stability essential for ALMA. That is why ALMA will be built there, on Llano de Chajnantor at an altitude of 5,000 metres in the Chilean Andes. A series of views of this site, also in high resolution suitable for reproduction, is available at the Chajnantor Photo Gallery. Timeline for ALMA:
June 1998: Phase 1 (Research and Development)
June 1999: European/American Memorandum of Understanding
February 2003: Signature of the bilateral Agreement
2004: Tests of the Prototype System
2007: Initial scientific operation of a partially completed array
2011: End of construction of the array

  6. Next VLT Instrument Ready for the Astronomers

    NASA Astrophysics Data System (ADS)

    2000-02-01

FORS2 Commissioning Period Successfully Terminated The commissioning of the FORS2 multi-mode astronomical instrument at KUEYEN, the second FOcal Reducer/low dispersion Spectrograph at the ESO Very Large Telescope, was successfully finished today. This important work - which may be likened to the test driving of a new car model - took place during two periods, from October 22 to November 21, 1999, and January 22 to February 8, 2000. The overall goal was to thoroughly test the functioning of the new instrument, verify its conformity to specifications, and optimize its operation at the telescope. FORS2 is now ready to be handed over to the astronomers on April 1, 2000. Observing time for a six-month period until October 1 has already been allocated to a large number of research programmes. Two of the images that were obtained with FORS2 during the commissioning period are shown here. An early report about this instrument is available as ESO PR 17/99. The many modes of FORS2 The FORS Commissioning Team carried out a comprehensive test programme for all observing modes. These tests were done with "observation blocks (OBs)" that describe the set-up of the instrument and telescope for each exposure in all details, e.g., position in the sky of the object to be observed, filters, exposure time, etc. Whenever an OB is "activated" from the control console, the corresponding observation is automatically performed. Additional information about the VLT Data Flow System is available in ESO PR 10/99. The FORS2 observing modes include direct imaging, long-slit and multi-object spectroscopy, exactly as in its twin, FORS1 at ANTU. In addition, FORS2 contains the "Mask Exchange Unit", a motorized magazine that holds 10 masks made of thin metal plates into which the slits are cut by means of a laser.
The advantage of this particular observing method is that more spectra (of more objects) can be taken with a single exposure (up to approximately 80) and that the shape of the slits can be adapted to the shape of the objects, thus increasing the scientific return. Results obtained so far look very promising. To further increase the scientific power of the FORS2 instrument in the spectroscopic mode, a number of new optical dispersion elements ("grisms", i.e., combinations of a grating and a glass prism) have been added. They give the scientists a greater choice of spectral resolution and wavelength range. Another mode that is new to FORS2 is the high time resolution mode. It was demonstrated with the Crab pulsar, cf. ESO PR 17/99, and promises very interesting scientific returns. Images from the FORS2 Commissioning Phase The two composite images shown below were obtained during the FORS2 commissioning work. They are based on three exposures through different optical broadband filters (B: 429 nm central wavelength, 88 nm FWHM (Full Width at Half Maximum); V: 554/111 nm; R: 655/165 nm). All were taken with the 2048 x 2048-pixel CCD detector with a field of view of 6.8 x 6.8 arcmin²; each pixel measures 24 µm square. They were flatfield-corrected and bias-subtracted, scaled in intensity, and some cosmetic cleaning was performed, e.g., removal of bad columns on the CCD. North is up and East is left. Tarantula Nebula in the Large Magellanic Cloud ESO Press Photo 05a/00 [available as Preview, Normal, and Full-Res JPEG] The Tarantula Nebula in the Large Magellanic Cloud, as obtained with FORS2 at KUEYEN during the recent Commissioning period. It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (30 sec exposure, image quality 0.75 arcsec; here rendered in blue colour), V (15 sec, 0.70 arcsec; green) and R (10 sec, 0.60 arcsec; red).
The full-resolution version of this photo retains the original pixels. 30 Doradus, also known as the Tarantula Nebula or NGC 2070, is located in the Large Magellanic Cloud (LMC), some 170,000 light-years away. It is one of the largest known star-forming regions in the Local Group of Galaxies. It was first catalogued as a star, but then recognized to be a nebula by the French astronomer A. Lacaille in 1751-52. The Tarantula Nebula is the only extragalactic nebula which can be seen with the unaided eye. It contains at its centre the open stellar cluster R 136, with many of the largest, hottest, and most massive stars known. Radio Galaxy Centaurus A ESO Press Photo 05b/00 [available as Preview, Normal, and Full-Res JPEG] The radio galaxy Centaurus A, as obtained with FORS2 at KUEYEN during the recent Commissioning period. It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (300 sec exposure, image quality 0.60 arcsec; here rendered in blue colour), V (240 sec, 0.60 arcsec; green) and R (240 sec, 0.55 arcsec; red). The full-resolution version of this photo retains the original pixels. ESO Press Photo 05c/00 [available as Preview and Normal JPEG] An area north-west of the centre of Centaurus A with a detailed view of the dust lane and clusters of luminous blue stars. The normal version of this photo retains the original pixels. The new FORS2 image of Centaurus A, also known as NGC 5128, is an example of how frontier science can be combined with esthetic aspects. This galaxy is a most interesting object for the present attempts to understand active galaxies. It is being investigated by means of observations in all spectral regions, from radio via infrared and optical wavelengths to X- and gamma-rays. It is one of the most extensively studied objects in the southern sky.
FORS2, with its large field of view and excellent optical resolution, makes it possible to study the global context of the active region in Centaurus A in great detail. Note, for instance, the great number of massive and luminous blue stars that are well resolved individually in the upper right and lower left of PR Photo 05b/00. Centaurus A is one of the foremost examples of a radio-loud active galactic nucleus (AGN). In images obtained at optical wavelengths, thick dust layers almost completely obscure the galaxy's centre. This structure was first reported by Sir John Herschel in 1847. Until 1949, NGC 5128 was thought to be a strange object in the Milky Way, but it was then identified as a powerful radio galaxy and designated Centaurus A. The distance is about 10-13 million light-years (3-4 Mpc) and the apparent visual magnitude is about 8, some 5 times too faint to be seen with the unaided eye. There is strong evidence that Centaurus A is a merger of an elliptical with a spiral galaxy, since elliptical galaxies would not have had enough dust and gas to form the young, blue stars seen along the edges of the dust lane. The core of Centaurus A is the smallest known extragalactic radio source, only 10 light-days across. A jet of high-energy particles from this centre is observed in radio and X-ray images. The core probably contains a supermassive black hole with a mass of about 100 million solar masses. This is the caption to ESO PR Photos 05a-c/00. They may be reproduced if credit is given to the European Southern Observatory.

  7. Virtual reality for spherical images

    NASA Astrophysics Data System (ADS)

    Pilarczyk, Rafal; Skarbek, Władysław

    2017-08-01

This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows a 360° virtual reality video player to be created using standard OpenGL ES rendering methods. It provides networking methods for connecting to a web server that acts as the application's resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods to create an event-driven process that renders additional content based on the video timestamp and the virtual reality head point of view.
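The event-driven rendering step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' framework: the manifest layout and field names (`events`, `id`, `start`, `end`) are assumptions standing in for whatever JSON the paper's web server actually returns.

```python
import json

# Hypothetical event manifest, as the framework might receive it from the
# web server's JSON response (all names and fields are assumptions).
MANIFEST = json.loads("""
{
  "events": [
    {"id": "intro-overlay", "start": 0.0,  "end": 5.0},
    {"id": "caption-1",     "start": 12.5, "end": 20.0},
    {"id": "quiz-prompt",   "start": 45.0, "end": 60.0}
  ]
}
""")

def active_events(manifest, timestamp):
    """Return the ids of overlay events to render at the given video timestamp."""
    return [e["id"] for e in manifest["events"]
            if e["start"] <= timestamp < e["end"]]
```

On each rendered frame the player would call `active_events(MANIFEST, player_time)` and draw the returned overlays.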

  8. Content-based image retrieval in medical applications for picture archiving and communication systems

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.; Guld, Mark O.; Thies, Christian; Fischer, Benedikt; Keysers, Daniel; Kohnen, Michael; Schubert, Henning; Wein, Berthold B.

    2003-05-01

Picture archiving and communication systems (PACS) aim to efficiently provide the radiologists with all images in a suitable quality for diagnosis. Modern standards for digital imaging and communication in medicine (DICOM) comprise alphanumerical descriptions of study, patient, and technical parameters. Currently, this is the only information used to select relevant images within PACS. Since textual descriptions insufficiently describe the great variety of details in medical images, content-based image retrieval (CBIR) is expected to have a strong impact when integrated into PACS. However, existing CBIR approaches usually are limited to a distinct modality, organ, or diagnostic study. In this state-of-the-art report, we present the first results of implementing a general approach to content-based image retrieval in medical applications (IRMA) and discuss its integration into PACS environments. Usually, a PACS consists of a DICOM image server and several DICOM-compliant workstations, which are used by radiologists for reading the images and reporting the findings. Basic IRMA components are the relational database, the scheduler, and the web server, which all may be installed on the DICOM image server, and the IRMA daemons running on distributed machines, e.g., the radiologists' workstations. These workstations can also host the web-based front-ends of IRMA applications. Integrating CBIR and PACS, a special focus is put on (a) location and access transparency for data, methods, and experiments, (b) replication transparency for methods in development, (c) concurrency transparency for job processing and feature extraction, (d) system transparency at method implementation time, and (e) job distribution transparency when issuing a query. Transparent integration will have a certain impact on diagnostic quality supporting both evidence-based medicine and case-based reasoning.

  9. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    NASA Astrophysics Data System (ADS)

    Tokareva, Victoria

    2018-04-01

New-generation medicine demands a better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement a special software infrastructure for using it in everyday clinical practice: so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task in medical informatics today. The paper discusses the architecture of a distributed PACS server for processing large, high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
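The MapReduce idea proposed for server-side reconstruction can be illustrated with a toy, Hadoop-free sketch: mappers each process one strip of raw detector rows, and a reducer merges the partial results into the full image. The "reconstruction" here is a placeholder doubling step, not any real reconstruction algorithm.

```python
from functools import reduce

def mapper(strip):
    """Map step: turn one strip of (row_index, raw_row) pairs into
    (row_index, processed_row) pairs. Doubling stands in for reconstruction."""
    return [(idx, [2 * v for v in row]) for idx, row in strip]

def reducer(acc, pairs):
    """Reduce step: merge one mapper's partial output into the image dict."""
    acc.update(dict(pairs))
    return acc

raw = [(i, [i, i + 1]) for i in range(4)]   # 4 raw detector rows
strips = [raw[:2], raw[2:]]                  # split across 2 mappers
partials = [mapper(s) for s in strips]       # map phase (parallelisable)
image = reduce(reducer, partials, {})        # reduce phase: assembled image
```

In Hadoop the map phase would run on distributed nodes holding the raw data, and the shuffle/reduce would assemble the reconstructed image; the dataflow is the same.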

  10. Face detection on distorted images using perceptual quality-aware features

    NASA Astrophysics Data System (ADS)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.

  11. NOAA GOES Geostationary Satellite Server

    Science.gov Websites

The server provides full-size infrared, visible, and water-vapor GOES images, with MPEG animations and loops, for the western CONUS, as well as infrared (including color-enhanced) and visible image loops for Alaska and Hawaii.

  12. Space Images for NASA/JPL

    NASA Technical Reports Server (NTRS)

    Boggs, Karen; Gutheinz, Sandy C.; Watanabe, Susan M.; Oks, Boris; Arca, Jeremy M.; Stanboli, Alice; Peez, Martin; Whatmore, Rebecca; Kang, Minliang; Espinoza, Luis A.

    2010-01-01

    Space Images for NASA/JPL is an Apple iPhone application that allows the general public to access featured images from the Jet Propulsion Laboratory (JPL). A back-end infrastructure stores, tracks, and retrieves space images from the JPL Photojournal Web server, and catalogs the information into a streamlined rating infrastructure.

  13. An analysis of absorbing image on the Indonesian text by using color matching

    NASA Astrophysics Data System (ADS)

    Hutagalung, G. A.; Tulus; Iryanto; Lubis, Y. F. A.; Khairani, M.; Suriati

    2018-03-01

Messages are inserted into an image character by character, distributed across some of its pixels. One way to insert a message is to add the ASCII decimal value of each character to the decimal value of a primary color of the image. Messages are composed of letters, numbers, or symbols, and the number and frequency of the letters used differ from word to word and from language to language. In Indonesian, the letter A is the most widely used, and the usage of the other letters strongly affects the clarity of a message or text presented in that language. This study aims to determine an image's capacity to absorb a message written in Indonesian and to identify the factors that cause this capacity to vary. The data used in this study consist of several images in JPG or JPEG format, obtained from image-drawing software or image-capture hardware, at different image sizes. Test results were obtained on four samples of a color image at a size of 1200 x 1920.
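The per-character insertion described above is commonly realised as least-significant-bit (LSB) substitution. The sketch below is a generic illustration of that idea, not the paper's exact method: each character's eight ASCII bits overwrite the LSBs of eight pixel color values, so capacity is one character per eight values and no value changes by more than 1.

```python
def embed_message(pixels, message):
    """Embed each character's 8 ASCII bits (MSB first) into the LSBs of
    8 consecutive pixel color values. Returns a new pixel list."""
    bits = [(ord(c) >> i) & 1 for c in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "image too small for the message"
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b      # overwrite the least significant bit
    return out

def extract_message(pixels, length):
    """Read `length` characters back out of the LSBs."""
    chars = []
    for c in range(length):
        byte = 0
        for v in pixels[c * 8:(c + 1) * 8]:
            byte = (byte << 1) | (v & 1)
        chars.append(chr(byte))
    return "".join(chars)
```

The capacity question the study asks then reduces to how many 8-value groups the image offers and how the message's letter-frequency profile interacts with them.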

  14. A Forceful Demonstration by FORS

    NASA Astrophysics Data System (ADS)

    1998-09-01

New VLT Instrument Provides Impressive Images Following a tight schedule, the ESO Very Large Telescope (VLT) project forges ahead - full operative readiness of the first of the four 8.2-m Unit Telescopes will be reached early next year. On September 15, 1998, another crucial milestone was successfully passed on-time and within budget. Just a few days after having been mounted for the first time at the first 8.2-m VLT Unit Telescope (UT1), the first of a powerful complement of complex scientific instruments, FORS1 (FOcal Reducer and Spectrograph), saw First Light . Right from the beginning, it obtained some excellent astronomical images. This major event now opens a wealth of new opportunities for European Astronomy. FORS - a technological marvel FORS1, with its future twin (FORS2), is the product of one of the most thorough and advanced technological studies ever made of a ground-based astronomical instrument. This unique facility is now mounted at the Cassegrain focus of the VLT UT1. Despite its significant dimensions, 3 x 1.5 metres and 2.3 tonnes, it appears rather small below the giant 53 m² Zerodur main mirror. Profiting from the large mirror area and the excellent optical properties of the UT1, FORS has been specifically designed to investigate the faintest and most remote objects in the universe. This complex VLT instrument will soon allow European astronomers to look beyond current observational horizons. The FORS instruments are "multi-mode instruments" that may be used in several different observation modes. It is, e.g., possible to take images with two different image scales (magnifications) and spectra at different resolutions may be obtained of individual or multiple objects. Thus, FORS may first detect the images of distant galaxies and immediately thereafter obtain recordings of their spectra. This allows for instance the determination of their stellar content and distances. 
As one of the most powerful astronomical instruments of its kind, FORS1 is a real workhorse for the study of the distant universe. How FORS was built The FORS project is being carried out under ESO contract by a consortium of three German astronomical institutes, namely the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. When this project is concluded, the participating institutes will have invested about 180 man-years of work. The Heidelberg State Observatory was responsible for directing the project, for designing the entire optical system, for developing the components of the imaging, spectroscopic, and polarimetric optics, and for producing the special computer software needed for handling and analysing the measurements obtained with FORS. Moreover, a telescope simulator was built in the shop of the Heidelberg observatory that made it possible to test all major functions of FORS in Europe, before the instrument was shipped to Paranal. The University Observatory of Göttingen performed the design, the construction and the installation of the entire mechanics of FORS. Most of the high-precision parts, in particular the multislit unit, were manufactured in the observatory's fine-mechanical workshops. The procurement of the huge instrument housings and flanges, the computer analysis for mechanical and thermal stability of the sensitive spectrograph and the construction of the handling, maintenance and aligning equipment as well as testing the numerous opto- and electro-mechanical functions were also under the responsibility of this Observatory. 
The University of Munich had the responsibility for the management of the project, the integration and test in the laboratory of the complete instrument, for design and installation of all electronics and electro-mechanics, and for developing and testing the comprehensive software to control FORS in all its parts completely by computers (filter and grism wheels, shutters, multi-object slit units, masks, all optical components, electro motors, encoders etc.). In addition, detailed computer software was provided to prepare the complex astronomical observations with FORS in advance and to monitor the instrument performance by quality checks of the scientific data accumulated. In return for building FORS for the community of European astrophysicists, the scientists in the three institutions of the FORS Consortium have received a certain amount of Guaranteed Observing Time at the VLT. This time will be used for various research projects concerned, among others, with minor bodies in the outer solar system, stars at late stages of their evolution and the clouds of gas they eject, as well as galaxies and quasars at very large distances, thereby permitting a look-back towards the early epoch of the universe. First tests of FORS1 at the VLT UT1: a great success After careful preparation, the FORS consortium has now started the so-called commissioning of the instrument. This comprises the thorough verification of the specified instrument properties at the telescope, checking the correct functioning under software control from the Paranal control room and, at the end of this process, a demonstration that the instrument fulfills its scientific purpose as planned. While performing these tests, the commissioning team at Paranal were able to obtain images of various astronomical objects, some of which are shown here. Two of these were obtained on the night of "FORS First Light". The photos demonstrate some of the impressive possibilities with this new instrument. 
They are based on observations with the FORS standard resolution collimator (field size 6.8 x 6.8 arcmin = 2048 x 2048 pixels; 1 pixel = 0.20 arcsec). Spiral galaxy NGC 1288 ESO PR Photo 37a/98 ESO PR Photo 37a/98 [Preview - JPEG: 800 x 908 pix - 224k] [High-Res - JPEG: 3000 x 3406 pix - 1.5Mb] A colour image of spiral galaxy NGC 1288, obtained on the night of "FORS First Light". The first photo shows a reproduction of a colour composite image of the beautiful spiral galaxy NGC 1288 in the southern constellation Fornax. PR Photo 37a/98 covers the entire field that was imaged on the 2048 x 2048 pixel CCD camera. It is based on CCD frames in different colours that were taken under good seeing conditions during the night of First Light (15 September 1998). The distance to this galaxy is about 300 million light-years; it recedes with a velocity of 4500 km/sec. Its diameter is about 200,000 light-years. Technical information : Photo 37a/98 is based on a composite of three images taken behind three different filters: B (420 nm; 6 min), V (530 nm; 3 min) and I (800 nm; 3 min) during a period of 0.7 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin. North is left; East is down. Distant cluster of galaxies ESO PR Photo 37b/98 ESO PR Photo 37b/98 [Preview - JPEG: 657 x 800 pix - 248k] [High-Res - JPEG: 2465 x 3000 pix - 1.9Mb] A peculiar cluster of galaxies in a sky field near the quasar PB5763 . ESO PR Photo 37c/98 ESO PR Photo 37c/98 [Preview - JPEG: 670 x 800 pix - 272k] [High-Res - JPEG: 2512 x 3000 pix - 1.9Mb] Enlargement from PR Photo 37b/98, showing the peculiar cluster of galaxies in more detail. The next photos are reproduced from a 5-min near-infrared exposure, also obtained during the night of First Light of the FORS1 instrument (September 15, 1998). PR Photo 37b/98 shows a sky field near the quasar PB5763 in which is also seen a peculiar, quite distant cluster of galaxies. 
It consists of a large number of faint and distant galaxies that have not yet been thoroughly investigated. Many other fainter galaxies are seen in other areas, for instance in the right part of the field. This cluster is a good example of a type of object to which much observing time with FORS will be dedicated, once it enters into regular operation. An enlargement of the same field is reproduced in PR Photo 37c/98. It shows the individual members of this cluster of galaxies in more detail. Note in particular the interesting spindle-shaped galaxy that apparently possesses an equatorial ring. There is also a fine spiral galaxy and many fainter galaxies. They may be dwarf members of the cluster or be located in the background at even larger distances. Technical information : PR Photos 37b/98 (negative) and 37c/98 (positive) are based on a monochrome image taken in 0.8 arcsec seeing through a near-infrared (I; 800 nm) filter. The exposure time was 5 minutes and the image was flat-fielded. The fields shown measure 6.8 x 6.8 arcmin and 2.5 x 2.3 arcmin, respectively. North is to the upper left; East is to the lower left. Spiral galaxy NGC 1232 ESO PR Photo 37d/98 ESO PR Photo 37d/98 [Preview - JPEG: 800 x 912 pix - 760k] [High-Res - JPEG: 3000 x 3420 pix - 5.7Mb] A colour image of spiral galaxy NGC 1232, obtained on September 21, 1998. ESO PR Photo 37e/98 ESO PR Photo 37e/98 [Preview - JPEG: 800 x 961 pix - 480k] [High-Res - JPEG: 3000 x 3602 pix - 3.5Mb] Enlargement of central area of PR Photo 37d/98. This spectacular image (Photo 37d/98) of the large spiral galaxy NGC 1232 was obtained on September 21, 1998, during a period of good observing conditions. It is based on three exposures in ultra-violet, blue and red light, respectively. The colours of the different regions are well visible: the central areas (Photo 37e/98) contain older stars of reddish colour, while the spiral arms are populated by young, blue stars and many star-forming regions. 
Note the distorted companion galaxy on the left side of Photo 37d/98, shaped like the Greek letter "theta". NGC 1232 is located 20° south of the celestial equator, in the constellation Eridanus (The River). The distance is about 100 million light-years, but the excellent optical quality of the VLT and FORS allows us to see an incredible wealth of details. At the indicated distance, the edge of the field shown in PR Photo 37d/98 corresponds to about 200,000 light-years, or about twice the size of the Milky Way galaxy. Technical information : PR Photos 37d/98 and 37e/98 are based on a composite of three images taken behind three different filters: U (360 nm; 10 min), B (420 nm; 6 min) and R (600 nm; 2:30 min) during a period of 0.7 arcsec seeing. The fields shown measure 6.8 x 6.8 arcmin and 1.6 x 1.8 arcmin, respectively. North is up; East is to the left. Note: [1] This Press Release is published jointly (in English and German) by the European Southern Observatory, the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. A German version of this Press Release is also available. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.

  15. Watermarking scheme for authentication of compressed image

    NASA Astrophysics Data System (ADS)

    Hsieh, Tsung-Han; Li, Chang-Tsun; Wang, Shuo

    2003-11-01

    As images are commonly transmitted or stored in compressed form such as JPEG, to extend the applicability of our previous work, a new scheme for embedding watermark in compressed domain without resorting to cryptography is proposed. In this work, a target image is first DCT transformed and quantised. Then, all the coefficients are implicitly watermarked in order to minimize the risk of being attacked on the unwatermarked coefficients. The watermarking is done through registering/blending the zero-valued coefficients with a binary sequence to create the watermark and involving the unembedded coefficients during the process of embedding the selected coefficients. The second-order neighbors and the block itself are considered in the process of the watermark embedding in order to thwart different attacks such as cover-up, vector quantisation, and transplantation. The experiments demonstrate the capability of the proposed scheme in thwarting local tampering, geometric transformation such as cropping, and common signal operations such as lowpass filtering.
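The compressed-domain idea above, marking quantised DCT coefficients so that local tampering disturbs a known binary sequence, can be illustrated with a much simpler parity scheme than the paper's registering/blending mechanism. This is a stand-in sketch only, not the authors' algorithm: each selected quantised coefficient is nudged so its parity carries one authentication bit.

```python
def embed_parity(coeffs, bits):
    """Toy fragile watermark: force the parity of each quantised DCT
    coefficient to equal the corresponding authentication bit."""
    out = list(coeffs)
    for i, b in enumerate(bits):
        if (out[i] & 1) != b:
            out[i] += 1 if out[i] >= 0 else -1   # minimal-distortion nudge
    return out

def verify_parity(coeffs, bits):
    """Authentication check: any tampered coefficient breaks the sequence."""
    return all((coeffs[i] & 1) == b for i, b in enumerate(bits))
```

As in the paper's setting, the mark lives in the quantised domain, so it survives JPEG entropy coding but is destroyed by local tampering.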

  16. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

Data hiding is a technique that embeds information into digital cover data. This technique has concentrated on the uncompressed spatial domain, and it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy was used to search for the optimal solution of the bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtained both higher hiding capacity and hiding efficiency than the other four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieved as low a bit rate as the original BTC algorithm.
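For readers unfamiliar with BTC, the sketch below shows the classic encoder (two reconstruction levels plus a bitmap per block) and a toy LSB substitution that writes three secret bits into a mean value, as the scheme above does. The dynamic-programming search for the optimal bijective mapping is omitted; this is only the surrounding machinery.

```python
def btc_encode(block):
    """Classic BTC: threshold pixels at the block mean, keep a bitmap plus
    the means of the 'high' and 'low' pixel groups as reconstruction levels."""
    mean = sum(block) / len(block)
    bitmap = [1 if p >= mean else 0 for p in block]
    hi = [p for p, b in zip(block, bitmap) if b]
    lo = [p for p, b in zip(block, bitmap) if not b]
    high = round(sum(hi) / len(hi)) if hi else 0
    low = round(sum(lo) / len(lo)) if lo else 0
    return low, high, bitmap

def embed_bits(mean_value, bits):
    """Toy LSB substitution: overwrite the 3 lowest bits of a mean value
    with 3 secret bits (the paper remaps values optimally instead)."""
    return (mean_value & ~0b111) | bits
```

Embedding in the two mean values of every block is what gives the scheme its capacity while leaving the bitmap, and hence the bit rate, unchanged.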

  17. Forensic steganalysis: determining the stego key in spatial domain steganography

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Soukal, David; Holotyak, Taras

    2005-03-01

    This paper is an extension of our work on stego key search for JPEG images published at EI SPIE in 2004. We provide a more general theoretical description of the methodology, apply our approach to the spatial domain, and add a method that determines the stego key from multiple images. We show that in the spatial domain the stego key search can be made significantly more efficient by working with the noise component of the image obtained using a denoising filter. The technique is tested on the LSB embedding paradigm and on a special case of embedding by noise adding (the +/-1 embedding). The stego key search can be performed for a wide class of steganographic techniques even for sizes of secret message well below those detectable using known methods. The proposed strategy may prove useful to forensic analysts and law enforcement.
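The notion of a "stego key" that the search targets can be made concrete with a toy LSB embedder whose key seeds the pseudo-random embedding path: extraction with the correct key revisits the same path and recovers the message bits, while a wrong key visits an unrelated one. This illustrates the setting only, not the paper's search algorithm or its denoising step.

```python
import random

def embed_lsb(pixels, bits, key):
    """Toy LSB embedding along a key-seeded pseudo-random path."""
    rng = random.Random(key)
    path = rng.sample(range(len(pixels)), len(bits))
    out = list(pixels)
    for pos, b in zip(path, bits):
        out[pos] = (out[pos] & ~1) | b
    return out

def extract_lsb(pixels, n_bits, key):
    """Re-derive the path from the candidate key and read the LSBs."""
    rng = random.Random(key)
    path = rng.sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in path]
```

A key search then scores each candidate key by how message-like (e.g., matching a known header) the extracted bit sequence is; the paper shows this scoring works far better on the image's noise component.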

  18. A study on multiresolution lossless video coding using inter/intra frame adaptive prediction

    NASA Astrophysics Data System (ADS)

    Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro

    2003-06-01

    Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
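JPEG-LS intra-frame prediction uses the median edge detector (MED), and the adaptation described above then chooses between such intra prediction and inter-frame prediction per wavelet domain. The MED predictor below follows the JPEG-LS definition; the mode selector is a simplified stand-in that picks whichever mode gives the smaller summed absolute residual, not the paper's statistic.

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector: a = left, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)   # horizontal or vertical edge above/left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction

def select_mode(intra_residuals, inter_residuals):
    """Simplified adaptation: choose the prediction mode whose residuals
    are smaller in summed magnitude for this wavelet domain."""
    if sum(map(abs, intra_residuals)) <= sum(map(abs, inter_residuals)):
        return "intra"
    return "inter"
```

Because both encoder and decoder can evaluate the same statistics on already-decoded data, the mode choice need not be transmitted, matching the paper's claim of no additional information.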

  19. Mobile healthcare information management utilizing Cloud Computing and Android OS.

    PubMed

    Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias

    2010-01-01

Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temple, Brian Allen; Armstrong, Jerawan Chudoung

This document is a mid-year report on a deliverable for the PYTHON Radiography Analysis Tool (PyRAT) for project LANL12-RS-107J in FY15. The deliverable, number 2 in the work package, is titled “Add the ability to read in more types of image file formats in PyRAT”. Currently, PyRAT can read in only uncompressed TIFF files. Expanding the file formats that PyRAT can read will make it easier to use in more situations. The file formats added include jpeg, jpg, png and formatted ASCII files.

  1. Phased development of a web-based PACS viewer

    NASA Astrophysics Data System (ADS)

    Gidron, Yoad; Shani, Uri; Shifrin, Mark

    2000-05-01

The Web browser is an excellent environment for the rapid development of an effective and inexpensive PACS viewer. In this paper we will share our experience in developing a browser-based viewer, from the inception and prototype stages to its current state of maturity. There are many operational advantages to a browser-based viewer, even when native viewers already exist in the system (with multiple and/or high resolution screens): (1) It can be used on existing personal workstations throughout the hospital. (2) It is easy to make the service available from physicians' homes. (3) The viewer is extremely portable and platform independent. There is a wide variety of means available for implementing the browser-based viewer. Each file sent to the client by the server can perform some end-user or client/server interaction. These means range from HTML (HyperText Markup Language) files, through JavaScript, to Java applets. Some data types may also invoke plug-in code in the client; although this would reduce the portability of the viewer, it would provide the needed efficiency in critical places. On the server side the range of means is also very rich: (1) A set of files: HTML, JavaScript, Java applets, etc. (2) Extensions of the server via cgi-bin programs, (3) Extensions of the server via servlets, (4) Any other helper application residing and working with the server to access the DICOM archive. The viewer architecture consists of two basic parts: The first part performs query and navigation through the DICOM archive image folders. The second part does the image access and display. While the first part deals with low data traffic, it involves many database transactions. The second part is simple as far as access transactions are concerned, but requires much more data traffic and display functions. Our web-based viewer has gone through three development stages characterized by the complexity of the means and tools employed on both client and server sides.

  2. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  3. The comparison between SVD-DCT and SVD-DWT digital image watermarking

    NASA Astrophysics Data System (ADS)

    Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas

    2018-03-01

With the internet, anyone can publish their creations as digital data simply and inexpensively, and the data are easy for everyone to access. However, a problem appears when someone else claims the creation as their property or modifies some part of it. This makes copyright protection necessary; one example is watermarking of digital images. Applying a watermarking technique to digital data, especially images, enables total invisibility when the watermark is inserted in a carrier image: the carrier image does not undergo any decrease in quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. A trade-off occurs between the invisibility and the robustness of image watermarking. In the embedding process, the image watermarking has good quality for scaling factors < 0.1. The quality of the watermarking at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low frequencies is robust to Gaussian blur, rescaling, and JPEG compression, while embedding in high frequencies is robust to Gaussian noise.
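A one-level Haar DWT and an additive low-frequency embedding with a small scaling factor illustrate the DWT side of the scheme. This is a pure-Python, 1-D sketch for brevity; the paper works on 2-D images and additionally applies SVD to the chosen sub-band, which is omitted here.

```python
def haar_1d(signal):
    """One level of the 1-D Haar DWT: pairwise averages (low-frequency
    approximation) and pairwise half-differences (high-frequency detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Exact reconstruction: each pair is (a + d, a - d)."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed(approx, watermark, alpha=0.05):
    """Additive embedding in the low-frequency band; alpha < 0.1 keeps the
    watermark invisible, matching the scaling-factor range reported above."""
    return [a + alpha * w for a, w in zip(approx, watermark)]
```

Embedding in `approx` (low frequencies) survives blurring and compression because those attacks mostly disturb the detail coefficients, which is the robustness trade-off the abstract describes.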

  4. Interconnecting smartphone, image analysis server, and case report forms for automatic skin lesion tracking in clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.

    2016-03-01

Today, subject's medical data in controlled clinical trials is captured digitally in electronic case report forms (eCRFs). However, eCRFs only insufficiently support integration of subject's image data, although medical imaging is looming large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system in clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images, which have been collected in the study so far, have been correctly identified and successfully integrated into the corresponding subject's eCRF. Using this system, manual steps for the study personnel are reduced, and, therefore, errors, latency, and costs are decreased. Our approach also increases data security and privacy.

  5. Rubble-Pile Minor Planet Sylvia and Her Twins

    NASA Astrophysics Data System (ADS)

    2005-08-01

    VLT NACO Instrument Helps Discover First Triple Asteroid One of the thousands of minor planets orbiting the Sun has been found to have its own mini planetary system. Astronomer Franck Marchis (University of California, Berkeley, USA) and his colleagues at the Observatoire de Paris (France) [1] have discovered the first triple asteroid system - two small asteroids orbiting a larger one known since 1866 as 87 Sylvia [2]. "Since double asteroids seem to be common, people have been looking for multiple asteroid systems for a long time," said Marchis. "I couldn't believe we found one." The discovery was made with Yepun, one of ESO's 8.2-m telescopes of the Very Large Telescope Array at Cerro Paranal (Chile), using the outstanding image sharpness provided by the adaptive optics NACO instrument. Via the observatory's proven "Service Observing Mode", Marchis and his colleagues were able to obtain sky images of many asteroids over a six-month period without actually having to travel to Chile. ESO PR Photo 25a/05 is a composite image showing the positions of Remus and Romulus around 87 Sylvia on 9 different nights as seen on NACO images. It clearly reveals the orbits of the two moonlets. The inset shows the potato shape of 87 Sylvia. The field of view is 2 arcsec. North is up and East is left. ESO PR Photo 25b/05 is an artist's rendering of the triple system: Romulus, Sylvia, and Remus. 
ESO PR Video Clip 03/05 is an artist's rendering of the triple asteroid system showing the large asteroid 87 Sylvia spinning at a rapid rate and surrounded by two smaller asteroids (Remus and Romulus) in orbit around it. This computer animation is also available in broadcast quality to the media (please contact Herbert Zodet). One of these asteroids was 87 Sylvia, which had been known to be double since 2001, from observations made by Mike Brown and Jean-Luc Margot with the Keck telescope. The astronomers used NACO to observe Sylvia on 27 occasions over a two-month period. On each of the images, the known small companion was seen, allowing Marchis and his colleagues to precisely compute its orbit. But on 12 of the images, the astronomers also found a closer and smaller companion: 87 Sylvia is thus not double but triple! Because 87 Sylvia was named after Rhea Sylvia, the mythical mother of the founders of Rome [3], Marchis proposed naming the twin moons after those founders: Romulus and Remus. The International Astronomical Union approved the names. Sylvia's moons are considerably smaller, orbiting in nearly circular orbits in the same plane and direction. The closer and newly discovered moonlet, orbiting about 710 km from Sylvia, is Remus, a body only 7 km across and circling Sylvia every 33 hours. The second, Romulus, orbits at about 1360 km in 87.6 hours and measures about 18 km across. The asteroid 87 Sylvia is one of the largest known in the asteroid main belt, and is located about 3.5 times further from the Sun than the Earth, between the orbits of Mars and Jupiter. The wealth of detail provided by the NACO images shows that 87 Sylvia is shaped like a lumpy potato, measuring 380 x 260 x 230 km (see ESO PR Photo 25a/05). It is spinning at a rapid rate, once every 5 hours and 11 minutes. 
The observations of the moonlets' orbits allow the astronomers to precisely calculate the mass and density of Sylvia. With a density only 20% higher than the density of water, it is likely composed of water ice and rubble from a primordial asteroid. "It could be up to 60 percent empty space," said co-discoverer Daniel Hestroffer (Observatoire de Paris, France). "It is most probably a "rubble-pile" asteroid", Marchis added. These asteroids are loose aggregations of rock, presumably the result of a collision. Two asteroids smacked into each other and got disrupted. The new rubble-pile asteroid formed later by accumulation of large fragments while the moonlets are probably debris left over from the collision that were captured by the newly formed asteroid and eventually settled into orbits around it. "Because of the way they form, we expect to see more multiple asteroid systems like this." Marchis and his colleagues will report their discovery in the August 11 issue of the journal Nature, simultaneously with an announcement that day at the Asteroid Comet Meteor conference in Armação dos Búzios, Rio de Janeiro state, Brazil.

  6. Distributed PACS using distributed file system with hierarchical meta data servers.

    PubMed

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the separate PACSs existing at individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into meta data and image data, which are stored individually. Because operations such as finding files or changing titles do not need to access the entire file, they can be performed at high speed. At the same time, because a distributed file system is used, access to image files is both fast and fault tolerant. A further strength of the proposed system is the simplicity of integrating several PACSs: an integrated system can be constructed by integrating only the meta data servers. The system also scales file access with the number and size of files. On the other hand, because the meta data server is centralized, it is the weak point of the system. To address this, hierarchical meta data servers are introduced, which increases both fault tolerance and the scalability of file access. To evaluate the proposed system, a prototype was implemented using Gfarm, and the file search times of Gfarm and NFS were compared.
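The meta/image split and the hierarchical fallback between meta data servers can be sketched as a toy in-memory model. All class and field names here are hypothetical; a real deployment would keep the bulk pixel data on a distributed file system such as Gfarm and store only references in the meta data servers:

```python
class MetaServer:
    """In-memory meta data server; unresolved lookups fall back to a parent server."""

    def __init__(self, parent=None):
        self.entries = {}   # study_id -> {"title": ..., "image_uri": ...}
        self.parent = parent

    def register(self, study_id, title, image_uri):
        # Only light-weight meta data lives here; the image itself stays on
        # the distributed file system and is referenced by URI.
        self.entries[study_id] = {"title": title, "image_uri": image_uri}

    def find(self, study_id):
        if study_id in self.entries:
            return self.entries[study_id]
        if self.parent is not None:      # hierarchical fallback
            return self.parent.find(study_id)
        return None

    def rename(self, study_id, new_title):
        # Title changes touch only meta data, never the image file itself,
        # which is why such operations can be fast.
        rec = self.find(study_id)
        if rec is not None:
            rec["title"] = new_title
```

A site-local server constructed with `MetaServer(parent=root)` resolves its own studies first and delegates misses upward, mirroring the hierarchy the abstract describes.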

  7. A Relationship Between Visual Complexity and Aesthetic Appraisal of Car Front Images: An Eye-Tracker Study.

    PubMed

    Chassy, Philippe; Lindell, Trym A E; Jones, Jessica A; Paramei, Galina V

    2015-01-01

    Image aesthetic pleasure (AP) is conjectured to be related to image visual complexity (VC). The aim of the present study was to investigate whether (a) the two image attributes, AP and VC, are reflected in eye-movement parameters; and (b) subjective measures of AP and VC are related. Participants (N=26) explored car front images (M=50) while their eye movements were recorded. Following image exposure (10 seconds), its VC and AP were rated. Fixation count was found to positively correlate with subjective VC and its objective proxy, JPEG compression size, suggesting that this eye-movement parameter can be considered an objective behavioral measure of VC. AP, in comparison, positively correlated with average dwell time. Subjective measures of AP and VC were also related, following an inverted U-shape function best fit by a quadratic equation. In addition, AP was found to be modulated by car prestige. Our findings reveal a close relationship between subjective and objective measures of complexity and aesthetic appraisal, which is interpreted within a prototype-based theory framework. © The Author(s) 2015.
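The idea of compression size as an objective complexity proxy can be sketched with standard-library tools. The study used JPEG file size; this illustration substitutes zlib's compressed byte length as a rough analogue, since both grow with image "busyness":

```python
import random
import zlib

def complexity_proxy(pixels: bytes) -> int:
    """Compressed byte length as a crude visual-complexity proxy.
    (The study used JPEG file size; zlib stands in as a stdlib analogue.)"""
    return len(zlib.compress(pixels, 9))

flat = bytes([128]) * 4096                                            # uniform gray patch
busy = bytes(random.Random(42).randrange(256) for _ in range(4096))   # noise patch
assert complexity_proxy(flat) < complexity_proxy(busy)
```

A visually simple (flat) patch compresses to a few dozen bytes, while a noisy patch barely compresses at all, which is exactly the monotonic relationship the proxy relies on.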

  8. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder for wireless capsule endoscopes was designed. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of the system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). The JPEG algorithm is adopted for image coding, and the compressed data are stored from the DSP to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the DSP and shrink the executable code. At the same time, a proper address is assigned to each memory region according to its speed, and the memory structure is optimized. In addition, the system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, which results in stable, high performance.
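The DCT at the heart of a JPEG coder can be sketched as an orthonormal 8-point DCT-II applied with two matrix multiplies. This is a floating-point illustration of the transform itself, not the fixed-point fast DCT used on the DSP:

```python
import numpy as np

N = 8
k = np.arange(N)
# Orthonormal DCT-II basis: C[k, n] = s(k) * cos(pi * (2n + 1) * k / (2N))
C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] *= 1 / np.sqrt(2)   # DC row scaling makes the matrix orthonormal
C *= np.sqrt(2 / N)

def dct2_block(block: np.ndarray) -> np.ndarray:
    """2-D DCT of an 8x8 block via two matrix multiplies (rows, then columns)."""
    return C @ block @ C.T

def idct2_block(coef: np.ndarray) -> np.ndarray:
    """Inverse 2-D DCT; C is orthonormal, so the inverse is the transpose."""
    return C.T @ coef @ C
```

Fast DCT algorithms of the kind the abstract mentions factor this matrix product into fewer multiplications, which is what makes real-time coding feasible on a DSP.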

  9. JHelioviewer. Time-dependent 3D visualisation of solar and heliospheric data

    NASA Astrophysics Data System (ADS)

    Müller, D.; Nicula, B.; Felix, S.; Verstringe, F.; Bourgoignie, B.; Csillaghy, A.; Berghmans, D.; Jiggens, P.; García-Ortiz, J. P.; Ireland, J.; Zahniy, S.; Fleck, B.

    2017-09-01

    Context. Solar observatories are providing the world-wide community with a wealth of data, covering wide time ranges (e.g. Solar and Heliospheric Observatory, SOHO), multiple viewpoints (Solar TErrestrial RElations Observatory, STEREO), and returning large amounts of data (Solar Dynamics Observatory, SDO). In particular, the large volume of SDO data presents challenges; the data are available only from a few repositories, and full-disk, full-cadence data for reasonable durations of scientific interest are difficult to download, due to their size and the download rates available to most users. From a scientist's perspective this poses three problems: accessing, browsing, and finding interesting data as efficiently as possible. Aims: To address these challenges, we have developed JHelioviewer, a visualisation tool for solar data based on the JPEG 2000 compression standard and part of the open source ESA/NASA Helioviewer Project. Since the first release of JHelioviewer in 2009, the scientific functionality of the software has been extended significantly, and the objective of this paper is to highlight these improvements. Methods: The JPEG 2000 standard offers useful new features that facilitate the dissemination and analysis of high-resolution image data and offers a solution to the challenge of efficiently browsing petabyte-scale image archives. The JHelioviewer software is open source, platform independent, and extendable via a plug-in architecture. Results: With JHelioviewer, users can visualise the Sun for any time period between September 1991 and today; they can perform basic image processing in real time, track features on the Sun, and interactively overlay magnetic field extrapolations. The software integrates solar event data and a timeline display. Once an interesting event has been identified, science quality data can be accessed for in-depth analysis. 
As a first step towards supporting science planning of the upcoming Solar Orbiter mission, JHelioviewer offers a virtual camera model that enables users to set the vantage point to the location of a spacecraft or celestial body at any given time.

  10. Multispectral Image Compression for Improvement of Colorimetric and Spectral Reproducibility by Nonlinear Spectral Transform

    NASA Astrophysics Data System (ADS)

    Yu, Shanshan; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2006-09-01

    The article proposes a multispectral image compression scheme using nonlinear spectral transform for better colorimetric and spectral reproducibility. In the method, we show the reduction of colorimetric error under a defined viewing illuminant and also that spectral accuracy can be improved simultaneously using a nonlinear spectral transform called Labplus, which takes into account the nonlinearity of human color vision. Moreover, we show that the addition of diagonal matrices to Labplus can further preserve the spectral accuracy and has a generalized effect of improving the colorimetric accuracy under other viewing illuminants than the defined one. Finally, we discuss the usage of the first-order Markov model to form the analysis vectors for the higher order channels in Labplus to reduce the computational complexity. We implement a multispectral image compression system that integrates Labplus with JPEG2000 for high colorimetric and spectral reproducibility. Experimental results for a 16-band multispectral image show the effectiveness of the proposed scheme.

  11. Comparison between a new computer program and the reference software for gray-scale median analysis of atherosclerotic carotid plaques.

    PubMed

    Casella, Ivan Benaduce; Fukushima, Rodrigo Bono; Marques, Anita Battistini de Azevedo; Cury, Marcus Vinícius Martins; Presti, Calógero

    2015-03-01

    To compare a new dedicated software program and Adobe Photoshop for gray-scale median (GSM) analysis of B-mode images of carotid plaques. A series of 42 carotid plaques generating ≥50% diameter stenosis was evaluated by a single observer. The best segment for visualization of the internal carotid artery plaque was identified on a single longitudinal view, and images were recorded in JPEG format. Plaque analysis was performed with both programs. After normalization of image intensity (blood = 0, adventitial layer = 190), histograms were obtained after manual delineation of the plaque. Results were compared with the nonparametric Wilcoxon signed rank test and Kendall tau-b correlation analysis. GSM ranged from 0 to 100 with Adobe Photoshop and from 0 to 96 with IMTPC, with a high degree of similarity between image pairs and a highly significant correlation (R = 0.94, p < .0001). The IMTPC software appears suitable for GSM analysis of carotid plaques. © 2014 Wiley Periodicals, Inc.
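The normalization and GSM computation can be sketched as follows. This is a minimal illustration assuming a simple per-pixel linear mapping (blood reference to 0, adventitia reference to 190), with hypothetical function names:

```python
from statistics import median

def normalize(pixels, blood, adventitia):
    """Linearly map gray levels so blood -> 0 and adventitia -> 190,
    clamping to the 0-255 display range."""
    scale = 190.0 / (adventitia - blood)
    return [min(255, max(0, round((p - blood) * scale))) for p in pixels]

def gray_scale_median(plaque_pixels, blood, adventitia):
    """GSM: median gray level of the delineated plaque after normalization."""
    return median(normalize(plaque_pixels, blood, adventitia))
```

The normalization step is what makes GSM values comparable across images acquired with different gain settings.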

  12. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    PubMed

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  13. 2-Step scalar deadzone quantization for bitplane image coding.

    PubMed

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves same coding performance as that of USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
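The USDQ baseline that 2SDQ refines can be sketched in a few lines. This shows plain uniform scalar deadzone quantization with midpoint reconstruction, not the two-step scheme itself:

```python
import math

def usdq_quantize(c: float, step: float) -> int:
    """Uniform scalar deadzone quantization: the zero bin is twice as wide
    as the others, so small coefficients collapse to index 0."""
    return int(math.copysign(math.floor(abs(c) / step), c))

def usdq_dequantize(q: int, step: float) -> float:
    """Midpoint reconstruction within the decoded interval."""
    if q == 0:
        return 0.0
    return math.copysign((abs(q) + 0.5) * step, q)
```

2SDQ replaces the single `step` with two step sizes chosen by coefficient density; the deadzone and midpoint-reconstruction structure above stays the same.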

  14. Practical steganalysis of digital images: state of the art

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav

    2002-04-01

    Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.
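The bit-replacement paradigm that the paper finds insecure can be sketched as plain LSB embedding; a toy illustration of the scheme that RS and dual-statistics steganalysis target:

```python
def lsb_embed(pixels, bits):
    """Replace the least significant bit of each leading pixel with a message bit."""
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego + list(pixels[len(bits):])

def lsb_extract(pixels, n):
    """Read back the first n message bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n]]
```

Each pixel changes by at most 1 gray level, which is visually invisible, but the pairwise histogram artifacts this leaves behind are exactly what the dual-statistics detectors exploit.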

  15. A Robust Image Watermarking in the Joint Time-Frequency Domain

    NASA Astrophysics Data System (ADS)

    Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın

    2010-12-01

    With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques are proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method that is based on spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET) calculated by the Gabor expansion to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial and spectral domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is presented. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on some commonly used test images under different signal processing attacks like additive noise, Wiener and Median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of the attacks.

  16. 'Lyell' Panorama inside Victoria Crater (False Color)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    During four months prior to the fourth anniversary of its landing on Mars, NASA's Mars Exploration Rover Opportunity examined rocks inside an alcove called 'Duck Bay' in the western portion of Victoria Crater. The main body of the crater appears in the upper right of this stereo panorama, with the far side of the crater lying about 800 meters (half a mile) away. Bracketing that part of the view are two promontories on the crater's rim at either side of Duck Bay. They are 'Cape Verde,' about 6 meters (20 feet) tall, on the left, and 'Cabo Frio,' about 15 meters (50 feet) tall, on the right. The rest of the image, other than sky and portions of the rover, is ground within Duck Bay.

    Opportunity's targets of study during the last quarter of 2007 were rock layers within a band exposed around the interior of the crater, about 6 meters (20 feet) from the rim. Bright rocks within the band are visible in the foreground of the panorama. The rover science team assigned informal names to three subdivisions of the band: 'Steno,' 'Smith,' and 'Lyell.'

    This view combines many images taken by Opportunity's panoramic camera (Pancam) from the 1,332nd through 1,379th Martian days, or sols, of the mission (Oct. 23 to Dec. 11, 2007). Images taken through Pancam filters centered on wavelengths of 753 nanometers, 535 nanometers and 432 nanometers were mixed to produce this view, which is presented in a false-color stretch to bring out subtle color differences in the scene. Some visible patterns in dark and light tones are the result of combining frames that were affected by dust on the front sapphire window of the rover's camera.

    Opportunity landed on Jan. 25, 2004, Universal Time, (Jan. 24, Pacific Time) inside a much smaller crater about 6 kilometers (4 miles) north of Victoria Crater, to begin a surface mission designed to last 3 months and drive about 600 meters (0.4 mile).

  17. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements, and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that produces the same transformed images as the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. These features make the LWT appropriate for space image compression, where high throughput, low memory size, low complexity, low power, and push-broom processing are important requirements.

  18. LocalMove: computing on-lattice fits for biopolymers

    PubMed Central

    Ponty, Y.; Istrate, R.; Porcelli, E.; Clote, P.

    2008-01-01

    Given an input Protein Data Bank (PDB) file for a protein or RNA molecule, LocalMove is a web server that determines an on-lattice representation of the input biomolecule. The web server implements a Markov chain Monte Carlo algorithm with simulated annealing to compute an approximate fit for either the coarse-grain model or the backbone model on either the cubic or the face-centered cubic lattice. LocalMove returns a PDB file as output, as well as a dynamic movie of 3D images of intermediate conformations during the computation. The LocalMove server is publicly available at http://bioinformatics.bc.edu/clotelab/localmove/. PMID:18556754
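At the core of such a Markov chain Monte Carlo fit is the Metropolis acceptance rule combined with a cooling schedule. The sketch below is generic simulated-annealing machinery, not LocalMove's actual code, and the names are hypothetical:

```python
import math
import random

def metropolis_accept(delta_e: float, temperature: float,
                      rng: random.Random) -> bool:
    """Accept a proposed lattice move: always if the fit improves
    (delta_e <= 0), otherwise with probability exp(-delta_e / T)."""
    if delta_e <= 0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)

def anneal_schedule(t0: float, cooling: float, steps: int):
    """Geometric cooling schedule commonly used in simulated annealing."""
    t = t0
    for _ in range(steps):
        yield t
        t *= cooling
```

As the temperature drops, uphill moves (worse fits to the off-lattice structure) are accepted less and less often, so the chain settles into a good on-lattice approximation.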

  19. Exhibits Recognition System for Combining Online Services and Offline Services

    NASA Astrophysics Data System (ADS)

    Ma, He; Liu, Jianbo; Zhang, Yuan; Wu, Xiaoyu

    2017-10-01

    In order to achieve more convenient and accurate digital museum navigation, we have developed a real-time, online-to-offline museum exhibit recognition system using an image recognition method based on deep learning. In this paper, the client and server of the system are separated and connected through HTTP. First, using the client app on an Android mobile phone, the user can take pictures and upload them to the server. Second, features of the picture are extracted using the deep learning network on the server. With the help of these features, the pictures the user uploaded are classified with a well-trained SVM. Finally, the classification results are sent to the client, and the detailed exhibit introduction corresponding to the classification result is shown in the client app. Experimental results demonstrate that the recognition accuracy is close to 100% and the computing time from image upload to display of the exhibit information is less than 1 s. By means of the exhibit image recognition algorithm, our system can bring detailed online exhibition information to the user in the offline exhibition hall, achieving better digital navigation.

  20. Tropical Sectors - NOAA GOES Geostationary Satellite Server

    Science.gov Websites

    Hurricane IR, VIS, and water vapor image loops (Pacific, full size). These images are not considered "operational".

  1. Design and implementation of a cloud based lithography illumination pupil processing application

    NASA Astrophysics Data System (ADS)

    Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie

    2017-02-01

    Pupil parameters are important for evaluating the quality of a lithography illumination system. In this paper, a cloud based, full-featured pupil processing application is implemented. A web browser is used for the UI (User Interface), the WebSocket protocol and JSON format are used for communication between the client and the server, and the computing part is implemented on the server side, where the application integrates a variety of high-quality professional libraries, such as the image processing libraries libvips and ImageMagick and the automatic reporting system LaTeX. The cloud based framework takes advantage of the server's superior computing power and rich software collection, and the program can run anywhere there is a modern browser thanks to its web UI design. Compared to the traditional software operation model - purchased, licensed, shipped, downloaded, installed, maintained, and upgraded - the new cloud based approach, which requires no installation and is easy to use and maintain, opens up a new way of working. Cloud based applications may well be the future of software development.

  2. CIS3/398: Implementation of a Web-Based Electronic Patient Record for Transplant Recipients

    PubMed Central

    Fritsche, L; Lindemann, G; Schroeter, K; Schlaefer, A; Neumayer, H-H

    1999-01-01

    Introduction While the "Electronic patient record" (EPR) is a frequently quoted term in many areas of healthcare, only a few working EPR-systems are available so far. To justify their use, EPRs must be able to store and display all kinds of medical information in a reliable, secure, time-saving, user-friendly way at an affordable price. Fields with patients who are attended to by a large number of medical specialists over a prolonged period of time are best suited to demonstrate the potential benefits of an EPR. The aim of our project was to investigate the feasibility of an EPR based solely on "off-the-shelf" software and Internet technology in the field of organ transplantation. Methods The EPR-system consists of three main elements: Data-storage facilities, a Web-server and a user-interface. Data are stored either in a relational database (Sybase Adaptive 11.5, Sybase Inc., CA) or, in the case of pictures (JPEG) and files in application formats (e. g. Word-Documents), on a Windows NT 4.0 Server (Microsoft Corp., WA). The entire communication of all data is handled by a Web-server (IIS 4.0, Microsoft) with an Active Server Pages extension. The database is accessed by ActiveX Data Objects via the ODBC-interface. The only software required on the user's computer is Internet Explorer 4.01 (Microsoft); during the first use of the EPR, the ActiveX HTML Layout Control is automatically added. The user can access the EPR via Local or Wide Area Network or by dial-up connection. If the EPR is accessed from outside the firewall, all communication is encrypted (SSL 3.0, Netscape Comm. Corp., CA). The speed of the EPR-system was tested with 50 repeated measurements of the duration of two key functions: 1) display of all lab results for a given day and patient and 2) automatic composition of a letter containing diagnoses, medication, notes and lab results. 
For the test, a 233 MHz Pentium II processor with a 10 Mbit/s Ethernet connection (ping time below 10 ms) over 2 hubs to the server (400 MHz Pentium II, 256 MB RAM) was used. Results So far the EPR-system has been running for eight consecutive months and contains complete records of 673 transplant recipients with an average follow-up of 9.9 (SD: 4.9) years and a total of 1.1 million lab values. Instruction to enable new users to perform basic operations took less than two hours in all cases. The average duration of laboratory access was 0.9 (SD: 0.5) seconds; the automatic composition of a letter took 6.1 (SD: 2.4) seconds. Apart from the database and Windows NT, all other components are available for free. The development of the EPR-system required less than two person-years. Conclusion Implementation of an Electronic patient record that meets the requirements of comprehensiveness, reliability, security, speed, user-friendliness and affordability using a combination of "off-the-shelf" software products can be feasible, if the current state-of-the-art internet technology is applied.

  3. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.

    PubMed

    Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei

    2017-07-01

    Discriminative model learning for image denoising has recently attracted considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model not only exhibits high effectiveness in several general image denoising tasks, but can also be efficiently implemented by benefiting from GPU computing.

  4. A complete passive blind image copy-move forensics scheme based on compound statistics features.

    PubMed

    Peng, Fei; Nie, Yun-ying; Long, Min

    2011-10-10

    Since most sensor-pattern-noise-based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines their application. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. First, a color image is converted to grayscale, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding-window operations divide the images into sub-blocks. Finally, tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling, and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
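
    Three of the per-block features named above can be sketched directly. The definitions below are the common textbook forms (the paper's exact formulations may differ, and the "average energy gradient" here is one plausible reading): variance of the pixel values, Shannon entropy of the gray-level histogram, and the mean squared difference between adjacent pixels.

```python
import math

# Hedged sketch of three block features from the abstract, computed on a
# plain grayscale block given as a list of rows of 0-255 integers.

def variance(block):
    pixels = [p for row in block for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def entropy(block):
    """Shannon information entropy of the gray-level histogram."""
    pixels = [p for row in block for p in row]
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def average_energy_gradient(block):
    """Mean squared difference between horizontally and vertically
    adjacent pixels -- one common reading of 'average energy gradient'."""
    h, w = len(block), len(block[0])
    total, count = 0.0, 0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                total += (block[i][j + 1] - block[i][j]) ** 2
                count += 1
            if i + 1 < h:
                total += (block[i + 1][j] - block[i][j]) ** 2
                count += 1
    return total / count

flat = [[128] * 4 for _ in range(4)]
checker = [[0, 255, 0, 255], [255, 0, 255, 0],
           [0, 255, 0, 255], [255, 0, 255, 0]]
```

    A flat block scores zero on all three features, while a two-level checkerboard has maximal two-symbol entropy (1 bit); comparing such feature vectors between sub-blocks and the whole image is what drives the detection step.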

  5. Distributed data collection for a database of radiological image interpretations

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  6. A network-based training environment: a medical image processing paradigm.

    PubMed

    Costaridou, L; Panayiotakis, G; Sakellaropoulos, P; Cavouras, D; Dimopoulos, J

    1998-01-01

    The capability of interactive multimedia and Internet technologies is investigated with respect to the implementation of a distance-learning environment. The system is built according to a client-server architecture on the Internet infrastructure, composed of server nodes conceptually modelled as WWW sites. Sites are implemented by customization of available components. The environment integrates network-delivered interactive multimedia courses, network-based tutoring, SIG support, information databases of professional interest, as well as course and tutoring management. This capability has been demonstrated by means of an implemented system, validated with digital image processing content, specifically image enhancement. Image enhancement methods are described theoretically and applied to mammograms. Emphasis is given to the interactive presentation of the effects of algorithm parameters on images. End-user access depends on available bandwidth, so high-speed access can be achieved via LAN or local ISDN connections. Network-based training offers new means of improved access to and sharing of learning resources and expertise, and is a promising supplement to conventional training.

  7. Toyz: A framework for scientific analysis of large datasets and astronomical images

    NASA Astrophysics Data System (ADS)

    Moolekamp, F.; Mamajek, E.

    2015-11-01

    As the size of images and data products derived from astronomical data continues to increase, new tools are needed to visualize and interact with that data in a meaningful way. Motivated by our own astronomical images taken with the Dark Energy Camera (DECam), we present Toyz, an open-source Python package for viewing and analyzing images and data stored on a remote server or cluster. Users connect to the Toyz web application via a web browser, making it a convenient tool for students to visualize and interact with astronomical data without having to install any software on their local machines. In addition, it provides researchers with an easy-to-use tool for browsing files on a server and quickly viewing very large images (>2 GB) taken with DECam and other large-FOV cameras, and it lets them create their own visualization tools as extensions to the default Toyz framework.

  8. A simple tool for neuroimaging data sharing

    PubMed Central

    Haselgrove, Christian; Poline, Jean-Baptiste; Kennedy, David N.

    2014-01-01

    Data sharing is becoming increasingly common, but despite encouragement and facilitation by funding agencies, journals, and some research efforts, most neuroimaging data acquired today are still not shared, owing to persistent political, financial, social, and technical barriers. In particular, few technical solutions exist for researchers who are not part of larger efforts with dedicated sharing infrastructures, and social barriers such as the time commitment required to share can keep data from becoming publicly available. We present a system for sharing neuroimaging data, designed to be simple to use and to provide benefit to the data provider. The system consists of a server at the International Neuroinformatics Coordinating Facility (INCF) and user tools for uploading data to the server. The primary design principle for the user tools is ease of use: the user identifies a directory containing Digital Imaging and Communications in Medicine (DICOM) data, provides their INCF Portal authentication, and provides identifiers for the subject and imaging session. The user tool anonymizes the data and sends it to the server. The server then runs quality control routines on the data, and the data and the quality control reports are made public. The user retains control of the data and may change the sharing policy as needed. The result is that in a few minutes of the user's time, DICOM data can be anonymized and made publicly available, and an initial quality control assessment can be performed. The system is currently functional, and user tools and access to the public image database are available at http://xnat.incf.org/. PMID:24904398
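
    The anonymize-then-upload step can be illustrated with a plain dict standing in for a DICOM dataset. This is a hypothetical sketch only: the tag list and blanking policy below are assumptions, not the INCF tool's actual behavior, and a real implementation would parse DICOM files with a library such as pydicom.

```python
# Illustrative sketch of the anonymization step: blank identifying tags,
# then attach the user-supplied subject/session identifiers. A dict
# stands in for a parsed DICOM dataset.

IDENTIFYING_TAGS = {"PatientName", "PatientID", "PatientBirthDate",
                    "InstitutionName", "ReferringPhysicianName"}

def anonymize(dataset, subject_id, session_id):
    """Return a copy with identifying tags blanked and the
    user-supplied identifiers attached."""
    out = dict(dataset)
    for tag in IDENTIFYING_TAGS:
        if tag in out:
            out[tag] = ""  # blank rather than delete, so tag structure survives
    out["PatientID"] = subject_id
    out["StudyID"] = session_id
    return out

scan = {
    "PatientName": "Doe^Jane",
    "PatientID": "HOSP-00123",
    "Modality": "MR",
    "Rows": 256,
}
shared = anonymize(scan, subject_id="sub-01", session_id="ses-01")
```

    Working on a copy means the user's original data is untouched, matching the design principle that the provider retains control of what is shared.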

  9. Leveraging Metadata to Create Interactive Images... Today!

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard to create a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates and make it accessible in a user-friendly form on the website, but also embed the same metadata within the image files themselves. Thus, images downloaded from the site will carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
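
    AVM tags travel inside an XMP packet embedded in the image file, which is why a downloaded JPEG "carries its descriptive information with it". As a hedged sketch, the packet can be recovered from the raw bytes by scanning for the `x:xmpmeta` element; a robust reader would instead walk the JPEG APP1 segments properly, and the byte stream below is fabricated purely for illustration.

```python
# Crude XMP extraction: scan raw JPEG bytes for the xmpmeta element.
XMP_OPEN, XMP_CLOSE = b"<x:xmpmeta", b"</x:xmpmeta>"

def extract_xmp(data: bytes):
    """Return the embedded XMP packet as a string, or None if absent."""
    start = data.find(XMP_OPEN)
    if start == -1:
        return None
    end = data.find(XMP_CLOSE, start)
    if end == -1:
        return None
    return data[start:end + len(XMP_CLOSE)].decode("utf-8", "replace")

# Fabricated minimal JPEG-like byte stream (not a valid image).
fake_jpeg = (b"\xff\xd8\xff\xe1\x00\x40http://ns.adobe.com/xap/1.0/\x00"
             b'<x:xmpmeta xmlns:x="adobe:ns:meta/">'
             b"<avm:Spatial.Equinox>J2000</avm:Spatial.Equinox>"
             b"</x:xmpmeta>\xff\xd9")
packet = extract_xmp(fake_jpeg)
```

    Once the packet is out, any XML parser can read the AVM fields (color assignments, sky coordinates, and so on), which is how applications like WorldWide Telescope can place a tagged image at its correct sky position.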

  10. Alaskan Auroral All-Sky Images on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Stenbaek-Nielsen, H. C.

    1997-01-01

    In response to a 1995 NASA SPDS announcement of support for preservation and distribution of important data sets online, the Geophysical Institute, University of Alaska Fairbanks, proposed to provide World Wide Web access to the Poker Flat auroral all-sky camera images in real time. The Poker auroral all-sky camera is located in the Davis Science Operation Center at Poker Flat Rocket Range, about 30 miles north-east of Fairbanks, Alaska, and is connected through a microwave link with the Geophysical Institute, where we maintain the database linked to the Web. To protect the low-light-level all-sky TV camera from damage due to excessive light, we operate only during the winter season when the moon is down. The camera and data acquisition are now fully computer controlled. Digital images are transmitted each minute to the Web-linked database, where the data are available in a number of different presentations: (1) individual JPEG-compressed images (1-minute resolution); (2) a time-lapse MPEG movie of the stored images; and (3) a meridional plot of the entire night's activity.

  11. Adaptive proxy map server for efficient vector spatial data rendering

    NASA Astrophysics Data System (ADS)

    Sayar, Ahmet

    2013-01-01

    The rapid transmission of vector map data over the Internet is becoming a bottleneck of spatial data delivery and visualization in web-based environments because of increasing data volumes and limited network bandwidth. In order to improve both the transmission and rendering performance of vector spatial data over the Internet, we propose a proxy map server enabling parallel vector data fetching as well as caching to improve the performance of web-based map servers in a dynamic environment. The proxy map server is placed seamlessly anywhere between the client and the final services, intercepting users' requests. It employs an efficient parallelization technique based on spatial proximity and data density when distributed replicas exist for the same spatial data. The effectiveness of the proposed technique is demonstrated at the end of the article by an application that creates map images enriched with earthquake seismic data records.

  12. A data grid for imaging-based clinical trials

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.

    2007-03-01

    Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis together with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: (1) a well-defined, rigorous clinical trial protocol; (2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and (3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administer and quickly distribute information to participating radiologists and clinicians worldwide. A Data Grid can satisfy these requirements of imaging-based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate its performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.

  13. Design and implementation of GRID-based PACS in a hospital with multiple imaging departments

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Jin, Jin; Sun, Jianyong; Zhang, Jianguo

    2008-03-01

    In an enterprise healthcare environment there are usually multiple clinical departments providing imaging-enabled healthcare services, such as radiology, oncology, pathology, and cardiology. The picture archiving and communication system (PACS) is now required not only to support radiology-based image display and workflow and data-flow management, but also to offer more specialized image processing and management tools for other departments providing imaging-guided diagnosis and therapy, and there is an urgent demand to integrate multiple PACSs to provide patient-oriented imaging services for enterprise collaborative healthcare. In this paper, we give the design method and implementation strategy for a grid-based PACS (Grid-PACS) for a hospital with multiple imaging departments or centers. The Grid-PACS functions as a middleware layer between the traditional PACS archiving servers and the workstations or image-viewing clients, and provides DICOM image communication and WADO services to end users. Images can be stored in multiple distributed archiving servers but managed centrally. The grid-based PACS has automatic image backup and disaster recovery services and can provide the best image retrieval path to requesters based on optimization algorithms. The designed grid-based PACS has been implemented in Shanghai Huadong Hospital and has been running smoothly for two years.

  14. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
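
    The block-matching step in the second paper's motion-compensation work reduces to a search: for a block in the current frame, find the displacement into the previous frame that minimizes an error measure. A minimal serial sketch using the sum of absolute differences (SAD) follows; the paper's contribution is distributing exactly this search across a parallel architecture, and SAD is one common choice of error, assumed here.

```python
# Exhaustive block matching over a (2*search+1)^2 window using SAD.

def sad(frame, top, left, block, size):
    """Sum of absolute differences between `block` and the same-sized
    region of `frame` whose top-left corner is (top, left)."""
    return sum(abs(frame[top + i][left + j] - block[i][j])
               for i in range(size) for j in range(size))

def best_displacement(prev, block, size, search=2):
    """Return (dy, dx) of the minimum-SAD match relative to the
    search-window center at (search, search)."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            top, left = search + dy, search + dx
            cost = sad(prev, top, left, block, size)
            if best is None or cost < best[0]:
                best = (cost, (dy, dx))
    return best[1]

# Previous frame: a bright 2x2 patch at rows 1-2, columns 3-4.
prev = [[0] * 8 for _ in range(8)]
prev[1][3] = prev[1][4] = prev[2][3] = prev[2][4] = 9
# The current block equals that patch; relative to the search center
# (2, 2), the true displacement is one row up and one column right.
block = [[9, 9], [9, 9]]
motion = best_displacement(prev, block, size=2)
```

    Each candidate displacement is independent of the others, which is what makes the exhaustive search embarrassingly parallel and a natural fit for the simple parallel architecture the paper targets.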

  15. Development and evaluation of vision rehabilitation devices.

    PubMed

    Luo, Gang; Peli, Eli

    2011-01-01

    We have developed a range of vision rehabilitation devices and techniques for people with impaired vision due to either central vision loss or severely restricted peripheral visual field. We have conducted evaluation studies with patients to test the utility of these techniques, in an effort to document their advantages as well as their limitations. Here we describe our work on a visual field expander based on a head-mounted display (HMD) for tunnel vision, a vision enhancement device for central vision loss, and a frequency-domain JPEG/MPEG-based image enhancement technique. All the evaluation studies included visual search paradigms suitable for controlled indoor experiments.
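
    The frequency-domain idea behind JPEG/MPEG-based enhancement is to amplify the transform coefficients that carry edges and detail. The sketch below uses a 1-D orthonormal DCT as a stand-in for JPEG's 8x8 block DCT, and the "boost all AC coefficients by a fixed gain" schedule is an assumption for illustration, not the authors' actual enhancement function.

```python
import math

def dct(x):
    """Orthonormal DCT-II."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct(X):
    """Inverse of the orthonormal DCT-II (i.e. a DCT-III)."""
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        s += sum(math.sqrt(2 / N) * X[k] *
                 math.cos(math.pi * (n + 0.5) * k / N) for k in range(1, N))
        out.append(s)
    return out

def enhance(signal, gain=1.5):
    """Leave the DC term alone; boost every AC coefficient by `gain`."""
    X = dct(signal)
    X = [X[0]] + [gain * c for c in X[1:]]
    return idct(X)

edge = [10.0, 10.0, 10.0, 40.0, 40.0, 40.0, 40.0, 40.0]
sharper = enhance(edge)
```

    Because the DC coefficient is preserved, the mean brightness is unchanged while the deviation around it is amplified, so the step in `edge` comes out with higher contrast.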

  16. Restoration of Static JPEG Images and RGB Video Frames by Means of Nonlinear Filtering in Conditions of Gaussian and Non-Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Sokolov, R. I.; Abdullin, R. R.

    2017-11-01

    The use of nonlinear Markov process filtering makes it possible to restore both video stream frames and static photos at the preprocessing stage. The present paper reports the results of research comparing the quality of these two types of image filtering by means of a special algorithm under Gaussian and non-Gaussian noise. Examples of filter operation at different values of signal-to-noise ratio are presented. A comparative analysis has been performed, and the kind of noise that is filtered best has been identified. It is shown that the quality of the developed algorithm is much better than that of an adaptive algorithm for RGB signal filtering given the same a priori information about the signal. An advantage over the median filter is also observed when filtering both fluctuation and pulse noise.
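
    The baseline mentioned above, the median filter, is the classic remedy for pulse (impulse) noise: a sliding-window median discards isolated outliers entirely instead of averaging them in. A minimal 1-D sketch, with edge clamping at the signal boundaries:

```python
import statistics

def median_filter(signal, radius=1):
    """Sliding-window median; the window is clamped at the edges."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(statistics.median(signal[lo:hi]))
    return out

# A flat signal hit by a single impulse ("pulse noise").
corrupted = [10, 10, 10, 255, 10, 10, 10]
restored = median_filter(corrupted)
```

    The single 255 spike is removed completely because it is never the median of any window, which is the behavior a nonlinear filter must at least match to claim an advantage on pulse noise.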

  17. Pre-Clinical and Clinical Evaluation of High Resolution, Mobile Gamma Camera and Positron Imaging Devices

    DTIC Science & Technology

    2007-11-01

    accuracy. FPGA ADC data acquisition is controlled by distributed Java-based software. A Java-based server application sits on each of the acquisition...JNI (Java Native Interface) is used to allow Java indirect control of the USB driver. Fig. 5. Photograph of mobile electronics rack...supplies with the monitor and keyboard. The server application on each of these machines is controlled by a remote client Java-based application

  18. Video movie making using remote procedure calls and 4BSD Unix sockets on Unix, UNICOS, and MS-DOS systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, D.W.; Johnston, W.E.; Hall, D.E.

    1990-03-01

    We describe the use of the Sun Remote Procedure Call and Unix socket interprocess communication mechanisms to provide the network transport for a distributed, client-server based, image handling system. Clients run under Unix or UNICOS and servers run under Unix or MS-DOS. The use of remote procedure calls across local or wide-area networks to make video movies is addressed.

  19. [A web-based biomedical image mosaicing system].

    PubMed

    Zhang, Meng; Yan, Zhuang-zhi; Pan, Zhi-jun; Shao, Shi-jie

    2006-11-01

    This paper describes a web service for biomedical image mosaicing. A web site based on CGI (Common Gateway Interface) has been implemented. The system follows the Browser/Server model and has been tested on the World Wide Web. Finally, implementation examples and experimental results are provided.

  20. Web-based segmentation and display of three-dimensional radiologic image data.

    PubMed

    Silverstein, J; Rubenstein, J; Millman, A; Panko, W

    1998-01-01

    In many clinical circumstances, viewing sequential radiological image data as three-dimensional models is proving beneficial. However, designing customized computer-generated radiological models is beyond the scope of most physicians, due to specialized hardware and software requirements. We have created a simple method for Internet users to remotely construct and locally display three-dimensional radiological models using only a standard web browser. Rapid model construction is achieved by distributing the hardware intensive steps to a remote server. Once created, the model is automatically displayed on the requesting browser and is accessible to multiple geographically distributed users. Implementation of our server software on large scale systems could be of great service to the worldwide medical community.

  1. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    PubMed

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

    This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients on an Intranet network and transformed via the eXtensible Stylesheet Language (XSL) to be visualized uniformly in commercial browsers. The core server software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and well suited to crafting a dynamic Web environment.

  2. EarthServer: a Summary of Achievements in Technology, Services, and Standards

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2015-04-01

    Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly consist of coverage data, defined by ISO and OGC as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as computing the Fourier transform of satellite images. As network bandwidth limits prohibit the transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The transatlantic EarthServer initiative, running from 2011 through 2014, united 11 partners to establish Big Earth Data Analytics. A key ingredient has been flexibility for users to ask whatever they want, not impeded and complicated by system internals. The EarthServer answer is to use high-level, standards-based query languages, which unify data and metadata search in a simple yet powerful way. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform comprises rasdaman, the pioneering and leading Array DBMS built for any-size multi-dimensional raster data, extended with support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); and the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. 
Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS), which defines a high-level coverage query language. Reviewers have attested that "With no doubt the project has been shaping the Big Earth Data landscape through the standardization activities within OGC, ISO and beyond". We present the project approach, its outcomes and impact on standardization and Big Data technology, and vistas for the future.

  3. EzMol: A Web Server Wizard for the Rapid Visualization and Image Production of Protein and Nucleic Acid Structures.

    PubMed

    Reynolds, Christopher R; Islam, Suhail A; Sternberg, Michael J E

    2018-01-31

    EzMol is a molecular visualization Web server in the form of a software wizard, located at http://www.sbg.bio.ic.ac.uk/ezmol/. It is designed for easy and rapid image manipulation and display of protein molecules, and is intended for users who need to quickly produce high-resolution images of protein molecules but do not have the time or inclination to use a software molecular visualization system. EzMol allows the upload of molecular structure files in PDB format to generate a Web page including a representation of the structure that the user can manipulate. EzMol provides intuitive options for chain display, adjusting the color/transparency of residues, side chains and protein surfaces, and for adding labels to residues. The final adjusted protein image can then be downloaded as a high-resolution image. There are a range of applications for rapid protein display, including the illustration of specific areas of a protein structure and the rapid prototyping of images. Copyright © 2018. Published by Elsevier Ltd.

  4. SINFONI Opens with Upbeat Chords

    NASA Astrophysics Data System (ADS)

    2004-08-01

    First Observations with New VLT Instrument Hold Great Promise [1] Summary The European Southern Observatory, the Max-Planck-Institute for Extraterrestrial Physics (Garching, Germany) and the Nederlandse Onderzoekschool Voor Astronomie (Leiden, The Netherlands), and with them all European astronomers, are celebrating the successful accomplishment of "First Light" for the Adaptive Optics (AO) assisted SINFONI ("Spectrograph for INtegral Field Observation in the Near-Infrared") instrument, just installed on ESO's Very Large Telescope at the Paranal Observatory (Chile). This is the first facility of its type ever installed on an 8-m class telescope, now providing exceptional observing capabilities for the imaging and spectroscopic studies of very complex sky regions, e.g. stellar nurseries and black-hole environments, also in distant galaxies. Following smooth assembly at the 8.2-m VLT Yepun telescope of SINFONI's two parts, the Adaptive Optics Module that feeds the SPIFFI spectrograph, the "First Light" spectrum of a bright star was recorded with SINFONI in the early evening of July 9, 2004. The following thirteen nights served to evaluate the performance of the new instrument and to explore its capabilities by test observations on a selection of exciting astronomical targets. They included the Galactic Centre region, already imaged with the NACO AO-instrument on the same telescope. Unprecedented high-angular resolution spectra and images were obtained of stars in the immediate vicinity of the massive central black hole. During the night of July 15 - 16, SINFONI recorded a flare from this black hole in great detail. Other interesting objects observed during this period include galaxies with active nuclei (e.g., the Circinus Galaxy and NGC 7469), a merging galaxy system (NGC 6240) and a young starforming galaxy pair at redshift 2 (BX 404/405). 
These first results were greeted with enthusiasm by the team of astronomers and engineers [2] from the consortium of German and Dutch Institutes and ESO who have worked on the development of SINFONI for nearly 7 years. The work on SINFONI at Paranal included successful commissioning in June 2004 of the Adaptive Optics Module built by ESO, during which exceptional test images were obtained of the main-belt asteroid (22) Kalliope and its moon. Moreover, the ability was demonstrated to correct the atmospheric turbulence by means of even very faint "guide" objects (magnitude 17.5), crucial for the observation of astronomical objects in many parts of the sky. SPIFFI - SPectrometer for Infrared Faint Field Imaging - was developed at the Max Planck Institute for Extraterrestrische Physik (MPE) in Garching (Germany), in a collaboration with the Nederlandse Onderzoekschool Voor Astronomie (NOVA) in Leiden and the Netherlands Foundation for Research in Astronomy (ASTRON), and ESO. PR Photo 24a/04: SINFONI Adaptive Optics Module at VLT Yepun (June 2004) PR Photo 24b/04: SINFONI at VLT Yepun, now fully assembled (July 2004) PR Photo 24c/04: "First Light" image from the SINFONI Adaptive Optics Module PR Photo 24d/04: AO-corrected Image of a 17.5-magnitude Star PR Photo 24e/04: SINFONI undergoing Balancing and Flexure Tests at VLT Yepun PR Photo 24f/04: SINFONI "First Light" Spectrum of HD 130163 PR Photo 24g/04: Members of the SINFONI Adaptive Optics Module Commissioning Team PR Photo 24h/04: Members of the SPIFFI Commissioning Team PR Photo 24i/04: The Principle of Integral Field Spectroscopy (IFS) PR Photo 24j/04: The Orbital Motion of Linus around (22) Kalliope PR Photo 24k/04: SINFONI Observations of the Galactic Centre Region PR Photo 24l/04: SINFONI Observations of the Circinus Galaxy PR Photo 24m/04: SINFONI Observations of the AGN Galaxy NGC 7469 PR Photo 24n/04: SINFONI Observations of NGC 6240 PR Photo 24o/04: SINFONI Observations of the Young Starforming Galaxies BX 
404/405 PR Video Clip 07/04: The Orbital Motion of Linus around (22) Kalliope SINFONI: A powerful and complex instrument ESO PR Photo 24a/04 ESO PR Photo 24a/04 The SINFONI Adaptive Optics Module Commissioning Setup [Preview - JPEG: 427 x 400 pix - 230k] [Normal - JPEG: 854 x 800 pix - 551k] ESO PR Photo 24b/04 ESO PR Photo 24b/04 SINFONI at the VLT Yepun Cassegrain Focus [Preview - JPEG: 414 x 400 pix - 222k] [Normal - JPEG: 827 x 800 pix - 574k] Captions: ESO PR Photo 24a/04 shows the SINFONI Adaptive Optics Module, installed at the 8.2-m VLT YEPUN telescope during the first tests in June 2004. At this time, SPIFFI was not yet installed. The blue ring is the Adaptive Optics Module. The yellow parts, with a weight of 800 kg, simulate SPIFFI. The IR Test Imager is located inside the yellow ring. On ESO PR Photo 24b/04, the Near-Infrared Spectrograph SPIFFI in its cryogenic aluminium cylinder has now been attached. A new and very powerful astronomical instrument, a world-leader in its field, has been installed on the Very Large Telescope at the Paranal Observatory (Chile), cf. PR Photos 24a-b/04. Known as SINFONI ("Spectrograph for INtegral Field Observation in the Near-Infrared"), it was mounted in two steps at the Cassegrain focus of the 8.2-m VLT YEPUN telescope. First Light of the completed instrument was achieved on July 9, 2004 and various test observations during the subsequent commissioning phase were carried out with great success. SINFONI has two parts, the Near Infrared Integral Field Spectrograph, also known as SPIFFI (SPectrometer for Infrared Faint Field Imaging), and the Adaptive Optics Module. SPIFFI was developed at the Max Planck Institute for Extraterrestrische Physik (MPE) (Garching, Germany), in a collaboration with the Nederlandse Onderzoekschool Voor Astronomie (NOVA) in Leiden, the Netherlands Foundation for Research in Astronomy (ASTRON) (The Netherlands), and the European Southern Observatory (ESO) (Garching, Germany). 
The Adaptive Optics (AO) Module was developed by ESO. Once fully commissioned, SINFONI will provide adaptive-optics assisted Integral Field Spectroscopy in the near-infrared 1.1 - 2.45 µm waveband. This advanced technique provides simultaneous spectra of numerous adjacent regions in a small sky field, e.g., of an interstellar nebula, the stars in a dense stellar cluster or a galaxy. Astronomers refer to these data as "3D-spectra" or "data cubes" (i.e., one spectrum for each small area in the two-dimensional sky field), cf. Appendix A. The SINFONI Adaptive Optics Module is based on a 60-element curvature system, similar to the Multi Application Curvature Adaptive Optics devices (MACAO), developed by the ESO Adaptive Optics Department and of which three have already been installed at the VLT (ESO PR 11/03); the last one in August 2004. Provided a sufficiently bright reference source ("guide star") is available within 60 arcsec of the observed field, the SINFONI AO module will ultimately offer diffraction-limited images (resolution 0.050 arcsec) at a wavelength of 2 µm. At the centre of the field, partial correction can be performed with guide stars as faint as magnitude 17.5. In about 6-months' time, it will benefit from a sodium Laser Guide Star, achieving a much better sky coverage than what is now possible. SPIFFI is a fully cryogenic near-infrared integral field spectrograph allowing observers to obtain simultaneously spectra of 2048 pixels within a 64 x 32 pixel field-of-view. In conjunction with the AO Module, it performs spectroscopy with slit-width sampling at the diffraction limit of an 8-m class telescope. For observations of very faint, extended celestial objects, the spatial resolution can be degraded so that both sensitivity and field-of-view are increased. SPIFFI works in the near-infrared wavelength range (1.1 - 2.45 µm) with a moderate spectral resolving power (R = 1500 to 4500). 
More information about the way SPIFFI functions can be found in Appendix A. First Light with SINFONI's Adaptive Optics Module ESO PR Photo 24c/04 ESO PR Photo 24c/04 SINFONI AO "First Light" Image [Preview - JPEG: 400 x 482 pix - 106k] [Normal - JPEG: 800 x 963 pix - 256k] ESO PR Photo 24d/04 ESO PR Photo 24d/04 AO-corrected image of 17.5-magnitude Star [Preview - JPEG: 509 x 400 pix - 80k] [Normal - JPEG: 1018 x 800 pix - 182k] Captions: ESO PR Photo 24c/04 shows the "First Light" image obtained with the SINFONI AO Module and a high-angular-resolution near-infrared Test Camera during the night of May 31 - June 1, 2004. The observed star has a magnitude of 11 and the seeing conditions were median. The diffraction limit at wavelength 2.2 µm of the 8.2-m telescope (FWHM 0.06 arcsec) was reached and is indicated by the bar. ESO PR Photo 24d/04: Image of a very faint guide star (visual magnitude 17.5), obtained with the SINFONI AO Module. To the right, the seeing-limited K-band image (FWHM 0.38 arcsec). To the left, the AO-corrected image (FWHM 0.145 arcsec). The ability to perform AO corrections on very faint guide objects is essential for SINFONI in order to observe very faint extragalactic objects. Because of the complexity of SINFONI, with its two modules, it was decided to perform the installation on the 8.2-m VLT Yepun telescope in two steps. The Adaptive Optics module was completely dismounted at ESO-Garching (Germany) and the corresponding 6 tons of equipment was air-freighted from Frankfurt to Santiago de Chile. The shipment then travelled by road and arrived at the Paranal Observatory on April 21, 2004. After 6 weeks of reintegration and testing in the Integration Hall, the AO Module was mounted on Yepun on May 30 - 31, together with a high-angular-resolution near-infrared Test Camera, cf. PR Photo 24a/04. Technical "First-Light" with this system was achieved around midnight on May 31st by observing an 11th-magnitude star, cf. 
PR Photo 24c/04, immediately reaching the theoretical diffraction limit of the 8.2-m telescope (0.06 arcsec) at this wavelength (2.2 µm). Following this early success, the ESO AO team continued the full on-sky tuning and testing of the AO Module until June 8, setting in particular a new world record by reaching a limiting guide-star magnitude of 17.5, two-and-a-half magnitudes (a factor of 10) fainter than ever achieved with any telescope! The ability to perform AO corrections on very faint guide objects is essential for SINFONI in order to observe very faint extragalactic objects. During this commissioning period, test observations were performed of the binary asteroid (22) Kalliope and its moon Linus. They were made by the ESO AO team and served to demonstrate the high performance of this ESO-built Adaptive Optics (AO) system at near-infrared wavelengths. More information about these observations, including a movie of the orbital motion of Linus, is available in Appendix B. "First Light" with SINFONI ESO PR Photo 24e/04 ESO PR Photo 24e/04 SINFONI Undergoing Balancing and Flexure Tests at VLT Yepun [Preview - JPEG: 427 x 400 pix - 269k] [Normal - JPEG: 854 x 800 pix - 730k] ESO PR Photo 24f/04 ESO PR Photo 24f/04 SINFONI "First Light" Spectrum [Preview - JPEG: 427 x 400 pix - 94k] [Normal - JPEG: 854 x 800 pix - 222k] Captions: ESO PR Photo 24e/04 shows SINFONI attached to the Cassegrain focus of the 8.2-m VLT Yepun telescope during balancing and flexure tests. ESO PR Photo 24f/04: "First Light" "data cube" spectrum obtained with SINFONI on the bright star HD 130163 on July 9, 2004, as seen on the science data computer screen. This 7th-magnitude A0 V star was observed in the near-infrared H-band with a moderate seeing of 0.8 arcsec. The width of the slitlets in this image is 0.25 arcsec. The exposure time was 1 second. The fully integrated SPIFFI module was air-freighted from Frankfurt to Santiago de Chile and arrived at Paranal on June 5, 2004. 
The instrument was then cooled down to -195 °C and an extensive test programme was carried out during the next two weeks. Meanwhile, the AO Module was removed from the telescope and the "wedding" with SPIFFI was celebrated on June 20 in the Paranal Integration Hall. All went well and the first AO-corrected test spectra were obtained immediately thereafter. The extensive tests of SINFONI continued at this site until July 7, 2004, when the instrument was declared fit for work at the telescope. The installation at the 8.2-m VLT Yepun telescope was then accomplished on July 8 - 9, cf. PR Photos 24b/04 and 24e/04. "First Light" was achieved in the early evening of July 9, 2004, only 30 min after the telescope enclosure was opened. At 19:30 local time, SINFONI recorded the first AO-corrected "data cube" with spectra of HD 130163, cf. PR Photo 24f/04. This 7th-magnitude star was observed in the near-infrared H-band with a moderate seeing of 0.8 arcsec. Test Observations with SINFONI ESO PR Photo 24k/04 ESO PR Photo 24k/04 SINFONI Observations of the Galactic Centre [Preview - JPEG: 427 x 400 pix - 213k] [Normal - JPEG: 854 x 800 pix - 511k] ESO PR Photo 24o/04 ESO PR Photo 24o/04 SINFONI Observations of the Distant Galaxy Pair BX 404/405 [Preview - JPEG: 481 x 400 pix - 86k] [Normal - JPEG: 962 x 800 pix - 251k] Captions: ESO PR Photo 24k/04: The coloured image (background) shows a three-band composite image (H, K, and L-bands) obtained with the AO imager NACO on the 8.2-m VLT Yepun telescope. On July 15, 2004, the new SINFONI instrument, mounted at the Cassegrain focus of the same telescope, observed the innermost region (the central 1 x 1 arcsec) of the Milky Way Galaxy in the combined H+K band (1.45 - 2.45 µm) during a total of 110 min "on-source". The insert (upper left) shows the immediate neighbourhood of the central black hole as seen with SINFONI. The position of the black hole is marked with a yellow circle. 
Later in the night (03:37 UT on July 16), a flare from the black hole occurred (a zoom-in is shown in the insert at the lower left) and the first-ever infrared spectrum of this phenomenon was observed. It was also possible to register for the first time in great detail the near-infrared spectra of young massive stars orbiting the black hole; some of these are shown in the inserts at the upper right; stars are identified by their "S"-designations. The lower right inserts show the spectra of stars in "IRS 13 E", a very compact cluster of very young and massive stars, located about 3.5 arcsec to the south-west of the black hole. The wavefront reference ("guide") star employed for these AO observations is comparatively faint (red magnitude approx. 15), and it is located about 20 arcsec away from the field centre. The seeing during these observations was about 0.6 arcsec. The width of the slitlets was 0.025 arcsec. See Appendix C for more detail. ESO PR Photo 24o/04 shows the distant galaxy pair BX 404/405, as recorded in the K-band (wavelength 2 µm, centered on the redshifted H-alpha line), without AO-correction because of the lack of a nearby, sufficiently bright "guide" star. The width of each slitlet was 0.25 arcsec and the seeing about 0.6 arcsec. The integration time on the galaxy was 2 hours "on-source". The image shown has been reconstructed by combining all of the spectral elements around the H-alpha spectral line. The spectrum of BX 405 (upper right) clearly reveals signs of a velocity shear while that of BX 404 does not. This may be a sign of rotation, a possible signature of a young disc in this galaxy. More information can be found in Appendix G. Until July 22, test observations on a number of celestial objects were performed in order to tune the instrument, to evaluate the performance and to demonstrate its astronomical capabilities. In particular, spectra were obtained of various highly interesting celestial objects and sky regions. 
Details about these observations (and some images obtained with the AO Module alone) are available in the Appendices to this Press Release: * a video of the motion of the moon Linus around the main-belt asteroid (22) Kalliope, providing the best view of this binary system obtained so far (Appendix B), * images and first-ever detailed spectra of many of the stars that move near the massive black hole at the Galactic Centre, with crucial information on the nature of the individual stars and their motions (Appendix C), * images and spectra of the heavily dust-obscured, active centre of the Circinus galaxy, one of the closest active galaxies, showing ordered rotation in this area and distinct broad and narrow components of the spectral line of Ca7+-ions (Appendix D), * images and spectra of the less obscured central area of NGC 7469, a more distant active galaxy, with spectral lines of molecular hydrogen and carbon monoxide showing a very different distribution of these species (Appendix E), * images and spectra of the Ultra-Luminous Infrared Galaxy (ULIRG) NGC 6240, a typical galaxy merger, displaying important differences between the two nuclei (Appendix F), and * images and spectra of the young star-forming galaxies BX 404/405, casting more light on the formation of disks in spiral galaxies (Appendix G). The SINFONI Teams ESO PR Photo 24g/04 ESO PR Photo 24g/04 Members of the SINFONI Adaptive Optics Commissioning Team [Preview - JPEG: 646 x 400 pix - 198k] [Normal - JPEG: 1291 x 800 pix - 618k] ESO PR Photo 24h/04 ESO PR Photo 24h/04 Members of the SPIFFI Commissioning Team [Preview - JPEG: 491 x 400 pix - 193k] [Normal - JPEG: 982 x 800 pix - 482k] Captions: ESO PR Photo 24g/04 Members of the SINFONI Adaptive Optics Commissioning Team in the VLT Control Room in the night between June 7 - 8, 2004. 
From left to right and top to bottom: Thomas Szeifert, Sebastien Tordo, Stefan Stroebele, Jerome Paufique, Chris Lidman, Robert Donaldson, Enrico Fedrigo, Markus Kissler-Patig, Norbert Hubin, Henri Bonnet. ESO PR Photo 24h/04: Members of the SPIFFI Commissioning Team on August 17. From left to right, Roberto Abuter, Frank Eisenhauer, Andrea Gilbert and Matthew Horrobin. The first SINFONI results have been greeted with enthusiasm, in particular by the team of astronomers and engineers from the consortium of German and Dutch institutes and ESO who worked on the development of SINFONI for nearly 7 years. Some of the members of the Commissioning Teams are depicted in PR Photos 24g/04 and 24h/04; in addition to the SPIFFI team members present on the second photo, Walter Bornemann, Reinhard Genzel, Hans Gemperlein, and Stefan Huber also worked on the reintegration/commissioning at Paranal. Notes [1] This press release is issued in coordination between ESO, the Max-Planck-Institute for Extraterrestrial Physics (MPE) in Garching, Germany, and the Nederlandse Onderzoekschool Voor Astronomie in Leiden, The Netherlands. A German version is available at http://www.mpg.de/bilderBerichteDokumente/dokumentation/pressemitteilungen/2004/pressemitteilung20040824/index.html and a Dutch version at http://www.astronomy.nl/inhoud/pers/persberichten/30_08_04.html. 
[2] The SINFONI team consists of Roberto Abuter, Andrew Baker, Walter Bornemann, Ric Davies, Frank Eisenhauer (SPIFFI Principal Investigator), Hans Gemperlein, Reinhard Genzel (MPE Director), Andrea Gilbert, Armin Goldbrunner, Matthew Horrobin, Stefan Huber, Christof Iserlohe, Matthew Lehnert, Werner Lieb, Dieter Lutz, Nicole Nesvadba, Claudia Röhrle, Jürgen Schreiber, Linda Tacconi, Matthias Tecza, Niranjan Thatte, Harald Weisz (Max-Planck-Institut für Extraterrestrische Physik, Garching, Germany), Anthony Brown, Paul van der Werf (NOVA, Leiden, The Netherlands), Eddy Elswijk, Johan Pragt, Jan Kragt, Gabby Kroes, Ton Schoenmaker, Rik ter Horst (ASTRON, Dwingeloo, The Netherlands), Henri Bonnet (SINFONI Project Manager), Roberto Castillo, Ralf Conzelmann, Romuald Damster, Bernard Delabre, Christophe Dupuy, Robert Donaldson, Christophe Dumas, Enrico Fedrigo, Gert Finger, Gordon Gillet, Norbert Hubin (Head of Adaptive Optics Dept.), Andreas Kaufer, Franz Koch, Johann Kolb, Andrea Modigliani, Guy Monnet (Head of Telescope Systems Division), Chris Lidman, Jochen Liske, Jean Louis Lizon, Markus Kissler-Patig (SINFONI Instrument Scientist), Jerome Paufique, Juha Reunanen, Silvio Rossi, Riccardo Schmutzer, Armin Silber, Stefan Ströbele (SINFONI System Engineer), Thomas Szeifert, Sebastien Tordo, Leander Mehrgan, Joerg Stegmeier, Reinhold Dorn (European Southern Observatory). 
Contacts Frank Eisenhauer Max-Planck-Institut für Extraterrestrische Physik (MPE) Garching, Germany Phone: +49-89-30000-3563 Email: eisenhau@mpe.mpg.de Paul van der Werf Leiden Observatory Leiden, The Netherlands Phone: +31-71-5275883 Email: pvdwerf@strw.leidenuniv.nl Henri Bonnet European Southern Observatory (ESO) Email: hbonnet@eso.org Reinhard Genzel Max-Planck-Institut für Extraterrestrische Physik (MPE) Garching, Germany Phone: +49-89-30000-3280 Email: Norbert Hubin European Southern Observatory (ESO) Email: nhubin@eso.org Appendix A: Integral Field Spectroscopy as a Powerful Discovery Tool ESO PR Photo 24i/04 ESO PR Photo 24i/04 How Integral Field Spectroscopy Works [Preview - JPEG: 400 x 425 pix - 127k] [Normal - JPEG: 800 x 850 pix - 366k] Caption: ESO PR Photo 24i/04 shows the principle of Integral Field Spectroscopy (IFS). A detailed explanation is given in the text. How does SINFONI work? What is Integral Field Spectroscopy (IFS)? The idea of IFS is to obtain a spectrum of each defined spatial element ("spaxel") in the field-of-view. Several techniques are available for this - in SINFONI, the slicer principle is applied. As shown in PR Photo 24i/04: * the two-dimensional field-of-view is cut into slices, the so-called slitlets (short slits in contrast to normal long-slit spectroscopy), * the slitlets are then arranged next to each other to form a pseudo-long-slit, * a grating is used to disperse the light, and * the photons are detected with a Near-InfraRed detector. Following data reduction, the set of generated spectra can be re-arranged in the computer to form a 3-dimensional "data cube" with two spatial dimensions and one wavelength dimension. Thus the term "3D-Spectroscopy" is sometimes used for IFS. 
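The slicing-and-rearranging just described can be sketched in a few lines of Python. This toy model (our own illustration, not the SINFONI data-reduction pipeline, and with made-up field dimensions) treats the detector as a stack of spectra, one per spaxel, with the slitlets laid end to end into a pseudo-long-slit, and rebuilds the data cube by indexing:

```python
# Toy field: 4 slitlets of 3 spaxels each, 5 wavelength channels.
N_SLITLETS, SPAXELS_PER_SLITLET, N_LAMBDA = 4, 3, 5

# Detector rows: slitlets laid end to end, one spectrum per spaxel.
# Each "spectrum" here is just a list of (row, wavelength-channel) tags.
detector = [[(row, lam) for lam in range(N_LAMBDA)]
            for row in range(N_SLITLETS * SPAXELS_PER_SLITLET)]

def to_cube(det):
    """Re-arrange pseudo-long-slit spectra into cube[y][x] -> spectrum."""
    cube = []
    for y in range(N_SLITLETS):              # each slitlet is one image row
        cube.append([det[y * SPAXELS_PER_SLITLET + x]
                     for x in range(SPAXELS_PER_SLITLET)])
    return cube

cube = to_cube(detector)
# The spectrum of the spaxel at (y=2, x=1) came from detector row 7.
assert cube[2][1] == detector[2 * SPAXELS_PER_SLITLET + 1]
```

The same indexing, run in reverse, is how a single wavelength slice of the cube can be re-assembled into an image, which is the operation used later in this release to build the H-alpha image of BX 404/405.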
Appendix B: Linus' orbital motion around Kalliope ESO PR Photo 24j/04 ESO PR Photo 24j/04 Asteroid Kalliope and its Moon Linus [Preview - JPEG: 400 x 427 pix - 50k] [Normal - JPEG: 800 x 854 pix - 136k] ESO PR Video 07/04 ESO PR Video 07/04 The Motion of Linus around Kalliope [MPG: 800 x 800 pix - 128k] [AVI : 800 x 800 pix - 176k] [Animated GIF : 800 x 800 pix - 592k] Caption: ESO PR Photo 24j/04 and Video Clip 07/04 show the best-ever images of the moon Linus orbiting Asteroid (22) Kalliope. They were obtained with the SINFONI Adaptive Optics Module and a high-angular-resolution near-infrared Test Camera during commissioning in June 2004. At minimum separation, the satellite approaches Kalliope to 0.33 arcsec, i.e. the angle under which a 1 Euro coin is seen at a distance of 15 kilometers. At maximum separation, the angular distance is nearly twice as large. For clarity, the brightness of the asteroid has been artificially decreased by a factor of 15, to the level of the moon. This image processing also makes it possible to perceive the variation of the asteroid's shape as Kalliope spins around its own axis with a period of 4.15 hours. The asteroid, with an angular diameter of 0.11 arcsec, is barely resolved in these VLT images (resolution 0.06 arcsec at wavelength 2.2 µm). The satellite measures about 50 km across and orbits Kalliope at a distance of about 1000 kilometers. ESO Video Clip 07/04 shows the 3.6-day orbital motion of the satellite (moon) Linus around the main-belt asteroid (22) Kalliope. Kalliope orbits the Sun between Mars and Jupiter; it measures about 180 km across and the diameter of its moon is 50 km. This system was observed with the SINFONI AO Module for short periods over four consecutive nights. Linus moves around Kalliope in a circular orbit, at a distance of 1000 km and with a direction of motion similar to the rotation of Kalliope (prograde rotation); the orbital plane of the moon was seen under a 60°-angle with respect to the line-of-sight. 
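The coin analogy in the caption is easy to verify with the small-angle approximation (the helper name is ours; we assume the standard 1-Euro coin diameter of 23.25 mm, which is not stated in the text):

```python
ARCSEC_PER_RADIAN = 206264.8  # (180 / pi) * 3600

def angular_size_arcsec(size_m: float, distance_m: float) -> float:
    """Small-angle approximation: angle = size / distance, in arcseconds."""
    return size_m / distance_m * ARCSEC_PER_RADIAN

# A 1-Euro coin (23.25 mm) seen from 15 km subtends about 0.32 arcsec,
# consistent with the 0.33 arcsec minimum separation quoted above.
print(round(angular_size_arcsec(0.02325, 15_000), 2))
```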
The unobserved parts of this orbit are indicated by a dotted line. A hypothetical observer on the surface of Kalliope would live in a strange world: the days would be 14 hours long, and the sky would be filled by a moon five times bigger than our own! The brightness changes of the Linus images are due to variations in the sky conditions at the time of the observations. Rapid changes in the atmosphere result in variations in the sharpness of the corrected images. During the first two nights, seeing conditions were very good, but less so during the last two nights; this can be seen as a slight loss of sharpness of the corresponding satellite images. The discovery of this asteroid satellite, named Linus after the son of Kalliope, the Greek muse of heroic poetry, was first reported in September 2001 by a group of astronomers using the Canada-France-Hawaii Telescope on Mauna Kea (Hawaii, USA). Although Kalliope was previously believed to consist of metal-rich material, the discovery of Linus allowed scientists to determine its mean density as ~ 2 g/cm3, a rather low value that is not consistent with a metal-rich object. Kalliope is now believed to be a "rubble-pile" stony asteroid. Its porous interior is due to a catastrophic collision with another, smaller asteroid early in its history, which also gave birth to Linus. Other references related to Kalliope can be found in the International Astronomical Union Circular (IAUC) 7703 (2001) and a research article "A low density M-type asteroid in the main-belt" by Margot and Brown (Science 300, 193, 2003). Appendix C: Stars at the Galactic Centre and a Flare from the Black Hole ESO PR Photo 24k/04 ESO PR Photo 24k/04 SINFONI Observations of the Galactic Centre [Preview - JPEG: 427 x 400 pix - 213k] [Normal - JPEG: 854 x 800 pix - 511k] Caption: ESO PR Photo 24k/04: The coloured image (background) shows a three-band composite image (H, K, and L-bands) obtained with the AO imager NACO on the 8.2-m VLT Yepun telescope. 
On July 15, 2004, the new SINFONI instrument, mounted at the Cassegrain focus of the same telescope, observed the innermost region (the central 1 x 1 arcsec) of the Milky Way Galaxy in the combined H+K band (1.45 - 2.45 µm) during a total of 110 min "on-source". The insert (upper left) shows the immediate neighbourhood of the central black hole as seen with SINFONI. The position of the black hole is marked with a yellow circle. Later in the night (03:37 UT on July 16), a flare from the black hole occurred (a zoom-in is shown in the insert at the lower left) and the first-ever infrared spectrum of this phenomenon was observed. It was also possible to register for the first time in great detail the near-infrared spectra of young massive stars orbiting the black hole; some of these are shown in the inserts at the upper right; stars are identified by their "S"-designations. The lower right inserts show the spectra of stars in "IRS 13 E", a very compact cluster of very young and massive stars, located about 3.5 arcsec to the south-west of the black hole. The wavefront reference ("guide") star employed for these AO observations is comparatively faint (red magnitude approx. 15), and it is located about 20 arcsec away from the field centre. The seeing during these observations was about 0.6 arcsec. The width of the slitlets was 0.025 arcsec. The Milky Way Centre is a unique laboratory for studying physical processes that are thought to be common in galactic nuclei. The Galactic Centre is not only the best studied case of a supermassive black hole, but the region also hosts the largest population of high-mass stars in the Galaxy. Diffraction-limited near-IR integral field spectroscopy offers a unique opportunity for exploring in detail the physical phenomena responsible for the active phases of this supermassive black hole, and for studying the dynamics and evolution of the star cluster in its immediate vicinity. 
Earlier observations with the VLT have been described in ESO PR 17/02 and ESO PR 26/03. With the new SINFONI observations, some of which are displayed in PR Photo 24k/04, it was possible to obtain for the first time very detailed near-infrared spectra of several young and massive stars orbiting the black hole at the centre of our galaxy. The presence of spectral signatures from ionised hydrogen (the Brackett-gamma line) and helium clearly classifies these stars as young, massive early-type stars. They are comparatively short-lived, and the large fraction of such stars in the immediate vicinity of a supermassive black hole is a mystery. The first SINFONI observations of the stellar populations in the innermost Galactic Centre region will now help to explain the origin and formation process of those stars. Moreover, the observed spectral features allow measuring their motions along the line-of-sight (the "radial velocities"). Combining them with the motions in the sky (the "proper motions") obtained from previous observations with the NACO instrument (ESO PR 17/02), it is now possible to determine all orbital parameters for the "S"-stars. This in turn makes it possible to measure directly the mass and the distance of the supermassive black hole at the centre of our galaxy. But not only this! Even more exciting, it became possible to register for the first time the infrared spectrum of a flare from the Galactic Centre black hole (cf. ESO PR 26/03). From the earlier imaging observations, it is known that such outbursts occur approximately once every 4 hours, giving us a uniquely detailed glimpse of a black hole feeding on left-over gas in its close surroundings. It is only the innovative technique of SINFONI - providing spectra for every pixel in a diffraction-limited image - that made it possible to capture the infrared spectrum of such a flare. Such spectra from SINFONI will soon allow a better understanding of the physics and mechanisms involved in the flare emission. 
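The radial velocities mentioned above come from the Doppler shift of lines such as Brackett-gamma (rest wavelength approximately 2.1661 µm). A minimal sketch of the underlying arithmetic, with an invented observed wavelength purely for illustration:

```python
C_KM_S = 299_792.458   # speed of light, km/s
BRG_REST_UM = 2.1661   # Brackett-gamma rest wavelength, microns

def radial_velocity_km_s(observed_um: float, rest_um: float = BRG_REST_UM) -> float:
    """Non-relativistic Doppler shift: v = c * (lambda_obs - lambda_rest) / lambda_rest."""
    return C_KM_S * (observed_um - rest_um) / rest_um

# Hypothetical measurement: line centred at 2.1676 microns means the
# star recedes at roughly +200 km/s along the line of sight.
print(round(radial_velocity_km_s(2.1676)))
```

Combined with the two proper-motion components from NACO imaging, this third velocity component is what closes the set of orbital parameters for each "S"-star.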
Appendix D: The Active Circinus Galaxy ESO PR Photo 24l/04 ESO PR Photo 24l/04 SINFONI Observations of the Circinus Galaxy [Preview - JPEG: 824 x 400 pix - 324k] [Normal - JPEG: 412 x 800 pix - 131k] Caption: ESO PR Photo 24l/04: The Circinus galaxy - one of the nearest galaxies with an active centre (AGN) - was observed in the K-band (wavelength 2 µm) using the nucleus to guide the SINFONI AO Module. The seeing was 0.5 arcsec and the width of each slitlet 0.025 arcsec; the total integration time on the galaxy was 40 min. At the top is a K-band image of the central arcsec of the galaxy (left insert) and a K-band spectrum of the nucleus (right). In the lower half are images (left) in the light of ionised hydrogen (the Brackett-gamma line) and molecular hydrogen lines (H2), together with their combined rotation curve (middle), as well as images of the broad and narrow components of the high excitation [Ca VIII] spectral line (right). The false-colours in the images represent regions of different surface brightness. At a distance of about 13 million light-years, the Circinus galaxy is one of the nearest galaxies with a very active black hole at the centre. It is seen behind a highly obscured sky field, only 3° from the Milky Way main plane in the southern constellation of this name ("The Pair of Compasses"). Using the nucleus of this galaxy to guide the AO Module, SINFONI was able to zoom in on the central arcsec region - only 60 light-years across - and to map the immediate environment of the black hole at the centre, cf. PR Photo 24l/04. The K-band (wavelength 2 µm) image (insert at the upper left) displays a very compact structure; the emission recorded at this wavelength comes from hot dust heated by radiation from the accretion disc around the black hole. However, as may be seen in the two inserts below, both the emission from ionized hydrogen (the Brackett-gamma line) and molecular hydrogen (H2) are more extended, up to about 30 light-years. 
As these spectral lines (cf. the spectral tracing at the upper right) are quite narrow and show ordered rotation up to ±40km/s, it is likely that they arise from star formation in a disk around the central black hole. A surprise from the SINFONI observations is that the spectral line of Ca7+-ions (seven times ionised Calcium atoms, or [Ca VIII], which are produced by the ionizing effect of very energetic ultraviolet radiation) in this area appears to have distinct broad and narrow components (images at the lower right). The broad component is centred on the region around the black hole, and probably arises in the so-called "Broad-Line Region". The narrow component is displaced to the north-west and most likely indicates a region where there is a direct line-of-sight from the black hole to some gas clouds. Appendix E: The Active Nucleus in NGC 7469 ESO PR Photo 24m/04 ESO PR Photo 24m/04 SINFONI Observations of NGC 7469 [Preview - JPEG: 470 x 400 pix - 116k] [Normal - JPEG: 939 x 800 pix - 324k] Caption: ESO PR Photo 24m/04: NGC 7469 was observed in the K-band (wavelength 2 µm) using the nucleus to guide the adaptive optics. The width of each slitlet was 0.025 arcsec and the seeing was 1.1 arcsec. The total integration time on the galaxy was 70 min "on-source". To the upper left is a K-band image (2 µm) of the central arcsec of NGC 7469 and to the upper right, the spectrum of the nucleus. To the lower left is an image of the molecular hydrogen line, together with its rotation curve. There is an image in the light of ionized hydrogen (Brackett-gamma line) at the lower middle and an image of the CO 2-0 absorption bandhead which traces young stars (lower right). The galaxy NGC 7469 (seen north of the celestial equator in the constellation Pegasus) also hosts an active galactic nucleus, but contrary to the Circinus galaxy, it is relatively unobscured. 
Since NGC 7469 is at a much larger distance, about 225 million light-years, the 0.15 arcsec resolution achieved by SINFONI here corresponds to about 165 light-years. The K-band image (PR Photo 24m/04) shows the bright, compact nucleus of this galaxy, and the spectrum displays very broad lines of ionized hydrogen (the Brackett-gamma line) and helium. This emission arises in the "Broad-Line" region which is still unresolved, as shown by the Brackett-gamma image. On the other hand, the molecular hydrogen extends up to 650 light-years from the centre and shows an ordered rotation. In contrast, the image obtained in the light of CO-molecules - which directly traces late-type stars typical for starbursts - appears very compact. These results confirm those obtained by means of earlier AO observations, but with the new SINFONI data corresponding to various spectral lines, the detailed, two-dimensional structure and motions close to the central black hole are now clearly revealed for the first time. Appendix F: The Galaxy Merger NGC 6240 ESO PR Photo 24n/04 ESO PR Photo 24n/04 SINFONI Observations of NGC 6240 [Preview - JPEG: 506 x 400 pix - 96k] [Normal - JPEG: 1011 x 800 pix - 277k] Caption: ESO PR Photo 24n/04: The galaxy merger system NGC 6240 was observed with SINFONI in the K-band (wavelength 2 µm). This object has two nuclei; the image of the southern one is also shown enlarged, together with the corresponding spectrum. The width of each slitlet was 0.025 arcsec and the seeing was 0.8 arcsec. The total integration time on the galaxy was 80 min. The false-colours in the images represent regions of different surface brightness. The infrared-luminous galaxy NGC 6240 in the constellation Ophiuchus (The Serpent-holder) is in many ways the prototype of a gas-rich, infrared-(ultra-)luminous galaxy merger. This system has two rapidly rotating, massive bulges/nuclei at a projected angular separation of 1.6 arcsec. 
Each of them contains a powerful starburst region and a luminous, highly obscured, X-ray-emitting supermassive black hole. As such, NGC 6240 is probably a nearby example of dust and gas-rich galaxy merger systems seen at larger distances. NGC 6240 is also the most luminous nearby source of molecular hydrogen emission. It was observed in the K-band (wavelength 2 µm), using a faint star at a distance of about 35 arcsec as the AO "guide" star. The starburst activity is traced by the ionized gas and occurs mostly at the two nuclei in regions measuring around 650 light-years across. The distribution of the molecular gas is very different. It follows a complex spatial and dynamical pattern with several extended streamers. The high-resolution SINFONI data now make it possible - for the first time - to investigate the distribution and motion of the molecular gas, as well as the stellar population in this galaxy with a "resolution" of about 80 light-years. Appendix G: Motions in the Young Star-Forming Galaxies BX 404/405 ESO PR Photo 24o/04 ESO PR Photo 24o/04 SINFONI Observations of the Distant Galaxy Pair BX 404/405 [Preview - JPEG: 481 x 400 pix - 86k] [Normal - JPEG: 962 x 800 pix - 251k] Caption: ESO PR Photo 24o/04 shows the distant galaxy pair BX 404/405, as recorded in the K-band (wavelength 2 µm, centered on the redshifted H-alpha line), without AO-correction because of the lack of a nearby, sufficiently bright "guide" star. The width of each slitlet was 0.25 arcsec and the seeing about 0.6 arcsec. The integration time on the galaxy was 2 hours "on-source". The image shown has been reconstructed by combining all of the spectral elements around the H-alpha spectral line. The spectrum of BX 405 (upper right) clearly reveals signs of a velocity shear while that of BX 404 does not. This may be a sign of rotation, a possible signature of a young disc in this galaxy. How and when did the discs in spiral galaxies like the Milky Way form? 
This is one of the longest-standing puzzles in modern cosmology. Two general models presently describe how disk galaxies may form. One is based on a scenario in which there is a gentle collapse of gas clouds that collide and lose momentum. They sink towards a "centre", thereby producing a disc of gas in which stars are formed. The other implies that galaxies grow through repeated mergers of smaller gas-rich galaxies. Together they first produce a spherical mass distribution at the centre and any remaining gas then settles into a disk. Recent studies of stars in the Milky Way system and nearby spiral galaxies suggest that the discs now present in these systems formed about 10,000 million years ago. This corresponds to the epoch when we observe galaxies at redshifts of about 1.5 - 2.5. Interestingly, studies of galaxies at these distances seem consistent with current ideas about when disks may have formed, and there is some evidence that most of the mass in the galaxies was also assembled at that time. In any case, the most direct way to verify such a connection is to observe galaxies at redshifts 1.5-2.5, in order to elucidate whether their observed properties are consistent with velocity patterns of rotating disks of gas and stars. This would be visible as a "velocity shear", i.e., a significant difference in velocity of neighbouring regions. In addition, such observations may provide a good test of the above-mentioned hypotheses for how discs may have formed. Various groups of astrophysicists in the US and Europe have developed observational selection criteria which may be used to identify galaxies with properties similar to those expected for young disc galaxies. Observations were made with SINFONI of one of these objects, the galaxy pair BX 404/405 discovered by a group of astronomers at Caltech (USA). For BX 405, clear signs were found of a "velocity shear" like that expected for rotation of a forming disk, but the other object does not show this. 
It may thus be that the properties of star-forming galaxies at this epoch are quite complex and that only some of them have young disks.
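The reason BX 404/405 were observed around 2 µm follows from the redshift: the H-alpha line (rest wavelength 0.6563 µm) is stretched by a factor (1 + z). A quick check, taking z = 2.1 as an illustrative value within the 1.5 - 2.5 range quoted above (the pair's actual redshift is not stated here):

```python
H_ALPHA_REST_UM = 0.6563  # rest wavelength of H-alpha, microns

def observed_wavelength_um(rest_um: float, z: float) -> float:
    """Cosmological redshift: lambda_obs = (1 + z) * lambda_rest."""
    return (1.0 + z) * rest_um

# At z ~ 2.1, H-alpha lands near 2.03 microns, i.e. inside the K-band
# (roughly 2.0 - 2.45 microns) in which the pair was observed.
print(round(observed_wavelength_um(H_ALPHA_REST_UM, 2.1), 2))
```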

  5. Post-hurricane Joaquin Coastal Oblique Aerial Photographs Collected from the South Carolina/North Carolina Border to Montauk Point, New York, October 7–9, 2015

    USGS Publications Warehouse

    Morgan, Karen L.M.

    2016-06-27

The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On October 7–9, 2015, the USGS conducted an oblique aerial photographic survey of the coast from the South Carolina/North Carolina border to Montauk Point, New York (fig. 1), aboard a Cessna 182 (aircraft) at an altitude of 500 feet (ft) and approximately 1,200 ft offshore (fig. 2). This mission was conducted to collect post-Hurricane Joaquin data for assessing incremental changes in the beach and nearshore area since the last surveys, which were flown in September 2014 (Virginia to New York: Morgan, 2015), November 2012 (northern North Carolina: Morgan and others, 2014), and May 2008 (southern North Carolina: unpublished report); the data can also be used to assess future coastal change. The photographs in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page.
Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. This KML file can be found in the kml folder.
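The EXIF GPS tags that tools such as ExifTool write into a JPEG header store each coordinate as unsigned degree/minute/second rationals, with the sign carried by a separate reference tag (N/S or E/W). A minimal sketch of that conversion, not part of the USGS workflow and with an illustrative coordinate:

```python
from fractions import Fraction

def to_exif_dms(decimal_deg):
    """Convert a decimal coordinate to the (degrees, minutes, seconds)
    rationals used by the EXIF GPSLatitude/GPSLongitude tags."""
    value = abs(decimal_deg)
    degrees = int(value)
    minutes = int((value - degrees) * 60)
    seconds = round((value - degrees - minutes / 60) * 3600, 4)
    return (Fraction(degrees), Fraction(minutes),
            Fraction(seconds).limit_denominator(10_000))

def hemisphere(decimal_deg, is_latitude):
    """EXIF stores the sign in a reference tag: GPSLatitudeRef (N/S)
    or GPSLongitudeRef (E/W)."""
    if is_latitude:
        return "N" if decimal_deg >= 0 else "S"
    return "E" if decimal_deg >= 0 else "W"

# Illustrative point near Montauk Point, New York (not a survey coordinate)
lat = 41.0717
print(to_exif_dms(lat), hemisphere(lat, True))
```

The same shape applies to longitude, with west-of-Greenwich values becoming positive rationals plus a "W" reference.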

  6. VLTI First Fringes with Two Auxiliary Telescopes at Paranal

    NASA Astrophysics Data System (ADS)

    2005-03-01

    World's Largest Interferometer with Moving Optical Telescopes on Track Summary The Very Large Telescope Interferometer (VLTI) at Paranal Observatory has just seen another extension of its already impressive capabilities by combining interferometrically the light from two relocatable 1.8-m Auxiliary Telescopes. Following the installation of the first Auxiliary Telescope (AT) in January 2004 (see ESO PR 01/04), the second AT arrived at the VLT platform by the end of 2004. Shortly thereafter, during the night of February 2 to 3, 2005, the two high-tech telescopes teamed up and quickly succeeded in performing interferometric observations. This achievement heralds an era of new scientific discoveries. Both Auxiliary Telescopes will be offered from October 1, 2005 to the community of astronomers for routine observations, together with the MIDI instrument. By the end of 2006, Paranal will be home to four operational ATs that may be placed at 30 different positions and thus be combined in a very large number of ways ("baselines"). This will enable the VLTI to operate with enormous flexibility and, in particular, to obtain extremely detailed (sharp) images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. 
PR Photo 07a/05: Paranal Observing Platform with AT1 and AT2 PR Photo 07b/05: AT1 and AT2 with Open Domes PR Photo 07c/05: Evening at Paranal with AT1 and AT2 PR Photo 07d/05: AT1 and AT2 under the Southern Sky PR Photo 07e/05: First Fringes with AT1 and AT2 PR Video Clip 01/05: Two ATs at Paranal (Extract from ESO Newsreel 15) A Most Advanced Device ESO PR Video 01/05 ESO PR Video 01/05 Two Auxiliary Telescopes at Paranal [QuickTime: 160 x 120 pix - 37Mb - 4:30 min] [QuickTime: 320 x 240 pix - 64Mb - 4:30 min] ESO PR Photo 07a/05 ESO PR Photo 07a/05 [Preview - JPEG: 493 x 400 pix - 44k] [Normal - JPEG: 985 x 800 pix - 727k] [HiRes - JPEG: 5000 x 4060 pix - 13.8M] Captions: ESO PR Video Clip 01/05 is an extract from ESO Video Newsreel 15, released on March 14, 2005. It provides an introduction to the VLT Interferometer (VLTI) and the two Auxiliary Telescopes (ATs) now installed at Paranal. ESO PR Photo 07a/05 shows the impressive ensemble at the summit of Paranal. From left to right, the enclosures of VLT Antu, Kueyen and Melipal, AT1, the VLT Survey Telescope (VST) in the background, AT2 and VLT Yepun. Located at the summit of the 2,600-m high Cerro Paranal in the Atacama Desert (Chile), ESO's Very Large Telescope (VLT) is at the forefront of astronomical technology and is one of the premier facilities in the world for optical and near-infrared observations. The VLT is composed of four 8.2-m Unit Telescopes (Antu, Kueyen, Melipal and Yepun). They have been progressively put into service together with a vast suite of the most advanced astronomical instruments and are operated every night of the year. Contrary to other large astronomical telescopes, the VLT was designed from the beginning with the use of interferometry as a major goal. The VLT Interferometer (VLTI) combines starlight captured by two 8.2-m VLT Unit Telescopes, dramatically increasing the spatial resolution and showing fine details of a large variety of celestial objects.
The VLTI is arguably the world's most advanced optical device of this type. It has already demonstrated its powerful capabilities by addressing several key scientific issues, such as determining the size and the shape of a variety of stars (ESO PR 22/02, PR 14/03 and PR 31/03), measuring distances to stars (ESO PR 25/04), probing the innermost regions of the proto-planetary discs around young stars (ESO PR 27/04) or making the first detection by infrared interferometry of an extragalactic object (ESO PR 17/03). "Little Brothers" ESO PR Photo 07b/05 ESO PR Photo 07b/05 [Preview - JPEG: 597 x 400 pix - 47k] [Normal - JPEG: 1193 x 800 pix - 330k] [HiRes - JPEG: 5000 x 3354 pix - 10.0M] ESO PR Photo 07c/05 ESO PR Photo 07c/05 [Preview - JPEG: 537 x 400 pix - 31k] [Normal - JPEG: 1074 x 800 pix - 555k] [HiRes - JPEG: 3000 x 2235 pix - 6.0M] ESO PR Photo 07d/05 ESO PR Photo 07d/05 [Preview - JPEG: 400 x 550 pix - 60k] [Normal - JPEG: 800 x 1099 pix - 946k] [HiRes - JPEG: 2414 x 3316 pix - 11.0M] Captions: ESO PR Photo 07b/05 shows VLTI Auxiliary Telescopes 1 and 2 (AT1 and AT2) in the early evening light, with the spherical domes opened and ready for observations. In ESO PR Photo 07c/05, the same scene is repeated later in the evening, with three of the large telescope enclosures in the background. This photo and ESO PR Photo 07d/05, a time-exposure showing AT1 and AT2 under the beautiful night sky with the band of the southern Milky Way, were obtained by ESO staff member Frédéric Gomté. However, most of the time the large telescopes are used for other research purposes. They are therefore only available for interferometric observations during a limited number of nights every year. Thus, in order to exploit the VLTI each night and to achieve the full potential of this unique setup, some other, smaller, dedicated telescopes were included in the overall VLT concept.
These telescopes, known as the VLTI Auxiliary Telescopes (ATs), are mounted on tracks and can be placed at precisely defined "parking" observing positions on the observatory platform. From these positions, their light beams are fed into the same common focal point via a complex system of reflecting mirrors mounted in an underground system of tunnels. The Auxiliary Telescopes are real technological jewels. They are placed in ultra-compact enclosures, complete with all necessary electronics, an air conditioning system and cooling liquid for thermal control, compressed air for enclosure seals, a hydraulic plant for opening the dome shells, etc. Each AT is also fitted with a transporter that lifts the telescope and relocates it from one station to another. It moves around with its own housing on the top of Paranal, almost like a snail. Moreover, these moving ultra-high precision telescopes, each weighing 33 tonnes, fulfill very stringent mechanical stability requirements: "The telescopes are unique in the world", says Bertrand Koehler, the VLTI AT Project Manager. "After being relocated to a new position, the telescope is repositioned to a precision better than one tenth of a millimetre - that is, the size of a human hair! The image of the star is stabilized to better than thirty milli-arcsec - this is how we would see an object of the same size as one of the VLT enclosures on the Moon. Finally, the path followed by the light inside the telescope after bouncing on ten mirrors is stable to better than a few nanometres, which is the size of about one hundred atoms." A World Premiere ESO PR Photo 07e/05 ESO PR Photo 07e/05 "First Fringes" with two ATs [Preview - JPEG: 400 x 559 pix - 61k] [Normal - JPEG: 800 x 1134 pix - 357k] Caption: ESO PR Photo 07e/05 The "First Fringes" obtained with the first two VLTI Auxiliary Telescopes, as seen on the computer screen during the observation. 
The fringe pattern arises when the light beams from the two 1.8-m telescopes are brought together inside the VINCI instrument. The pattern itself contains information about the angular extension of the observed object, here the 6th-magnitude star HD62082. The fringes are acquired by moving a mirror back and forth around the position of equal path length for the two telescopes. One such scan can be seen in the third row window. This pattern results from the raw interferometric signals (the last two rows) after calibration and filtering using the photometric signals (the 4th and 5th row). The first two rows show the spectrum of the fringe pattern signal. More details about the interpretation of this pattern are given in Appendix A of PR 06/01. The possibility to move the ATs around and thus to perform observations with a large number of different telescope configurations ensures a great degree of flexibility, unique for an optical interferometric installation of this size and crucial for its exceptional performance. The ATs may be placed at 30 different positions and thus be combined in a very large number of ways. If the 8.2-m VLT Unit Telescopes are also taken into account, no less than 254 independent pairings of two telescopes ("baselines"), different in length and/or orientation, are available. Moreover, while the largest possible distance between two 8.2-m telescopes (ANTU and YEPUN) is about 130 metres, the maximal distance between two ATs may reach 200 metres. As the achievable image sharpness increases with telescope separation, interferometric observations with the ATs positioned at the extreme positions will therefore yield sharper images than is possible by combining light from the large telescopes alone. All of this will enable the VLTI to obtain exceedingly detailed (sharp) and very complete images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. Auxiliary Telescope no.
1 (AT1) was installed on the observatory's platform in January 2004. Now, one year later, the second of the four to be delivered has been integrated into the VLTI. The installation period lasted two months and ended around midnight during the night of February 2-3, 2005. With extensive experience from the installation of AT1, the team of engineers and astronomers were able to combine the light from the two Auxiliary Telescopes in a very short time. In fact, following the necessary preparations, it took them only five minutes to adjust this extremely complex optical system and successfully capture the "First Fringes" with the VINCI test instrument! The star which was observed is named HD62082 and is just at the limit of what can be observed with the unaided eye (its visual magnitude is 6.2). The fringes were as clear as ever, and the VLTI control system kept them stable for more than one hour. Four nights later this exercise was repeated successfully with the mid-infrared science instrument MIDI. Fringes on the star Alphard (Alpha Hydrae) were acquired on February 7 at 4:05 local time. For Roberto Gilmozzi, Director of ESO's La Silla Paranal Observatory, "this is a very important new milestone. The introduction of the Auxiliary Telescopes in the development of the VLT Interferometer will bring interferometry out of the specialist experiment and into the domain of common user instrumentation for every astronomer in Europe. Without doubt, it will enormously increase the potentiality of the VLTI." With two more telescopes to be delivered within a year to the Paranal Observatory, ESO cements its position as world-leader in ground-based optical astronomy, providing Europe's scientists with the tools they need to stay at the forefront in this exciting science. The VLT Interferometer will, for example, allow astronomers to study details on the surface of stars or to probe proto-planetary discs and other objects for which ultra-high precision imaging is required.
It is premature to speculate on what the Very Large Telescope Interferometer will soon discover, but it is easy to imagine that there may be quite some surprises in store for all of us.

  7. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

In order to achieve a higher image compression ratio and improve visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by combining the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations were carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at approximately the same compression ratio were increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
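The pipeline the abstract describes (block DCT followed by quantization with a frequency-dependent matrix) can be sketched as below; the CSF-style step sizes here are invented for illustration and are not the matrices from the paper:

```python
import math

N = 8  # sub-block size used by JPEG-style codecs

def dct2(block):
    """Orthonormal 2-D DCT-II of an N x N sub-block."""
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

def hvs_quant_matrix(base=16.0, slope=4.0):
    """Hypothetical CSF-style matrix: coarser steps at higher spatial
    frequencies, where the eye is less sensitive to contrast."""
    return [[base + slope * (u + v) for v in range(N)] for u in range(N)]

def quantize(coeffs, q):
    """Divide each DCT coefficient by its step size and round."""
    return [[round(coeffs[u][v] / q[u][v]) for v in range(N)]
            for u in range(N)]

# A flat (constant) luminance block quantizes to a single DC value.
flat = [[100.0] * N for _ in range(N)]
q = hvs_quant_matrix()
print(quantize(dct2(flat), q)[0][0])  # → 50
```

The quantized coefficients would then feed a Huffman entropy coder, as in the scheme described above.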

  8. PlanetServer: Innovative approaches for the online analysis of hyperspectral satellite data from Mars

    NASA Astrophysics Data System (ADS)

    Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.

    2014-06-01

PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project, which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, consisting mainly of the OpenLayers JavaScript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step, and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies will be further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.
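A WCPS request of the kind described here selects a spatial and spectral subset of a coverage server-side. The sketch below only assembles the query string; the coverage and band names are illustrative placeholders, not actual CRISM identifiers:

```python
def wcps_spectral_subset(coverage, band, x_min, x_max, y_min, y_max):
    """Build an OGC WCPS query that extracts one spectral band of a
    hyperspectral coverage over a spatial subset, encoded as CSV."""
    return (
        f"for c in ({coverage}) "
        f"return encode(c.{band}"
        f"[x({x_min}:{x_max}), y({y_min}:{y_max})], \"csv\")"
    )

# Hypothetical coverage/band names, for illustration only
query = wcps_spectral_subset("FRT_demo", "band_100", 0, 63, 0, 63)
print(query)
```

In a deployment such a string would be sent to the WCPS endpoint (backed here by rasdaman), which evaluates the expression over the stored array and returns only the requested subset.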

  9. Vcs.js - Visualization Control System for the Web

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Lipsa, D.; Doutriaux, C.; Beezley, J. D.; Williams, D. N.; Fries, S.; Harris, M. B.

    2016-12-01

VCS is a general purpose visualization library, optimized for climate data, which is part of the UV-CDAT system. It provides a Python API for drawing 2D plots such as line plots, scatter plots, Taylor diagrams, data colored by scalar values, vector glyphs, isocontours and map projections. VCS is based on the VTK library. Vcs.js is the corresponding JavaScript API, designed to be as close as possible to the original VCS Python API and to provide similar functionality for the Web. Vcs.js includes additional functionality when compared with VCS. This additional API is used to introspect data files available on the server and variables available in a data file. Vcs.js can display plots in the browser window. It always works with a server that reads a data file, extracts variables from the file and subsets the data. From this point, two alternate paths are possible. First, the system can render the data on the server using VCS, producing an image which is sent to the browser to be displayed. This path works for all plot types and produces a reference image identical to the images produced by VCS. This path uses the VTK-Web library. As an optimization, usable in certain conditions, a second path is possible: data is packed and sent to the browser, which uses a JavaScript plotting library, such as plotly, to display the data. Plots that work well in the browser are line plots and scatter plots for any data, and many other plot types for small data and supported grid types. As web technology matures, more plots could be supported for rendering in the browser. Rendering can be done either on the client or on the server, and we expect that the best place to render will change depending on the available web technology, data transfer costs, server management costs and value provided to users. We intend to provide a flexible solution that allows for both client and server side rendering and a meaningful way to choose between the two.
We provide a web-based user interface called vCdat which uses Vcs.js as its visualization library. Our paper will discuss the principles guiding our design choices for Vcs.js, present our design in detail and show a sample usage of the library.
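The choice between the two rendering paths described above could be expressed as a simple policy; the plot-type sets and size threshold below are hypothetical, not values from Vcs.js:

```python
# Plot types assumed to render well in the browser regardless of size
CLIENT_PLOTS = {"line", "scatter"}
# Plot types assumed to work in the browser only for small data
SMALL_DATA_PLOTS = {"isofill", "vector", "taylor"}

def choose_render_path(plot_type, n_points, small_limit=50_000):
    """Return 'client' to plot in the browser (e.g. via plotly), or
    'server' to render with VCS/VTK-Web and ship a reference image."""
    if plot_type in CLIENT_PLOTS:
        return "client"
    if plot_type in SMALL_DATA_PLOTS and n_points <= small_limit:
        return "client"
    return "server"

print(choose_render_path("line", 1_000_000))     # → client
print(choose_render_path("isofill", 2_000_000))  # → server
```

A real implementation would fold in the transfer and server costs mentioned above rather than a fixed point count.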

  10. [The future of telepathology. An Internet "distributed system" with "open standards"].

    PubMed

    Brauchli, K; Helfrich, M; Christen, H; Jundt, G; Haroske, G; Mihatsch, M; Oberli, H; Oberholzer, M

    2002-05-01

With the availability of the Internet, interest in the possibilities of telepathology has increased considerably. Foremost is the need of the non-expert to obtain the opinions of experts on morphological findings by means of a fast and simple procedure. The new telepathology system iPath meets these needs. The system is based on small modules that, when possible, work independently. This concept allows a simple adaptation of the system to the individual environment of the user (e.g. for different cameras, frame-grabbers, microscope steering tables, etc.) and to individual needs. iPath has been in use for 6 months by various working groups. In telepathology a distinction is made between "passive" and "active" consultations, but in both forms a non-expert obtains the opinion of an expert. In an active consultation both are in direct contact with each other (orally or via a chat function); this is not the case with a passive consultation. An active consultation can include the interactive discussion between the expert and the non-expert of images in an image database, or the direct interpretation by the expert of images from a microscope. Four software modules are available for free and immediate use: (1) the module "Microscope control", (2) the module "Connector" (insertion of images directly from the microscope without a motorized microscope), (3) the module "Client-application" via the web browser, and (4) the module "Server" with a database. The server is placed on the Internet and not behind a firewall. The server permanently receives information from the periphery and returns the information to the periphery on request. The only thing which the expert, the non-expert and the microscope have to know is how contact can be made with the server.

  11. UAV field demonstration of social media enabled tactical data link

    NASA Astrophysics Data System (ADS)

    Olson, Christopher C.; Xu, Da; Martin, Sean R.; Castelli, Jonathan C.; Newman, Andrew J.

    2015-05-01

    This paper addresses the problem of enabling Command and Control (C2) and data exfiltration functions for missions using small, unmanned, airborne surveillance and reconnaissance platforms. The authors demonstrated the feasibility of using existing commercial wireless networks as the data transmission infrastructure to support Unmanned Aerial Vehicle (UAV) autonomy functions such as transmission of commands, imagery, metadata, and multi-vehicle coordination messages. The authors developed and integrated a C2 Android application for ground users with a common smart phone, a C2 and data exfiltration Android application deployed on-board the UAVs, and a web server with database to disseminate the collected data to distributed users using standard web browsers. The authors performed a mission-relevant field test and demonstration in which operators commanded a UAV from an Android device to search and loiter; and remote users viewed imagery, video, and metadata via web server to identify and track a vehicle on the ground. Social media served as the tactical data link for all command messages, images, videos, and metadata during the field demonstration. Imagery, video, and metadata were transmitted from the UAV to the web server via multiple Twitter, Flickr, Facebook, YouTube, and similar media accounts. The web server reassembled images and video with corresponding metadata for distributed users. The UAV autopilot communicated with the on-board Android device via on-board Bluetooth network.

  12. The Atacama Large Millimeter Array (ALMA)

    NASA Astrophysics Data System (ADS)

    1999-06-01

The Atacama Large Millimeter Array (ALMA) is the new name [2] for a giant millimeter-wavelength telescope project. As described in the accompanying joint press release by ESO and the U.S. National Science Foundation, the present design and development phase is now a Europe-U.S. collaboration, and may soon include Japan. ALMA may become the largest ground-based astronomy project of the next decade after VLT/VLTI, and one of the major new facilities for world astronomy. ALMA will make it possible to study the origins of galaxies, stars and planets. As presently envisaged, ALMA will be composed of up to 64 12-meter diameter antennas distributed over an area 10 km across. ESO PR Photo 24a/99 shows an artist's concept of a portion of the array in a compact configuration. ESO PR Video Clip 03/99 illustrates how all the antennas will move in unison to point to a single astronomical object and follow it as it traverses the sky. In this way the combined telescope will produce astronomical images of great sharpness and sensitivity [3]. An exceptional site For such observations to be possible the atmosphere above the telescope must be transparent at millimeter and submillimeter wavelengths. This requires a site that is high and dry, and a high plateau in the Atacama desert of Chile, probably the world's driest, is ideal - the next best thing to outer space for these observations. ESO PR Photo 24b/99 shows the location of the chosen site at Chajnantor, at 5000 meters altitude and 60 kilometers east of the village of San Pedro de Atacama, as seen from the Space Shuttle during a servicing mission of the Hubble Space Telescope. ESO PR Photo 24c/99 and ESO PR Photo 24d/99 show a satellite image of the immediate vicinity and the site marked on a map of northern Chile. ALMA will be the highest continuously operated observatory in the world. The stark nature of this extreme site is well illustrated by the panoramic view in ESO PR Photo 24e/99.
High sensitivity and sharp images ALMA will be extremely sensitive to radiation at millimeter and submillimeter wavelengths. The large number of antennas gives a total collecting area of over 7000 square meters, larger than a football field. At the same time, the shape of the surface of each antenna must be extremely precise under all conditions; the overall accuracy over the entire 12-m diameter must be better than 0.025 millimeters (25 µm), or one-third of the diameter of a human hair. The combination of large collecting area and high precision results in extremely high sensitivity to faint cosmic signals. The telescope must also be able to resolve the fine details of the objects it detects. In order to do this at millimeter wavelengths the effective diameter of the overall telescope must be very large - about 10 km. As it is impossible to build a single antenna with this diameter, an array of antennas is used instead, with the outermost antennas being 10 km apart. By combining the signals from all antennas together in a large central computer, it is possible to synthesize the effect of a single dish 10 km across. The resulting angular resolution is about 10 milli-arcseconds, less than one-thousandth the angular size of Saturn. Exciting research perspectives The scientific case for this revolutionary telescope is overwhelming. ALMA will make it possible to witness the formation of the earliest and most distant galaxies. It will also look deep into the dust-obscured regions where stars are born, to examine the details of star and planet formation. But ALMA will go far beyond these main science drivers, and will have a major impact on virtually all areas of astronomy. It will be a millimeter-wave counterpart to the most powerful optical/infrared telescopes such as ESO's Very Large Telescope (VLT) and the Hubble Space Telescope, with the additional advantage of being unhindered by cosmic dust opacity.
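The quoted 10 milli-arcsecond figure follows from the diffraction limit θ ≈ λ/D. A quick check, assuming a 0.5 mm observing wavelength (an assumption for illustration; the actual value depends on the observing band) over the 10 km maximum baseline:

```python
import math

def resolution_mas(wavelength_m, baseline_m):
    """Diffraction-limited angular resolution theta ~ lambda / D,
    converted from radians to milli-arcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180.0 / math.pi) * 3600.0 * 1000.0

# 0.5 mm wavelength over a 10 km synthesized aperture
print(round(resolution_mas(0.5e-3, 10e3), 1))  # → 10.3
```

Longer wavelengths or shorter baselines scale the resolution up proportionally, which is why the compact configurations give coarser images.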
The first galaxies in the Universe are expected to become rapidly enshrouded in the dust produced by the first stars. The dust can dim the galaxies at optical wavelengths, but the same dust radiates brightly at longer wavelengths. In addition, the expansion of the Universe causes the radiation from distant galaxies to be shifted to longer wavelengths. For both reasons, the earliest galaxies at the epoch of first light can be found with ALMA, and the subsequent evolution of galaxies can be mapped over cosmic time. ALMA will be of great importance for our understanding of the origins of stars and planetary systems. Stellar nurseries are completely obscured at optical wavelengths by dense "cocoons" of dust and gas, but ALMA can probe deep into these regions and study the fundamental processes by which stars are assembled. Moreover, it can observe the major reservoirs of biogenic elements (carbon, oxygen, nitrogen) and follow their incorporation into new planetary systems. A particularly exciting prospect for ALMA is to use its exceptionally sharp images to obtain evidence for planet formation by the presence of gaps in dusty disks around young stars, cleared by large bodies coalescing around the stars. Equally fundamental are observations of the dying gasps of stars at the other end of the stellar lifecycle, when they are often surrounded by shells of molecules and dust enriched in heavy elements produced by the nuclear fires now slowly dying. ALMA will offer exciting new views of our solar system. Studies of the molecular content of planetary atmospheres with ALMA's high resolving power will provide detailed weather maps of Mars, Jupiter, and the other planets and even their satellites. Studies of comets with ALMA will be particularly interesting. The molecular ices of these visitors from the outer reaches of the solar system have a composition that is preserved from ages when the solar system was forming. 
They evaporate when the comet comes close to the sun, and studies of the resulting gases with ALMA will allow accurate analysis of the chemistry of the presolar nebula. The road ahead The three-year design and development phase of the project is now underway as a collaboration between Europe and the U.S., and Japan may also join in this effort. Assuming the construction phase begins about two years from now, limited operations of the array may begin in 2005 and the full array may become operational by 2009. Notes [1] Press Releases about this event have also been issued by some of the other organisations participating in this project: * CNRS (in French) * MPG (in German) * NOVA (in Dutch) * NRAO * NSF (ASCII and HTML versions) * PPARC [2] "ALMA" means "soul" in Spanish. [3] Additional information about ALMA is available on the web: * Articles in the ESO Messenger - "The Large Southern Array" (March 1998), "European Site Testing at Chajnantor" (December 1998) and "The ALMA Project" (June 1999), cf. http://www.eso.org/gen-fac/pubs/messenger/ * ALMA website at ESO at http://www.eso.org/projects/alma/ * ALMA website at the U.S. National Radio Astronomy Observatory (NRAO) at http://www.mma.nrao.edu/ * ALMA website in The Netherlands about the detectors at http://www.sron.rug.nl/alma/ ALMA/Chajnantor Video Clip and Photos ESO PR Video Clip 03/99 [MPEG-version] ESO PR Video Clip 03/99 (2450 frames/1:38 min) [MPEG Video; 160x120 pix; 2.1Mb] [MPEG Video; 320x240 pix; 10.0Mb] [RealMedia; streaming; 700k] [RealMedia; streaming; 2.3M] About ESO Video Clip 03/99 : This video clip about the ALMA project contains two sequences. The first shows a panoramic scan of the Chajnantor plain from approx. north-east to north-west. The Chajnantor mountain passes through the field-of-view and the perfect cone of the Licancabur volcano (5900 m) on the Bolivian border is seen at the end (compare also with ESO PR Photo 24e/99 below).
The second is a 52-sec animation with a change of viewing perspective of the array and during which the antennas move in unison. For convenience, the clip is available in four versions: two MPEG files of different sizes and two streamer-versions of different quality that require RealPlayer software. There is no audio. Note that ESO Video News Reel No. 5 with more related scenes and in professional format with complete shot list is also available. ESO PR Photo 24b/99 ESO PR Photo 24b/99 [Preview - JPEG: 400 x 446 pix - 184k] [Normal - JPEG: 800 x 892 pix - 588k] [High-Res - JPEG: 3000 x 3345 pix - 5.4M] Caption to ESO PR Photo 24b/99 : View of Northern Chile, as seen from the NASA Space Shuttle during a servicing mission to the Hubble Space Telescope (partly visible to the left). The Atacama Desert, site of the ESO VLT at Paranal Observatory and the proposed location for ALMA at Chajnantor, is seen from North (foreground) to South. The two sites are only a few hundred km distant from each other. Few clouds are seen in this extremely dry area, due to the influence of the cold Humboldt Stream along the Chilean Pacific coast (right) and the high Andes mountains (left) that act as a barrier. Photo courtesy ESA astronaut Claude Nicollier. ESO PR Photo 24c/99 ESO PR Photo 24c/99 [Preview - JPEG: 400 x 318 pix - 212k] [Normal - JPEG: 800 x 635 pix - 700k] [High-Res - JPEG: 3000 x 2382 pix - 5.9M] Caption to ESO PR Photo 24c/99 : This satellite image of the Chajnantor area was produced in 1998 at Cornell University (USA), by Jennifer Yu, Jeremy Darling and Riccardo Giovanelli, using the Thematic Mapper data base maintained at the Geology Department laboratory directed by Bryan Isacks. It is a composite of three exposures in spectral bands at 1.6 µm (rendered as red), 1.0 µm (green) and 0.5 µm (blue). The horizontal resolution of the false-colour image is about 30 meters. North is at the top of the photo. 
ESO PR Photo 24d/99 ESO PR Photo 24d/99 [Preview - JPEG: 400 x 381 pix - 108k] [Normal - JPEG: 800 x 762 pix - 240k] [High-Res - JPEG: 2300 x 2191 pix - 984k] Caption to ESO PR Photo 24d/99 : Geographical map with the sites of the VLT and ALMA indicated. ESO PR Photo 24e/99 ESO PR Photo 24e/99 [Preview - JPEG: 400 x 238 pix - 93k] [Normal - JPEG: 800 x 475 pix - 279k] [High-Res - JPEG: 2862 x 1701 pix - 4.2M] Caption to ESO PR Photo 24e/99 : Panoramic view of the proposed site for ALMA at Chajnantor. This high-altitude plain (elevation 5000 m) in the Chilean Andes mountains is an ideal site for ALMA. In this view towards the north, the Chajnantor mountain (5600 m) is in the foreground, left of the centre. The perfect cone of the Licancabur volcano (5900 m) on the Bolivian border is in the background further to the left. This image is a wide-angle composite (140° x 70°) of three photos (Hasselblad 6x6 with SWC 1:4.5/38 mm Biogon), obtained in December 1998. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.

  13. Segmentation-driven compound document coding based on H.264/AVC-INTRA.

    PubMed

    Zaghetto, Alexandre; de Queiroz, Ricardo L

    2007-07-01

In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require the use of multiple compressors, we apply a single compressor to both text and pictures. For that, distortion is weighted differently between text and picture regions. Our approach is to use a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of a segmentation-driven quantizer adaptation method applied to compress documents. Our reconstructed images have better text sharpness compared with straight unadapted coding, at negligible visual losses in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
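    The macroblock-level adaptation described above can be sketched in a few lines of Python. The QP offsets, the clamping to the H.264 QP range of 0-51, and the function name are our own illustration, not the paper's settings:

    ```python
    # Sketch: assign an H.264-style quantization parameter (QP) per macroblock
    # from a text/picture segmentation mask. Text macroblocks get a lower QP
    # (finer quantization, sharper edges); pictorial ones a higher QP.
    # The offsets below are illustrative, not the values used in the paper.

    def qp_map(seg_mask, base_qp=32, text_delta=-8, picture_delta=+4):
        """seg_mask: 2-D list of 'text'/'picture' labels, one per macroblock."""
        qp_min, qp_max = 0, 51  # valid H.264 QP range
        out = []
        for row in seg_mask:
            out_row = []
            for label in row:
                qp = base_qp + (text_delta if label == 'text' else picture_delta)
                out_row.append(max(qp_min, min(qp_max, qp)))
            out.append(out_row)
        return out
    ```

    For example, qp_map([['text', 'picture']]) yields [[24, 36]]: the text macroblock is quantized more finely than the pictorial one, which is where the bit diversion comes from.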

  14. IfA Catalogs of Solar Data Products

    NASA Astrophysics Data System (ADS)

    Habbal, Shadia R.; Scholl, I.; Morgan, H.

    2009-05-01

This paper presents a new set of online catalogs of solar data products. The IfA Catalogs of Solar Data Products were developed to enhance the scientific output of coronal images acquired from ground and space, starting with the SoHO era. Image processing tools have played a significant role in the production of these catalogs [Morgan et al. 2006, 2008, Scholl and Habbal 2008]. Two catalogs are currently available at http://alshamess.ifa.hawaii.edu/ : 1) Catalog of daily coronal images: One coronal image per day from EIT, MLSO, and LASCO/C2 and C3 has been processed using the Normalizing Radial Graded Filter (NRGF) image processing tool. These images are available individually or as composite images. 2) Catalog of LASCO data: The whole LASCO dataset has been re-processed using the same method. The user can search files by date and instrument, and images can be retrieved as JPEG or FITS files. An option to make on-line GIF movies from selected images is also available. In addition, the LASCO data set can be searched from existing CME catalogs (CDAW and Cactus). By browsing one of the two CME catalogs, the user can refine the query and access LASCO data covering the time frame of a CME. The catalogs will be continually updated as more data become publicly available.

  15. Informatics in radiology (infoRAD): free DICOM image viewing and processing software for the Macintosh computer: what's available and what it can do for you.

    PubMed

    Escott, Edward J; Rubinstein, David

    2004-01-01

    It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
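    As a sketch of the conversion step such programs perform, the linear window-center/width mapping that DICOM viewers commonly apply before exporting 16-bit pixel data to an 8-bit format like JPEG can be written as follows. This is a simplified stand-in: real software reads the window from the DICOM Window Center/Window Width attributes and also handles rescale slope/intercept:

    ```python
    def window_to_8bit(pixels, center, width):
        """Map raw DICOM pixel values to 0-255 using a window center/width,
        as done before exporting to an 8-bit format such as JPEG.
        pixels: flat list of ints; returns a list of ints in [0, 255]."""
        lo = center - width / 2.0
        out = []
        for p in pixels:
            v = (p - lo) / width * 255.0  # linear ramp across the window
            out.append(int(round(max(0.0, min(255.0, v)))))  # clip outside it
        return out
    ```

    With a soft-tissue-style window (center 40, width 400), values at or below -160 map to 0, the center maps to mid-gray, and values at or above 240 saturate at 255.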

  16. Exploring the feasibility of traditional image querying tasks for industrial radiographs

    NASA Astrophysics Data System (ADS)

    Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.

    2015-08-01

Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in compressed JPEG form, with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by the properties of the X-ray imaging modality, the large size of the given database, and the quality of the images it contains. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.
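    One of the classic pre-processing steps for low-contrast images such as radiographs is global histogram equalization. A minimal sketch, as our own illustration rather than one of the specific techniques evaluated in the paper:

    ```python
    def equalize(pixels, levels=256):
        """Global histogram equalization for 8-bit grayscale pixels (flat list).
        Spreads the cumulative distribution over the full intensity range,
        a common clean-up step for low-contrast radiographs."""
        n = len(pixels)
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        cdf, total = [], 0
        for h in hist:
            total += h
            cdf.append(total)
        # Map each gray level through the normalized CDF.
        cdf_min = next(c for c in cdf if c > 0)
        lut = [round((c - cdf_min) / max(1, n - cdf_min) * (levels - 1))
               for c in cdf]
        return [lut[p] for p in pixels]
    ```

    An image whose values cluster in a narrow band (say, 100-101) is stretched to span the full 0-255 range, which is exactly the effect wanted before edge- or feature-based detection.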

  17. Low bit-rate image compression via adaptive down-sampling and constrained least squares upconversion.

    PubMed

    Wu, Xiaolin; Zhang, Xiangjun; Wang, Xiaohan

    2009-03-01

Recently, many researchers have started to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform down-sampling in image space, made adaptive by spatially varying, directional low-pass prefiltering. The resulting down-sampled prefiltered image remains a conventional square sample grid and thus can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality under a tight bit budget.
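    The pipeline shape — prefilter, decimate, upconvert — can be illustrated in 1-D. Here a fixed [1, 2, 1]/4 low-pass filter and linear interpolation stand in for the paper's adaptive directional prefilter and constrained least squares restoration:

    ```python
    def downsample(signal):
        """Prefilter with a simple [1, 2, 1]/4 low-pass, then keep every 2nd
        sample (a stand-in for the paper's adaptive directional prefilter)."""
        n = len(signal)
        smoothed = []
        for i in range(n):
            a = signal[max(0, i - 1)]       # replicate edges
            b = signal[i]
            c = signal[min(n - 1, i + 1)]
            smoothed.append((a + 2 * b + c) / 4.0)
        return smoothed[::2]

    def upconvert(low):
        """Linear-interpolation upconversion back to 2x length (a stand-in for
        the constrained least squares restoration at the decoder)."""
        out = []
        for i, v in enumerate(low):
            out.append(v)
            nxt = low[min(i + 1, len(low) - 1)]
            out.append((v + nxt) / 2.0)
        return out
    ```

    On a smooth ramp the round trip is nearly exact away from the boundaries, which is why sampling at half rate costs little when the content is low-frequency; the paper's contribution is making the prefilter and restoration adapt to edges, where this naive version would blur.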

  18. A Functional Approach to Hyperspectral Image Analysis in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Coddington, O.; Pilewskie, P.

    2017-12-01

Hyperspectral image volumes are very large; a hyperspectral image analysis (HIA) may use 100 TB of data, a huge barrier to its use. Hylatis is a new NASA project to create a toolset for HIA. Through web-notebook and cloud technology, Hylatis will provide a more interactive experience for HIA by defining and implementing concepts and operations for HIA, identified and vetted by subject matter experts, and callable within a general-purpose language, particularly Python. Hylatis leverages LaTiS, a data access framework developed at LASP. With an OPeNDAP-compliant interface plus additional server-side capabilities, the LaTiS API provides a uniform interface to virtually any data source; it has been applied to various storage systems (file systems, databases, remote servers) and in various domains, including space science, systems administration, and stock quotes. In the LaTiS architecture, data "adapters" read data into a data model, where server-side computations occur. Data "writers" write data from the data model into the desired format. The Hylatis difference is the data model. In LaTiS, data are represented as mathematical functions of independent and dependent variables. Domain semantics are not present at this level, but are instead present in higher software layers. The benefit of a domain-agnostic, mathematical representation is having the power of math, particularly functional algebra, unconstrained by domain semantics. This agnosticism supports reusable server-side functionality applicable in any domain, such as statistical, filtering, or projection operations. Algorithms to aggregate or fuse data can be simpler because domain semantics are separated from the math. Hylatis will map the functional model onto the Spark relational interface, thereby adding a functional interface to that big data engine. This presentation will discuss Hylatis goals, strategies, and current state.
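    A toy version of such a "data as function" model might look like this; the class and method names are our own illustration, not the LaTiS API:

    ```python
    # A dataset is a mapping from independent to dependent variables, and
    # server-side operations are plain functional algebra over it; no domain
    # semantics (wavelengths, radiances, ...) appear at this level.

    class Dataset:
        def __init__(self, samples):
            self.samples = list(samples)  # (independent, dependent) pairs

        def select(self, predicate):
            """Filter samples by a predicate on the independent variable."""
            return Dataset((x, y) for x, y in self.samples if predicate(x))

        def map_range(self, f):
            """Apply f to every dependent value (e.g. a unit conversion)."""
            return Dataset((x, f(y)) for x, y in self.samples)

        def __call__(self, x):
            """Evaluate the dataset as a function of its domain."""
            return dict(self.samples)[x]
    ```

    The same select/map operations work whether the samples are spectra, system logs, or stock quotes, which is the reuse argument made above.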

  19. Distributed file management for remote clinical image-viewing stations

    NASA Astrophysics Data System (ADS)

    Ligier, Yves; Ratib, Osman M.; Girard, Christian; Logean, Marianne; Trayser, Gerhard

    1996-05-01

The Geneva PACS is based on a distributed architecture, with different archive servers used to store all the image files produced by digital imaging modalities. Images can then be visualized on different display stations with the Osiris software. Image visualization requires the image file to be physically present on the local station. Thus, images must be transferred from archive servers to local display stations in an acceptable way, which means fast and user-friendly, with the notion of a file hidden from users. The transfer of image files is done according to different schemes, including prefetching and direct image selection. Prefetching allows the retrieval of previous studies of a patient in advance. Direct image selection is also provided in order to retrieve images on request. When images are transferred locally to the display station, they are stored in Papyrus files, each file containing a set of images. File names are used by the Osiris viewing software to open image sequences, but file names alone are not explicit enough to properly describe the content of a file. A specific utility has been developed to present a list of patients and, for each patient, a list of exams that can be selected and automatically displayed. The system has been successfully tested in different clinical environments. It will soon be extended hospital-wide.

  20. MovieMaker: a web server for rapid rendering of protein motions and interactions

    PubMed Central

    Maiti, Rajarshi; Van Domselaar, Gary H.; Wishart, David S.

    2005-01-01

    MovieMaker is a web server that allows short (∼10 s), downloadable movies of protein motions to be generated. It accepts PDB files or PDB accession numbers as input and automatically calculates, renders and merges the necessary image files to create colourful animations covering a wide range of protein motions and other dynamic processes. Users have the option of animating (i) simple rotation, (ii) morphing between two end-state conformers, (iii) short-scale, picosecond vibrations, (iv) ligand docking, (v) protein oligomerization, (vi) mid-scale nanosecond (ensemble) motions and (vii) protein folding/unfolding. MovieMaker does not perform molecular dynamics calculations. Instead it is an animation tool that uses a sophisticated superpositioning algorithm in conjunction with Cartesian coordinate interpolation to rapidly and automatically calculate the intermediate structures needed for many of its animations. Users have extensive control over the rendering style, structure colour, animation quality, background and other image features. MovieMaker is intended to be a general-purpose server that allows both experts and non-experts to easily generate useful, informative protein animations for educational and illustrative purposes. MovieMaker is accessible at . PMID:15980488

  1. Medical Image Processing Server applied to Quality Control of Nuclear Medicine.

    NASA Astrophysics Data System (ADS)

    Vergara, C.; Graffigna, J. P.; Marino, E.; Omati, S.; Holleywell, P.

    2016-04-01

This paper is framed within the area of medical image processing and aims to present the process of installation, configuration and implementation of a medical image processing server (MIPS) at the Fundación Escuela de Medicina Nuclear (FUESMEN), located in Mendoza, Argentina. It was developed in the Gabinete de Tecnología Médica (GA.TE.ME), Facultad de Ingeniería, Universidad Nacional de San Juan. MIPS is a software system that, using the DICOM standard, can receive medical imaging studies from different modalities or viewing stations, execute algorithms, and finally return the results to other devices. To achieve the objectives mentioned above, preliminary tests were conducted in the laboratory. Moreover, tools were remotely installed in the clinical environment. The appropriate protocols for setting them up and using them in different services were established once the suitable algorithms were defined. Finally, it is important to focus on the implementation and training provided at FUESMEN, using nuclear medicine quality control processes. Results of the implementation are presented in this work.

  2. Collaborative learning using Internet2 and remote collections of stereo dissection images.

    PubMed

    Dev, Parvati; Srivastava, Sakti; Senger, Steven

    2006-04-01

We have investigated collaborative learning of anatomy over Internet2, using an application called remote stereo viewer (RSV). This application offers a unique method of teaching anatomy, using high-resolution stereoscopic images, in a client-server architecture. Rotated sequences of stereo image pairs were produced by volumetric rendering of the Visible Female and by dissecting and photographing a cadaveric hand. A client-server application (RSV) was created to provide access to these image sets, using a highly interactive interface. The RSV system was used to provide a "virtual anatomy" session for students in the Stanford Medical School Gross Anatomy course. The RSV application allows both independent and collaborative modes of viewing. The most appealing aspects of the RSV application were the capacity for stereoscopic viewing and the potential to access the content remotely within a flexible temporal framework. The RSV technology, used over Internet2, thus serves as an effective complement to traditional methods of teaching gross anatomy. (c) 2006 Wiley-Liss, Inc.

  3. Distributing medical images with internet technologies: a DICOM web server and a DICOM java viewer.

    PubMed

    Fernàndez-Bayó, J; Barbero, O; Rubies, C; Sentís, M; Donoso, L

    2000-01-01

    With the advent of filmless radiology, it becomes important to be able to distribute radiologic images digitally throughout an entire hospital. A new approach based on World Wide Web technologies was developed to accomplish this objective. This approach involves a Web server that allows the query and retrieval of images stored in a Digital Imaging and Communications in Medicine (DICOM) archive. The images can be viewed inside a Web browser with use of a small Java program known as the DICOM Java Viewer, which is executed inside the browser. The system offers several advantages over more traditional picture archiving and communication systems (PACS): It is easy to install and maintain, is platform independent, allows images to be manipulated and displayed efficiently, and is easy to integrate with existing systems that are already making use of Web technologies. The system is user-friendly and can easily be used from outside the hospital if a security policy is in place. The simplicity and flexibility of Internet technologies makes them highly preferable to the more complex PACS workstations. The system works well, especially with magnetic resonance and computed tomographic images, and can help improve and simplify interdepartmental relationships in a filmless hospital environment.

  4. A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System

    PubMed Central

    Wu, Xiangjun; Li, Yang; Kurths, Jürgen

    2015-01-01

Chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the positions of the pixels in the whole image are shuffled. In order to generate the initial conditions and parameters of the two chaotic systems, a 280-bit-long external secret key is employed. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are carried out to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast enough for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations, such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. The corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks. PMID:25826602
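    The permutation-diffusion structure itself can be illustrated with a toy cipher: a keyed permutation of pixel positions followed by a keyed XOR stream. This sketch uses Python's seeded PRNG instead of the paper's CML and fractional-order chaotic system, and is not cryptographically strong:

    ```python
    import random

    def encrypt(pixels, key):
        """Toy permutation-diffusion cipher: a keyed permutation shuffles
        pixel positions, then a keyed stream XORs the pixel values.
        pixels: list of ints in [0, 255]. Illustrative only."""
        rng = random.Random(key)
        perm = list(range(len(pixels)))
        rng.shuffle(perm)                       # permutation stage
        shuffled = [pixels[i] for i in perm]
        stream = [rng.randrange(256) for _ in shuffled]
        return [p ^ s for p, s in zip(shuffled, stream)]  # diffusion stage

    def decrypt(cipher, key):
        """Regenerate the same permutation and stream from the key, invert."""
        rng = random.Random(key)
        perm = list(range(len(cipher)))
        rng.shuffle(perm)
        stream = [rng.randrange(256) for _ in cipher]
        diffused = [c ^ s for c, s in zip(cipher, stream)]
        out = [0] * len(cipher)
        for dst, src in enumerate(perm):
            out[src] = diffused[dst]
        return out
    ```

    Decryption replays the keyed generator to rebuild the identical permutation and keystream, so the round trip is exact; in the paper the keystreams come from the chaotic systems seeded by the 280-bit key instead of a PRNG.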

  5. JPEG XS-based frame buffer compression inside HEVC for power-aware video compression

    NASA Astrophysics Data System (ADS)

    Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël.; Pellegrin, Pascal; Macq, Benoit

    2017-09-01

With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible by simplified motion estimation mechanisms that further reduce the FB's bandwidth while inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (the JPEG XS Test Model). In this paper, the architecture of our HEVC with JPEG XS-based frame buffer compression is described, and its performance is compared to the HM encoder. Compared to previous works, our prototype provides a significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.

  6. Integration of a clinical trial database with a PACS

    NASA Astrophysics Data System (ADS)

    van Herk, M.

    2014-03-01

Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data is augmented if DICOM scans, dose cubes, etc. from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates over HTTP with a gateway server inside the hospital's firewall; 2) On this gateway, an open source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) The scripts then collect, anonymize, zip and transmit the selected data to a central trial server; 4) Here the data is stored in a DICOM archive, which allows authorized ECRF users to view and download the anonymous images associated with each event. Results: All integration scripts are open source. The PACS administrator configures the anonymization script and decides to use the gateway in passive (receiving) mode or in an active mode going out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: Using open source software and web technology, a tight integration has been achieved between PACS and ECRF.
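    The anonymization step performed by the gateway scripts might be sketched as simple attribute replacement. The tag names and replacement policy below are illustrative; a real deployment would follow the DICOM de-identification profiles:

    ```python
    # Sketch of the anonymization step a gateway script performs before
    # transmitting DICOM objects to the central trial server. Tag names and
    # the replacement policy are illustrative; real deployments follow the
    # DICOM PS3.15 de-identification profiles.

    IDENTIFYING_TAGS = {'PatientName', 'PatientID', 'PatientBirthDate',
                        'InstitutionName', 'AccessionNumber'}

    def anonymize(dataset, trial_id):
        """dataset: dict of DICOM attribute name -> value.
        Returns a copy with identifying attributes replaced by a trial
        pseudonym; clinical attributes (modality, dates of scans, pixel
        data references) pass through untouched."""
        clean = dict(dataset)
        for tag in IDENTIFYING_TAGS:
            if tag in clean:
                clean[tag] = trial_id
        return clean
    ```

    Returning a copy keeps the original object intact on the gateway, so passive and active modes can share the same function.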

  7. Image Reference Database in Teleradiology: Migrating to WWW

    NASA Astrophysics Data System (ADS)

    Pasqui, Valdo

The paper presents a multimedia Image Reference Data Base (IRDB) used in teleradiology. The application was developed at the University of Florence in the framework of the European Community TELEMED Project. TELEMED's overall goals and IRDB requirements are outlined, and the resulting architecture is described. IRDB is a multisite database containing radiological images, selected for their scientific interest, together with their related information. The architecture consists of a set of IRDB Installations which are accessed from Viewing Stations (VS) located at different medical sites. The interaction between VS and IRDB Installations follows the client-server paradigm and uses an OSI level-7 protocol named Telemed Communication Language. After reviewing the Florence prototype implementation and experimentation, IRDB migration to the World Wide Web (WWW) is discussed. A possible scenario for implementing IRDB on the basis of the WWW model is depicted, in order to exploit the capabilities of WWW servers and browsers. Finally, the advantages of this conversion are outlined.

  8. Diamond Eye: a distributed architecture for image data mining

    NASA Astrophysics Data System (ADS)

    Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem

    1999-02-01

    Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.

  9. Lossless Data Embedding—New Paradigm in Digital Watermarking

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2002-12-01

    One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.
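    The core idea — losslessly compressing part of the image to make room for the payload, so the original can be restored bit-exactly — can be sketched on the LSB plane with zlib. This is a simplified illustration of the paradigm, not the paper's exact embedding methods:

    ```python
    import zlib

    def embed(pixels, payload):
        """Lossless embedding sketch: the LSB plane is losslessly compressed,
        and the freed space carries a length header plus the payload bits.
        pixels: list of 8-bit ints. payload: bytes. Returns new pixel list."""
        lsbs = bytes(p & 1 for p in pixels)
        packed = zlib.compress(lsbs, 9)
        blob = len(packed).to_bytes(4, 'big') + packed + payload
        bits = [(byte >> i) & 1 for byte in blob for i in range(8)]
        if len(bits) > len(pixels):
            raise ValueError('LSB plane not compressible enough for payload')
        bits += [0] * (len(pixels) - len(bits))
        return [(p & ~1) | b for p, b in zip(pixels, bits)]

    def extract(pixels, payload_len):
        """Recover the payload and restore the original pixels exactly."""
        bits = [p & 1 for p in pixels]
        blob = bytes(sum(bits[i + j] << j for j in range(8))
                     for i in range(0, len(bits) - 7, 8))
        packed_len = int.from_bytes(blob[:4], 'big')
        packed = blob[4:4 + packed_len]
        payload = blob[4 + packed_len:4 + packed_len + payload_len]
        original_lsbs = zlib.decompress(packed)
        restored = [(p & ~1) | b for p, b in zip(pixels, original_lsbs)]
        return payload, restored
    ```

    Because the original LSB plane travels inside the stego image in compressed form, extraction reverses the embedding with zero residual distortion, which is the defining property of the paradigm; capacity depends on how compressible the chosen plane is.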

  10. Color constancy in dermatoscopy with smartphone

    NASA Astrophysics Data System (ADS)

    Cugmas, Blaž; Pernuš, Franjo; Likar, Boštjan

    2017-12-01

The recent spread of cheap dermatoscopes for smartphones can empower patients to acquire images of skin lesions on their own and send them to dermatologists. Since images are acquired by different smartphone cameras under unique illumination conditions, variability in colors is expected. Therefore, mobile dermatoscopic systems should be calibrated in order to ensure color constancy in skin images. In this study, we tested a DermLite DL1 Basic dermatoscope attached to a Samsung Galaxy S4 smartphone. Under controlled conditions, JPEG images of standard color patches were acquired, and a model between the unknown device-dependent RGB and a device-independent Lab color space was built. Results showed that the median and best color errors were 7.77 and 3.94, respectively. These results are in the range of human-eye detection capability (color error ≈ 4) and of video and printing industry standards (where a color error between 5 and 6 is expected). It can be concluded that a calibrated smartphone dermatoscope can provide sufficient color constancy and can serve as an interesting opportunity to bring dermatologists closer to the patients.
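    A minimal stand-in for such a calibration model is a per-channel least-squares fit of gain and offset from the colour-chart patches. The full study maps device RGB to Lab; here a single channel illustrates just the fitting step:

    ```python
    def fit_channel(measured, reference):
        """Closed-form least-squares gain/offset for one colour channel:
        reference ~ gain * measured + offset. A per-channel linear model is
        a minimal stand-in for the RGB-to-Lab calibration model fitted from
        the standard colour patches."""
        n = len(measured)
        mx = sum(measured) / n
        my = sum(reference) / n
        sxx = sum((x - mx) ** 2 for x in measured)
        sxy = sum((x - mx) * (y - my) for x, y in zip(measured, reference))
        gain = sxy / sxx
        offset = my - gain * mx
        return gain, offset
    ```

    Once fitted on the chart patches, the same gain/offset is applied to every lesion image from that phone, which is what makes colours comparable across devices.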

  11. Computational scalability of large size image dissemination

    NASA Astrophysics Data System (ADS)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. The sources of large images include high-resolution microscopes and telescopes, remote sensing and airborne imaging, and high-resolution scanners. The term 'large' is understood from a user perspective: it means either larger than the display size or larger than the memory/disk that holds the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number expected to be around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
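    Pyramid building itself reduces to repeated 2x2 averaging until a single-pixel level remains. A minimal sketch (square, power-of-two images assumed; production tooling such as the Seadragon pipeline also tiles each level):

    ```python
    def build_pyramid(image):
        """Build a dissemination pyramid: halve the image by 2x2 averaging
        until a single pixel remains. image: 2-D list of floats with
        power-of-two side lengths."""
        levels = [image]
        while len(levels[-1]) > 1:
            src = levels[-1]
            h, w = len(src) // 2, len(src[0]) // 2
            levels.append([[(src[2 * r][2 * c] + src[2 * r][2 * c + 1]
                             + src[2 * r + 1][2 * c] + src[2 * r + 1][2 * c + 1]) / 4.0
                            for c in range(w)]
                           for r in range(h)])
        return levels
    ```

    A pyramid holds only about one third more pixels than the base image, so the dominant costs benchmarked above are the filtering passes and the I/O of writing every level, not extra storage volume.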

  12. Impact of Altering Various Image Parameters on Human Epidermal Growth Factor Receptor 2 Image Analysis Data Quality.

    PubMed

    Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K

    2017-01-01

    The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output.
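    The brightness and contrast perturbations can be modelled as a linear transform with clipping, and the clipping is what makes them lossy for downstream scoring. The function below is our own sketch, not Visiopharm's processing:

    ```python
    def adjust(pixels, brightness=0, contrast=1.0):
        """Apply a brightness/contrast perturbation to 8-bit pixel values.
        Values pushed past 0 or 255 are clipped, irreversibly discarding
        the intensity differences an image-analysis algorithm scores on."""
        return [min(255, max(0, round(contrast * p + brightness)))
                for p in pixels]
    ```

    For example, raising brightness by 20 saturates a pixel at 250 to 255, merging it with every other bright pixel; once membrane staining saturates this way, stain-intensity-based scores can only move in one direction, consistent with the systematic shifts reported above.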

  13. Performance of asynchronous transfer mode (ATM) local area and wide area networks for medical imaging transmission in clinical environment.

    PubMed

    Huang, H K; Wong, A W; Zhu, X

    1997-01-01

    Asynchronous transfer mode (ATM) technology emerges as a leading candidate for medical image transmission in both local area network (LAN) and wide area network (WAN) applications. This paper describes the performance of an ATM LAN and WAN network at the University of California, San Francisco. The measurements were obtained using an intensive care unit (ICU) server connecting to four image workstations (WS) at four different locations of a hospital-integrated picture archiving and communication system (HI-PACS) in a daily regular clinical environment. Four types of performance were evaluated: magnetic disk-to-disk, disk-to-redundant array of inexpensive disks (RAID), RAID-to-memory, and memory-to-memory. Results demonstrate that the transmission rate between two workstations can reach 5-6 Mbytes/s from RAID-to-memory, and 8-10 Mbytes/s from memory-to-memory. When the server has to send images to all four workstations simultaneously, the transmission rate to each WS is about 4 Mbytes/s. Both situations are adequate for radiologic image communications for picture archiving and communication systems (PACS) and teleradiology applications.

  14. Developing an interactive teleradiology system for SARS diagnosis

    NASA Astrophysics Data System (ADS)

    Sun, Jianyong; Zhang, Jianguo; Zhuang, Jun; Chen, Xiaomeng; Yong, Yuanyuan; Tan, Yongqiang; Chen, Liu; Lian, Ping; Meng, Lili; Huang, H. K.

    2004-04-01

    Severe acute respiratory syndrome (SARS) is a respiratory illness that was reported in Asia, North America, and Europe in the spring of 2003. Most SARS cases in China occurred through infection in hospitals or among travelers. To protect physicians, experts, and nurses from SARS during diagnosis and treatment procedures, infection control mechanisms were put in place in SARS hospitals. We built a Web-based interactive teleradiology system to help radiologists and physicians, both inside and outside the infection control area, make image diagnoses. The system consists of three major components: a DICOM gateway (GW), a Web-based image repository server (Server), and a Web-based DICOM viewer (Viewer). The system was installed and integrated with CR, CT, and the hospital information system (HIS) at Shanghai Xinhua Hospital to provide image-based ePR functions for SARS consultation among the radiologists, physicians, and experts inside and outside the control area. Users on both sides of the control area can use the system to process and manipulate DICOM images interactively, and the system provides a remote control mechanism to synchronize their operations on images and their display.
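
    The remote-control mechanism described above amounts to replaying one user's viewing operations on the other side of the control area. A hedged sketch of such a synchronization message; the JSON schema, field names, and operations are invented for illustration, since the paper does not specify its wire format:

```python
import json

def make_sync_message(session_id, action, params):
    """Serialize one viewer operation (e.g. window/level, zoom, pan)
    so the peer viewer can mirror it."""
    return json.dumps({"session": session_id, "action": action, "params": params})

def apply_sync_message(viewer_state, raw):
    """Apply a received operation to the local viewer's state."""
    msg = json.loads(raw)
    viewer_state[msg["action"]] = msg["params"]
    return viewer_state

# One side adjusts the CT window; the other side replays it.
state = {}
raw = make_sync_message("sars-consult-001", "window_level",
                        {"center": 40, "width": 400})
state = apply_sync_message(state, raw)
```

    Broadcasting every such message to all viewers in a session keeps the displays inside and outside the control area in lockstep.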

  15. Architecture and prototypical implementation of a semantic querying system for big Earth observation image bases

    PubMed Central

    Tiede, Dirk; Baraldi, Andrea; Sudmanns, Martin; Belgiu, Mariana; Lang, Stefan

    2017-01-01

    ABSTRACT Spatiotemporal analytics of multi-source Earth observation (EO) big data is a pre-condition for semantic content-based image retrieval (SCBIR). As a proof of concept, an innovative EO semantic querying (EO-SQ) subsystem was designed and prototypically implemented in series with an EO image understanding (EO-IU) subsystem. The EO-IU subsystem automatically generates ESA Level 2 products (a scene classification map, up to basic land cover units) from optical satellite data. The EO-SQ subsystem comprises a graphical user interface (GUI) and an array database embedded in a client-server model. In the array database, all EO images are stored as a space-time data cube together with their Level 2 products generated by the EO-IU subsystem. The GUI allows users to (a) develop a conceptual world model based on a graphically supported query pipeline as a combination of spatial and temporal operators and/or standard algorithms and (b) create, save, and share within the client-server architecture complex semantic queries/decision rules, suitable for SCBIR and/or spatiotemporal EO image analytics, consistent with the conceptual world model. PMID:29098143
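
    The space-time-cube-plus-decision-rule idea can be illustrated with a toy example. A sketch assuming a tiny cube of Level 2 class labels indexed as (time, y, x); the class codes and the "stable water" rule are invented for illustration, not the EO-SQ query language:

```python
import numpy as np

WATER, VEGETATION, BARE = 0, 1, 2  # hypothetical Level 2 class codes

# Toy space-time cube of per-pixel class labels: 3 time steps, 2 x 3 pixels
cube = np.array([
    [[0, 1, 2],
     [0, 0, 1]],
    [[0, 1, 2],
     [1, 0, 1]],
    [[0, 2, 2],
     [1, 0, 1]],
])

def always_class(cube, cls):
    """Semantic query: pixels holding class `cls` at every time step
    (a temporal AND over the cube's time axis)."""
    return (cube == cls).all(axis=0)

stable_water = always_class(cube, WATER)
```

    Chaining such spatial/temporal operators into a pipeline, as the GUI does graphically, yields the complex semantic queries the paper describes.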

  16. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes, and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and proceeding to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore, both objective and subjective evaluations were used. A major distinction on the objective side from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior; therefore, a radial mapping/modeling cannot be used in this case.
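
    For contrast with the non-radial behavior noted above, the classical radial model that fits many DSC lenses can be sketched as follows. The single-coefficient polynomial is a common textbook form, not the CPIQ metric itself:

```python
def radial_distort(x, y, k1):
    """One-term radial (barrel/pincushion) model: a point moves along its
    radius from the optical centre by a factor 1 + k1*r^2."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x * s, y * s

xd, yd = radial_distort(1.0, 1.0, -0.05)  # barrel (k1 < 0): corners pull inward
cx, cy = radial_distort(0.0, 0.0, -0.05)  # the optical centre is unmoved
```

    A grid target whose measured displacements do not follow this radius-only dependence is exactly what rules out a single radial mapping for the cell-phone modules in the study.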

  17. New Image of Comet Halley in the Cold

    NASA Astrophysics Data System (ADS)

    2003-09-01

    VLT Observes Famous Traveller at Record Distance Summary Seventeen years after the last passage of Comet Halley, the ESO Very Large Telescope at Paranal (Chile) has captured a unique image of this famous object as it cruises through the outer solar system. It is completely inactive in this cold environment. No other comet has ever been observed this far - 4200 million km from the Sun - or this faint - nearly 1000 million times fainter than what can be perceived with the unaided eye. This observation is a byproduct of a dedicated search [1] for small Trans-Neptunian Objects, a population of icy bodies of which more than 600 have been found during the past decade. PR Photo 27a/03: VLT image (cleaned) of Comet Halley. PR Photo 27b/03: Sky field in which Comet Halley was observed. PR Photo 27c/03: Combined VLT image with star trails and Comet Halley. The Halley image ESO PR Photo 27a/03 Caption: PR Photo 27a/03 shows the faint, star-like image of Comet Halley (centre), observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory on March 6-8, 2003. 81 individual exposures from three of the four 8.2-m VLT telescopes with a total exposure time of about 9 hours were combined to show the magnitude 28.2 object. At this time, Comet Halley was about 4200 million km from the Sun (28.06 AU) and 4080 million km (27.26 AU) from the Earth. All images of stars and galaxies in the field were removed during the extensive image processing needed to produce this unique image. Due to the remaining, unavoidable "background noise", it is best to view the comet image from some distance. The field measures 60 x 40 arcsec²; North is up and East is left. Remember Comet Halley - the famous "haired star" that has been observed with great regularity - about once every 76 years - during more than two millennia?
Which was visited by an international spacecraft armada when it last passed through the inner solar system in 1986? And which put on a fine display in the sky at that time? Now, 17 years after that passage, this cosmic traveller has again been observed at the European Southern Observatory. Moving outward along its elongated orbit into the deep-freeze outer regions of the solar system, it is now almost as far away as Neptune, the most distant giant planet in our system. At 4,200 million km from the Sun, Comet Halley has now completed four-fifths of its travel towards the most distant point of this orbit. As the motion is getting ever slower, it will reach that turning point in December 2023, after which it begins its long return towards the next passage through the inner solar system in 2062. The new image of Halley was taken with the Very Large Telescope (VLT) at Paranal (Chile); a "cleaned" version is shown in PR Photo 27a/03 . It was obtained as a byproduct of an observing program aimed at studying the population of icy bodies at the rim of the solar system. The image shows the raven-black, 10-km cometary nucleus of ice and dust as an unresolved faint point of light, without any signs of activity. A cold and inactive "dirty snowball" The brightness of the comet was measured as visual magnitude V = 28.2, or nearly 1000 million times fainter than the faintest objects that can be perceived in a dark sky with the unaided eye. The pitch black nucleus of Halley reflects about 4% of the sunlight; it is a very "dirty" snowball indeed. We know from the images obtained by the ESA Giotto spacecraft in 1986 that it is avocado-shaped and on the average measures about 10 km diameter across. The VLT observation is therefore equivalent to seeing a 5-cm piece of coal at a distance of 20,500 km (about the distance between the Earth's poles) and to do so in the evening twilight. 
This is because at the large distance of Comet Halley, the infalling sunlight is 800 times fainter than here on Earth. The measured brightness of the cometary image perfectly matches that expected for the nucleus alone, taking into account the distance, the solar illumination and the reflectivity of the surface. This shows that all cometary activity has now ceased. The nucleus is now an inert ball of ice and dust, and is likely to remain so until it again returns to the solar neighbourhood, more than half a century from now. A record observation At 28.06 AU heliocentric distance (1 AU = 149,600,000 km - the mean distance between the Earth and the Sun), this is by far the most distant observation ever made of a comet [2]. It is also the faintest comet ever detected (by a factor of about 5); the previous record, magnitude 26.5, was co-held by comet Halley at 18.8 AU (with the ESO New Technology Telescope in 1994) and Comet Sanguin at 8.5 AU (with the Keck II telescope in 1997). Interestingly, when Comet Halley reaches its largest distance from the Sun in December 2023, about 35 AU, it will only be 2.5 times fainter than it is now. The comet would still have been detected within the present exposure time. This means that with the VLT, for the first time in the long history of this comet, the astronomers now possess the means to observe it at any point in its 76-year orbit! A census of faint Transneptunian Objects The image of Halley was obtained by combining a series of exposures obtained simultaneously with three of the 8.2-m telescopes (ANTU, MELIPAL and YEPUN) during 3 consecutive nights with the main goal to count the number of small icy bodies orbiting the Sun beyond Neptune, known as Transneptunian Objects (TNOs). Since the discovery of the first TNO in 1992, more than 600 have been found, most of these measuring several hundred km across. 
The VLT observations aim at a census of smaller TNOs - the incorporation of the sky field with Comet Halley allows verification of the associated, extensive data processing. Similar TNO-surveys have been performed before, but this is the first time that several very large telescopes are used simultaneously in order to observe extremely faint, hitherto inaccessible objects. The VLT observations will provide very useful information about the frequency of (smaller) TNOs of different sizes and thereby, indirectly, about the rate of collisions they have suffered since their formation. This study will also cast more light on the mystery of the apparent "emptiness" of the very distant solar system. Why are so few objects found beyond 45 AU? It is not known whether this is because there are no objects out there or if they are simply too small or too dark, or both, to have been detected so far. How to extract a very faint comet image ESO PR Photo 27b/03 and ESO PR Photo 27c/03 Caption: PR Photo 27b/03 shows the sky field in which Comet Halley was observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory on March 6-8, 2003. 81 individual exposures with a total exposure time of 32284 sec (almost 9 hours) from three of the four 8.2-m telescopes were cleaned and combined to produce this composite photo, displaying numerous faint stars and galaxies in the field. The predicted motion of Comet Halley during the three nights is indicated by short red lines. The long straight lines at the top and to the right were caused by artificial satellites in orbit around the Earth that passed through the field during the exposure. The field measures 300 x 180 arcsec².
PR Photo 27c/03 was produced by adding the same frames, but this time shifting their positions according to the motion of the comet. The faint, star-like image of Comet Halley is now visible (in circle, at centre); all other objects (stars, galaxies) in the field are "trailed". A satellite trail is visible at the very top. The field measures 60 x 40 arcsec²; North is up and East is left in both photos. The combination of the images from three 8.2-m telescopes obtained during three consecutive nights is not straightforward. The individual characteristics of the imaging instruments (FORS1 on ANTU, VIMOS on MELIPAL and FORS2 on YEPUN) must be taken into account and corrected. Moreover, the motion of the very faint moving objects has to be compensated for, even though they are too faint to be seen on individual exposures; they only reveal themselves when several (many!) frames are combined during the final steps of the process. It is for this reason that the presence of a known, faint object like Comet Halley in the field-of-view provides a powerful control of the data processing. If Halley is visible at the end, it has been done properly. The extensive data processing is now under way and the intensive search for new Transneptunian objects has started. The field with Comet Halley was observed with the giant telescopes during each of three consecutive nights, yielding 81 individual exposures with a total exposure time of almost 9 hours. The faint comet is completely invisible on the individual images. On PR Photo 27b/03, these frames have been added directly, showing very faint stars and galaxies. This photo also does not show the moving comet, but by shifting the frames before they are added, in such a way that the comet remains fixed, a faint image does emerge among the stellar trails, cf. PR Photo 27c/03. A better, but much more cumbersome method is to "subtract" the images of all stars and galaxies from the individual exposures, before they are added.
PR Photo 27a/03 has been produced in this way and shows the image of Comet Halley more clearly. In total, about 20,000 photons were detected from the comet, i.e. about one photon per 8.2-m telescope every 1.6 seconds. However, during the same time, the telescopes collected about one thousand times more photons from molecular emission in the Earth's atmosphere within the sky area covered by the comet's image. The presence of this considerable "noise" calls for very careful image processing in order to detect the faint comet signal. The identity of the comet is beyond doubt: the image is faintly visible on composite photos obtained during a single night, demonstrating that the direction and rate of motion of the detected object perfectly match those predicted for Comet Halley from its well-known orbit. Moreover, the image is located within 1 arcsec of the predicted position in the sky. Outlook After its passage in 1910, Comet Halley was again seen in 1982, when David Jewitt first observed its faint image with the 5-m Palomar telescope at a time when it was 11 AU from the Sun, a little further than planet Saturn. It was observed from La Silla two months later. As the comet approached, the ice in the nucleus began to evaporate (sublimate), and the comet soon became surrounded by a cloud of dust and gas (the "coma"). It developed the tail that is typical of comets and was extensively observed, also from several spacecraft passing close to its nucleus in early 1986. Observations have since been made of Comet Halley as it moves away from the Sun, documenting a steady decrease of activity. When it reached the distance of Saturn, the tail and coma had disappeared completely, leaving only the 5 x 5 x 15 km avocado-shaped "dirty snowball" nucleus. However, Halley was still good for a major surprise: in 1991, a gigantic explosion happened, providing it with an expanding, extensive cloud of dust for several months.
It is not known whether this event was caused by a collision with an unknown piece of rock or by internal processes (a last "sigh" on the way out). Until now, the most recent observation of Comet Halley was done in 1994 with the New Technology Telescope (NTT) at La Silla, at that time the most powerful ESO telescope. It showed the comet to be completely inactive. Nine years later, so does the present VLT observation. It is unlikely that any activity will be seen until this famous object again approaches the Sun, more than 50 years from now.
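
The brightness comparisons quoted in the release follow from Pogson's magnitude relation; a quick check in Python (the naked-eye limit of magnitude ~6 is an assumed round figure, not stated in the release):

```python
def flux_ratio(m_faint, m_bright):
    """Brightness ratio for a magnitude difference: 5 magnitudes = factor 100."""
    return 10 ** (0.4 * (m_faint - m_bright))

# V = 28.2 vs a naked-eye limit of ~6: "nearly 1000 million times fainter"
vs_naked_eye = flux_ratio(28.2, 6.0)    # ~7.6e8
# V = 28.2 vs the previous record of 26.5: "fainter by a factor of about 5"
vs_record = flux_ratio(28.2, 26.5)      # ~4.8
```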

  18. Recovering DC coefficients in block-based DCT.

    PubMed

    Uehara, Takeyuki; Safavi-Naini, Reihaneh; Ogunbona, Philip

    2006-11-01

    It is a common approach for JPEG and MPEG encryption systems to provide higher protection for DC coefficients and less protection for AC coefficients. Some authors have employed a cryptographic encryption algorithm for the DC coefficients and left the AC coefficients to techniques based on random permutation lists, which are known to be weak against known-plaintext and chosen-ciphertext attacks. In this paper we show that in block-based DCT, it is possible to recover DC coefficients from AC coefficients with reasonable image quality, and we show the insecurity of image encryption methods which rely on encrypting the DC values with a cryptographic algorithm. The method proposed in this paper combines DC recovery from AC coefficients with the fact that AC coefficients can be recovered using a chosen-ciphertext attack. We demonstrate that a method proposed by Tang to encrypt and decrypt MPEG video can be completely broken.
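
    The core observation - that the AC coefficients constrain the missing DC values through smoothness across block boundaries - can be shown in one dimension. A simplified sketch: real JPEG recovery works on 8x8 DCT blocks, but for the DCT the DC term is just a scaled block mean, so the boundary-continuity heuristic below is the paper's idea in miniature:

```python
import numpy as np

def split_dc_ac(block):
    """The DC coefficient carries (a scaled) block mean; subtracting the mean
    leaves exactly the AC content."""
    dc = block.mean()
    return dc, block - dc

def recover_dc(prev_block, ac_part):
    """Estimate a block's missing DC by assuming the image is smooth across
    the boundary: its first sample should continue the previous block's last."""
    return prev_block[-1] - ac_part[0]

signal = np.array([10., 12., 14., 16., 18., 20., 22., 24.])  # a smooth image row
b1, b2 = signal[:4], signal[4:]
dc2_true, ac2 = split_dc_ac(b2)   # true DC (mean) of the second block: 21.0
dc2_est = recover_dc(b1, ac2)     # boundary estimate: 19.0 -- close, not exact
```

    In two dimensions the same constraint is applied along both block edges and the estimates are propagated across the image, which is why encrypting only the DC values still leaves recognizable content.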

  19. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to instead transmit digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays: (1) to verify the advantage of DCTune over standard JPEG; (2) to verify the quality control feature of DCTune; and (3) to discover regularities in the optimized matrices of a set of images. Additional information is contained in the original extended abstract.
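
    The quality-control idea - scoring a quantization matrix by its perceptual error in JND units and rescaling to hit a target - can be sketched as follows. The max-pooling and the linear rescaling are simplifications for illustration; DCTune's actual metric and optimization are more elaborate:

```python
import numpy as np

def perceptual_error_jnd(coeff_errors, thresholds):
    """Express per-coefficient quantization errors in units of their
    visibility thresholds and pool with the maximum (worst-case JND)."""
    return float(np.max(np.abs(coeff_errors) / thresholds))

def rescale_for_target(qmatrix, current_jnd, target_jnd):
    """Quantization error grows roughly linearly with step size, so a uniform
    rescale moves the matrix toward the requested perceptual quality."""
    return qmatrix * (target_jnd / current_jnd)

thresholds = np.array([1.0, 2.0, 4.0])     # hypothetical visibility thresholds
errors = np.array([0.5, 3.0, 2.0])         # hypothetical quantization errors
jnd = perceptual_error_jnd(errors, thresholds)      # 1.5: visibly above threshold
q = rescale_for_target(np.array([2.0, 4.0, 8.0]), jnd, 1.0)  # aim for 1 JND
```

    Iterating this rescale-and-remeasure loop is one way to pin compressed images to a fixed perceptual quality, which is the "quality control feature" the study set out to verify.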

  20. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1993-01-01

    The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
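
    Setting each quantization step so that the resulting error sits at the threshold of visibility is straightforward once the thresholds are known. A sketch, with invented threshold values standing in for the model's display-dependent predictions:

```python
import numpy as np

def threshold_quant_steps(thresholds):
    """With uniform quantization and rounding, the maximum error is half the
    step size, so a step of 2*T keeps quantization error at threshold T."""
    return 2.0 * np.asarray(thresholds)

def quantize(coeffs, steps):
    """Round each DCT coefficient to the nearest multiple of its step."""
    return np.round(coeffs / steps) * steps

T = np.array([1.0, 2.0, 4.0])        # hypothetical visibility thresholds
Q = threshold_quant_steps(T)         # steps 2.0, 4.0, 8.0
c = np.array([10.3, 7.9, 13.0])      # sample DCT coefficients
err = np.abs(quantize(c, Q) - c)     # every error stays at or below T
```

    In the full model the thresholds T vary per DCT frequency and with luminance, veiling light, pixel size, viewing distance, and color direction, which is what lets the matrices be tailored to a display situation.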
