Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
NASA Astrophysics Data System (ADS)
Joshi, Rajan L.
2006-03-01
In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
Efficient transmission of compressed data for remote volume visualization.
Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S
2006-09-01
One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times, and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.
Workflow opportunities using JPEG 2000
NASA Astrophysics Data System (ADS)
Foshee, Scott
2002-11-01
JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling to JPEG rather than a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.
Request redirection paradigm in medical image archive implementation.
Dragan, Dinu; Ivetić, Dragan
2012-08-01
It is widely recognized that JPEG2000 addresses several issues in medical imaging: storage, communication, sharing, remote access, interoperability, and presentation scalability. Therefore, JPEG2000 support was added to the DICOM standard in Supplement 61. Two approaches to supporting JPEG2000 medical images are explicitly defined by the DICOM standard: replacing the DICOM image format with the corresponding JPEG2000 codestream, or using the Pixel Data Provider service, DICOM Supplement 106. The latter involves two-step retrieval of a medical image: a DICOM request and response from a DICOM server, followed by a JPIP request and response from a JPEG2000 server. We propose a novel strategy for transmission of scalable JPEG2000 images extracted from a single codestream over a DICOM network using the DICOM Private Data Element, without sacrificing system interoperability. It employs the request redirection paradigm: a DICOM request and response from the JPEG2000 server through the DICOM server. The paper presents a programming solution for implementing the request redirection paradigm in a DICOM-transparent manner.
NASA Astrophysics Data System (ADS)
Clunie, David A.
2000-05-01
Proprietary compression schemes have a cost and risk associated with their support, end of life, and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1) and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand, six hundred and seventy-nine (3,679) single-frame grayscale images from multiple anatomical regions, modalities, and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both out-performed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous-pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform-based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality, for which JPEG-LS did better (MG digital vendor A: JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance, regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 offers lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.
JPEG vs. JPEG 2000: an objective comparison of image encoding quality
NASA Astrophysics Data System (ADS)
Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan
2004-11-01
This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
Dynamic code block size for JPEG 2000
NASA Astrophysics Data System (ADS)
Tsai, Ping-Sing; LeCornec, Yann
2008-02-01
Since its standardization, JPEG 2000 has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, the digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high-quality real-time compression possible even in video mode, i.e., Motion JPEG 2000. In this paper, we present a study of the compression impact of using a dynamic code block size instead of the fixed code block size specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.
An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process
NASA Astrophysics Data System (ADS)
Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre
2015-02-01
This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted as CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000, and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green, and blue), which enables twice as much storage capacity when compared to the traditional black and white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel, and 0.3808 bits/pixel for the JPEG, JPEG2000, and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
NASA Astrophysics Data System (ADS)
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
Scan-Based Implementation of JPEG 2000 Extensions
NASA Technical Reports Server (NTRS)
Rountree, Janet C.; Webb, Brian N.; Flohr, Thomas J.; Marcellin, Michael W.
2001-01-01
JPEG 2000 Part 2 (Extensions) contains a number of technologies that are of potential interest in remote sensing applications. These include arbitrary wavelet transforms, techniques to limit boundary artifacts in tiles, multiple component transforms, and trellis-coded quantization (TCQ). We are investigating the addition of these features to the low-memory (scan-based) implementation of JPEG 2000 Part 1. A scan-based implementation of TCQ has been realized and tested, with a very small performance loss as compared with the full image (frame-based) version. A proposed amendment to JPEG 2000 Part 2 will effect the syntax changes required to make scan-based TCQ compatible with the standard.
Estimating JPEG2000 compression for image forensics using Benford's Law
NASA Astrophysics Data System (ADS)
Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.
2010-05-01
With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality as compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which measures the deviation of the observed probabilities from Benford's Law. Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that of DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images compressed at a rate of 0.1 is 0.0108, which is much higher than that of uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the results of first-digit probability and divergence among JPEG2000 compression rates of 0.1, 0.3, 0.5, and 0.9. The initial results show that the expected difference among them could be used for further analysis to estimate the unknown JPEG2000 compression rates.
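To make the first-digit analysis concrete, the sketch below (not the authors' code) computes the leading-digit distribution of an image's detail-subband DWT coefficients and a simple chi-square-style deviation from the Benford distribution P(d) = log10(1 + 1/d). It assumes NumPy and PyWavelets (pywt) are available; the divergence measure is illustrative and may differ from the exact factor used in the paper.

```python
# Minimal sketch: Benford first-digit statistics of DWT detail coefficients.
import numpy as np
import pywt

def first_digit_histogram(values):
    """Empirical probability of leading digits 1-9 of |values|."""
    mags = np.abs(values)
    mags = mags[mags > 0]
    digits = (mags / 10.0 ** np.floor(np.log10(mags))).astype(int)  # 1..9
    counts = np.array([(digits == d).sum() for d in range(1, 10)], float)
    return counts / counts.sum()

def benford_probabilities():
    d = np.arange(1, 10)
    return np.log10(1.0 + 1.0 / d)

def divergence(p, q):
    """Chi-square-style deviation between observed p and Benford q."""
    return float(np.sum((p - q) ** 2 / q))

# Usage with a stand-in image and the level-1 detail subbands.
image = np.random.rand(256, 256) * 255
_, (cH, cV, cD) = pywt.dwt2(image, 'bior4.4')      # biorthogonal DWT
p = first_digit_histogram(np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()]))
print("observed first-digit probabilities:", np.round(p, 4))
print("divergence from Benford:", divergence(p, benford_probabilities()))
```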
Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography
NASA Astrophysics Data System (ADS)
Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung
2006-03-01
Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communications systems (PACS). However, with enlarged medical image databases and rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, thereby diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for digital imaging and communications in medicine (DICOM). A high compression ratio is felt to be useful for medical imagery. Therefore, this study evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two efficacious image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured based on receiver operating characteristic (ROC) analysis. Namely, the ROC curves are used to compare the diagnostic performance of two or more reconstructed images. Analysis results enable a comparison of the compression ratios obtained using JPEG and JPEG2000 for 3-D US images. Results of this study indicate the bit rates that are feasible when using JPEG and JPEG2000 for 3-D breast US images.
JPEG2000 and dissemination of cultural heritage over the Internet.
Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos
2004-03-01
By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record, and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine the technologies of JPEG2000 image compression with client-server socket connections and a client browser plug-in, so as to provide an all-in-one package for remote browsing of JPEG2000 compressed image databases, suitable for the effective dissemination of cultural heritage.
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
JPEG2000 vs. full frame wavelet packet compression for smart card medical records.
Leehan, Joaquín Azpirox; Lerallut, Jean-Francois
2006-01-01
This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.
The impact of skull bone intensity on the quality of compressed CT neuro images
NASA Astrophysics Data System (ADS)
Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw
2012-02-01
The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.
Punys, Vytenis; Maknickas, Ramunas
2011-01-01
Big virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification, based on image analysis, might be faster if performed on compressed data (approximately 20 times less than the original amount), representing the coefficients of the wavelet transform. An analysis of possible edge detection without the reverse wavelet transform is presented in the paper. Two edge detection methods, suitable for the JPEG2000 bi-orthogonal wavelets, are proposed. The methods are adjusted according to calculated parameters of a sigmoid edge model. The results of the model analysis indicate the more suitable method for a given bi-orthogonal wavelet.
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
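As an illustration of the kind of slice-wise bit allocation discussed above, the sketch below implements a generic Lagrangian allocation over per-slice rate-distortion samples: for a multiplier lambda, each slice independently picks the point minimizing D + lambda*R, and lambda is bisected until the total rate meets a bit budget. This is a textbook scheme under assumed toy R-D data, not the paper's optimal or mixed-model (MM) procedure.

```python
# Minimal sketch: Lagrangian bit allocation across slices (toy R-D data).
def pick_point(rd_points, lam):
    """rd_points: list of (rate, distortion); pick the point minimizing D + lam*R."""
    return min(rd_points, key=lambda rd: rd[1] + lam * rd[0])

def allocate(slices, lam):
    """slices: list of per-slice (rate, distortion) lists."""
    choices = [pick_point(s, lam) for s in slices]
    total_rate = sum(r for r, _ in choices)
    total_dist = sum(d for _, d in choices)
    return choices, total_rate, total_dist

def allocate_for_budget(slices, budget, lam_lo=1e-6, lam_hi=1e6, iters=50):
    """Bisect (geometrically) on lam so the total rate approaches the budget."""
    for _ in range(iters):
        lam = (lam_lo * lam_hi) ** 0.5
        _, rate, _ = allocate(slices, lam)
        if rate > budget:
            lam_lo = lam          # too many bits: penalize rate more
        else:
            lam_hi = lam
    return allocate(slices, lam_hi)

# Usage with toy R-D curves for three slices (rate in bits, MSE distortion).
slices = [
    [(0, 400.0), (1000, 120.0), (2000, 40.0), (4000, 10.0)],
    [(0, 900.0), (1500, 300.0), (3000, 90.0), (6000, 20.0)],
    [(0, 200.0), (500, 80.0), (1000, 30.0), (2000, 8.0)],
]
choices, rate, dist = allocate_for_budget(slices, budget=5000)
print(choices, rate, dist)
```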
Digital cinema system using JPEG2000 movie of 8-million pixel resolution
NASA Astrophysics Data System (ADS)
Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu
2003-05-01
We have developed a prototype digital cinema system that can store, transmit, and display extra-high-quality movies of 8-million-pixel resolution, using the JPEG2000 coding algorithm. The image quality is 4 times better than HDTV in resolution, and enables us to replace conventional films with digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. The coded streams of 300~500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder can perform real-time decompression at 24/48 frames per second, using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses 3 pieces of 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens for a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker, while preserving compatibility with cinema movies of 24 frames per second.
Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters
NASA Astrophysics Data System (ADS)
Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi
A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard
NASA Astrophysics Data System (ADS)
Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid
2005-07-01
The Discrete Wavelet Transform (DWT) is increasingly adopted in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme algorithm is an alternative DWT implementation that has a lower computational complexity and reduced resource requirement. In the JPEG2000 standard, two lifting-scheme-based filter banks are introduced: the 5/3 and the 9/7. In this paper, a high-throughput, two-channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirement for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirement make this architecture a proper choice for real-time applications such as Digital Cinema.
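For reference, the sketch below shows one level of the reversible 5/3 lifting transform from JPEG 2000 Part 1 (a predict step followed by an update step, in integer arithmetic), which is the kind of lifting step such architectures realize in hardware. It is a software illustration only; the boundary handling is a simplified clamp rather than the standard's symmetric extension.

```python
# Minimal sketch: one level of the reversible 5/3 lifting transform.
import numpy as np

def lift53_forward(x):
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    ev = lambda k: even[min(max(k, 0), len(even) - 1)]     # clamp inside band
    d = np.array([odd[k] - ((ev(k) + ev(k + 1)) >> 1)      # predict (high-pass)
                  for k in range(len(odd))])
    dd = lambda k: d[min(max(k, 0), len(d) - 1)]
    s = np.array([even[k] + ((dd(k - 1) + dd(k) + 2) >> 2)  # update (low-pass)
                  for k in range(len(even))])
    return s, d

def lift53_inverse(s, d):
    dd = lambda k: d[min(max(k, 0), len(d) - 1)]
    even = np.array([s[k] - ((dd(k - 1) + dd(k) + 2) >> 2) for k in range(len(s))])
    ev = lambda k: even[min(max(k, 0), len(even) - 1)]
    odd = np.array([d[k] + ((ev(k) + ev(k + 1)) >> 1) for k in range(len(d))])
    x = np.empty(len(even) + len(odd), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# Usage: the integer lifting steps invert exactly.
x = np.array([3, 7, 1, 8, 2, 9, 4, 6, 5])
s, d = lift53_forward(x)
assert np.array_equal(lift53_inverse(s, d), x)   # perfect reconstruction
print("low-pass:", s, "high-pass:", d)
```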
Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo
NASA Astrophysics Data System (ADS)
Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj
2007-09-01
This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work is an extension to our previous comparative study published in previous SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performances at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000 with the wavelet technology tends to be the best performer with smooth spatial data; H.264/AVC High-Profile with advanced spatial prediction modes tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High-Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], neither do we consider scalability nor complexity in this study (JPEG2000 is operating in non-scalable, but optimal performance mode).
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
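The pass structure described above can be made concrete with a small bookkeeping sketch: the most-significant bit-plane receives a single cleanup pass, and every lower plane receives significance-propagation, magnitude-refinement, and cleanup passes, giving (3M - 2) passes for M bit planes. This is illustration only, not an encoder.

```python
# Minimal sketch: enumerate the tier-1 coding passes for M bit planes.
def coding_passes(num_bitplanes):
    passes = [("plane %d" % (num_bitplanes - 1), "cleanup")]       # MSB plane
    for p in range(num_bitplanes - 2, -1, -1):                     # lower planes
        for kind in ("significance propagation", "magnitude refinement", "cleanup"):
            passes.append(("plane %d" % p, kind))
    return passes

M = 5
plist = coding_passes(M)
assert len(plist) == 3 * M - 2          # 13 passes for 5 bit planes
for plane, kind in plist:
    print(plane, "-", kind)
```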
JPEG2000 still image coding quality.
Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei
2013-10-01
This work compares the image quality produced by two popular JPEG2000 programs. The two medical image compression implementations are both coded to JPEG2000, but they differ regarding the interface, convenience, speed of computation, and characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
JPEG 2000-based compression of fringe patterns for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Blinder, David; Bruylants, Tim; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter
2014-12-01
With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurements, vibration analysis, data encoding, and certification. Therefore, designing an efficient data representation technology is of particular importance. Off-axis holograms have very different signal properties with respect to regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f^2 power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjøntegaard delta-peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range and bit-rate reductions of up to 1.6 bpp for lossless compression.
Mutual information-based analysis of JPEG2000 contexts.
Liu, Zhen; Karam, Lina J
2005-04-01
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
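As a small companion to the discussion above, the sketch below computes the two quantities it relies on: the Stirling number of the second kind S(I, F), counting the ways I contexts can be merged into F groups, and the mutual information between a context variable and the coded symbol, estimated from a joint count table. It is illustrative only and does not reproduce the paper's modified generalized Lloyd search.

```python
# Minimal sketch: Stirling numbers of the second kind and context/symbol
# mutual information from a joint count table.
import numpy as np
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(i, f):
    """S(i, f): ways to partition i items into f non-empty groups."""
    if f == 0:
        return 1 if i == 0 else 0
    if i == 0:
        return 0
    return f * stirling2(i - 1, f) + stirling2(i - 1, f - 1)

def mutual_information(joint_counts):
    """joint_counts[c, s]: count of symbol s under context c; result in bits."""
    p = joint_counts / joint_counts.sum()
    pc = p.sum(axis=1, keepdims=True)       # context marginal
    ps = p.sum(axis=0, keepdims=True)       # symbol marginal
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (pc @ ps)[nz])))

print(stirling2(9, 5))                      # ways to merge 9 contexts into 5
counts = np.array([[90, 10], [40, 60], [25, 75]], dtype=float)
print(mutual_information(counts))           # I(context; symbol) in bits
```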
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
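To make steps (1) and (4) of the pipeline above more tangible, the sketch below (assuming SciPy and NumPy are available) performs the blockwise DCT and the differential (delta) coding of the DC components; the high-frequency minimization, the lookup table of probability data, and the arithmetic coder described in the paper are not reproduced here.

```python
# Minimal sketch: blockwise DCT plus delta coding of the DC components.
import numpy as np
from scipy.fft import dctn

def block_dct(image, bs=8):
    """Split the image into bs x bs blocks and apply an orthonormal 2-D DCT."""
    h, w = image.shape
    h -= h % bs
    w -= w % bs
    blocks = image[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    return dctn(blocks, axes=(-2, -1), norm='ortho')

def delta_encode_dc(coeffs):
    """Keep the first DC value, then store differences between neighbours."""
    dc = coeffs[..., 0, 0].ravel()
    return np.concatenate([dc[:1], np.diff(dc)])

image = np.random.rand(64, 64) * 255        # stand-in for a test image
coeffs = block_dct(image)
dc_deltas = delta_encode_dc(coeffs)
print(coeffs.shape, dc_deltas.shape)
```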
Compression of electromyographic signals using image compression techniques.
Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira
2008-01-01
Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.
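For reference, the sketch below computes the two figures of merit quoted above: the percentage root-mean-square difference (PRD) between an original and a reconstructed EMG signal, and the compression factor as a percentage of bytes saved. These are standard definitions; the exact normalization used in the article (for example, whether the signal mean is removed before computing PRD) is an assumption here.

```python
# Minimal sketch: PRD and compression factor for a 1-D signal.
import numpy as np

def prd(original, reconstructed):
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def compression_factor(original_bytes, compressed_bytes):
    return 100.0 * (1.0 - compressed_bytes / original_bytes)

x = np.sin(np.linspace(0, 20 * np.pi, 4000)) + 0.05 * np.random.randn(4000)
x_rec = x + 0.01 * np.random.randn(4000)     # stand-in for a decoded signal
print("PRD = %.2f%%" % prd(x, x_rec))
print("CF  = %.1f%%" % compression_factor(8000, 1200))
```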
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung
2013-10-15
Purpose: To modify the previously proposed preprocessing technique for improving the compressibility of computed tomography (CT) images so that it covers the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed aiming at only chest CT images, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352,787 images), each image was preprocessed using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covered the body region or not. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compression. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) for JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.
NASA Astrophysics Data System (ADS)
Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi
When surveillance cameras are used, there are cases where privacy protection must be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method consists of ROI coding of JPEG2000 and a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.
NASA Astrophysics Data System (ADS)
Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-02-01
Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images over the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, a wavelet-based compression engine, to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform on face images captured under different illumination conditions and encoded at different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters using illumination variations from Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
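For orientation, the sketch below builds a minimum average correlation energy (MACE) filter in the frequency domain from a few training images and correlates it with a test image, following the standard closed form h = D^(-1) X (X^+ D^(-1) X)^(-1) u. It is a bare NumPy illustration under assumed random stand-in data; the preprocessing, the PIE database handling, and the JPEG2000 decoding stage of the paper are omitted.

```python
# Minimal sketch: frequency-domain MACE correlation filter.
import numpy as np

def mace_filter(train_images):
    """train_images: array (N, H, W). Returns the filter H(u, v)."""
    N, H, W = train_images.shape
    X = np.fft.fft2(train_images).reshape(N, H * W).T     # columns = image FFTs
    D = np.mean(np.abs(X) ** 2, axis=1)                   # average power spectrum
    Dinv_X = X / D[:, None]
    u = np.ones(N, dtype=complex)                         # unit correlation peaks
    A = X.conj().T @ Dinv_X                               # X^+ D^-1 X  (N x N)
    h = Dinv_X @ np.linalg.solve(A, u)
    return h.reshape(H, W)

def correlate(test_image, filt):
    """Correlation plane of a test image with the MACE filter."""
    T = np.fft.fft2(test_image)
    return np.real(np.fft.ifft2(T * np.conj(filt)))

train = np.random.rand(3, 64, 64)            # stand-in for registered face crops
filt = mace_filter(train)
plane = correlate(train[0], filt)
print("peak-to-sidelobe proxy:", plane.max() / (np.abs(plane).mean() + 1e-12))
```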
Clinical evaluation of JPEG2000 compression for digital mammography
NASA Astrophysics Data System (ADS)
Sung, Min-Mo; Kim, Hee-Joung; Kim, Eun-Kyung; Kwak, Jin-Young; Yoo, Jae-Kyung; Yoo, Hyung-Sik
2002-06-01
Medical images, such as computed radiography (CR) and digital mammographic images, require large storage facilities and long transmission times for picture archiving and communications system (PACS) implementation. The American College of Radiology and the National Electrical Manufacturers Association (ACR/NEMA) are planning to adopt a JPEG2000 compression algorithm in the digital imaging and communications in medicine (DICOM) standard to better utilize medical images. The purpose of the study was to evaluate the compression ratios of JPEG2000 for digital mammographic images using peak signal-to-noise ratio (PSNR), receiver operating characteristic (ROC) analysis, and the t test. Traditional statistical quality measures such as PSNR, a commonly used measure for the evaluation of reconstructed images, quantify how the reconstructed image differs from the original by making pixel-by-pixel comparisons. The ability to accurately discriminate diseased cases from normal cases is evaluated using ROC curve analysis. ROC curves can be used to compare the diagnostic performance of two or more reconstructed images. The t test can also be used to evaluate the subjective image quality of reconstructed images. The results of the t test suggested that the possible compression ratio using JPEG2000 for digital mammographic images may be as much as 15:1 without visual loss, while preserving significant medical information, at a confidence level of 99%, although both PSNR and ROC analyses suggest that as much as an 80:1 compression ratio can be achieved without affecting clinical diagnostic performance.
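As a quick reference for the PSNR figure mentioned above, the sketch below computes it from a pixel-by-pixel mean squared error; the 12-bit maximum value is an assumption chosen to match typical digital mammograms, not a value taken from the study.

```python
# Minimal sketch: PSNR between an original and a reconstructed image.
import numpy as np

def psnr(original, reconstructed, max_value=4095.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

orig = np.random.randint(0, 4096, (512, 512))                  # stand-in 12-bit image
rec = np.clip(orig + np.random.randint(-8, 9, orig.shape), 0, 4095)
print("PSNR = %.2f dB" % psnr(orig, rec))
```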
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps. (1) Transform an image by a two-level DWT followed by a DCT to produce two matrices: the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the outputs of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC values are combined with the decoded AC coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method is perceptually superior to JPEG, with equivalent quality to JPEG2000, and is able to more accurately reconstruct surface patches in 3D.
McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R
2007-05-01
The objectives of this study were to assess the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative image quality ratings and to compare these with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated the image quality on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P < or = .05). When we compared subjective indexes, JPEG 2000 compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R2 > 0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.
NASA Astrophysics Data System (ADS)
Starosolski, Roman
2016-07-01
Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.
NASA Astrophysics Data System (ADS)
2001-01-01
Last year saw very good progress at ESO's Paranal Observatory , the site of the Very Large Telescope (VLT). The third and fourth 8.2-m Unit Telescopes, MELIPAL and YEPUN had "First Light" (cf. PR 01/00 and PR 18/00 ), while the first two, ANTU and KUEYEN , were busy collecting first-class data for hundreds of astronomers. Meanwhile, work continued towards the next phase of the VLT project, the combination of the telescopes into the VLT Interferometer. The test instrument, VINCI (cf. PR 22/00 ) is now being installed in the VLTI Laboratory at the centre of the observing platform on the top of Paranal. Below is a new collection of video sequences and photos that illustrate the latest developments at the Paranal Observatory. The were obtained by the EPR Video Team in December 2000. The photos are available in different formats, including "high-resolution" that is suitable for reproduction purposes. A related ESO Video News Reel for professional broadcasters will soon become available and will be announced via the usual channels. Overview Paranal Observatory (Dec. 2000) Video Clip 02a/01 [MPEG - 4.5Mb] ESO PR Video Clip 02a/01 "Paranal Observatory (December 2000)" (4875 frames/3:15 min) [MPEG Video+Audio; 160x120 pix; 4.5Mb] [MPEG Video+Audio; 320x240 pix; 13.5 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02a/01 shows some of the construction activities at the Paranal Observatory in December 2000, beginning with a general view of the site. Then follow views of the Residencia , a building that has been designed by Architects Auer and Weber in Munich - it integrates very well into the desert, creating a welcome recreational site for staff and visitors in this harsh environment. The next scenes focus on the "stations" for the auxiliary telescopes for the VLTI and the installation of two delay lines in the 140-m long underground tunnel. The following part of the video clip shows the start-up of the excavation work for the 2.6-m VLT Survey Telescope (VST) as well as the location known as the "NTT Peak", now under consideration for the installation of the 4-m VISTA telescope. The last images are from to the second 8.2-m Unit Telescope, KUEYEN, that has been in full use by the astronomers with the UVES and FORS2 instruments since April 2000. ESO PR Photo 04a/01 ESO PR Photo 04a/01 [Preview - JPEG: 466 x 400 pix - 58k] [Normal - JPEG: 931 x 800 pix - 688k] [Hires - JPEG: 3000 x 2577 pix - 7.6M] Caption : PR Photo 04a/01 shows an afternoon view from the Paranal summit towards East, with the Base Camp and the new Residencia on the slope to the right, above the valley in the shadow of the mountain. ESO PR Photo 04b/01 ESO PR Photo 04b/01 [Preview - JPEG: 791 x 400 pix - 89k] [Normal - JPEG: 1582 x 800 pix - 1.1Mk] [Hires - JPEG: 3000 x 1517 pix - 3.6M] PR Photo 04b/01 shows the ramp leading to the main entrance to the partly subterranean Residencia , with the steel skeleton for the dome over the central area in place. ESO PR Photo 04c/01 ESO PR Photo 04c/01 [Preview - JPEG: 498 x 400 pix - 65k] [Normal - JPEG: 995 x 800 pix - 640k] [Hires - JPEG: 3000 x 2411 pix - 6.6M] PR Photo 04c/01 is an indoor view of the reception hall under the dome, looking towards the main entrance. ESO PR Photo 04d/01 ESO PR Photo 04d/01 [Preview - JPEG: 472 x 400 pix - 61k] [Normal - JPEG: 944 x 800 pix - 632k] [Hires - JPEG: 3000 x 2543 pix - 5.8M] PR Photo 04d/01 shows the ramps from the reception area towards the rooms. 
The VLT Interferometer The Delay Lines consitute a most important element of the VLT Interferometer , cf. PR Photos 26a-e/00. At this moment, two Delay Lines are operational on site. A third system will be integrated early this year. The VLTI Delay Line is located in an underground tunnel that is 168 metres long and 8 metres wide. This configuration has been designed to accommodate up to eight Delay Lines, including their transfer optics in an ideal environment: stable temperature, high degree of cleanliness, low levels of straylight, low air turbulence. The positions of the Delay Line carriages are computed to adjust the Optical Path Lengths requested for the fringe pattern observation. The positions are controlled in real time by a laser metrology system, specially developed for this purpose. The position precision is about 20 nm (1 nm = 10 -9 m, or 1 millionth of a millimetre) over a distance of 120 metres. The maximum velocity is 0.50 m/s in position mode and maximum 0.05 m/s in operation. The system is designed for 25 year of operation and to survive earthquake up to 8.6 magnitude on the Richter scale. The VLTI Delay Line is a three-year project, carried out by ESO in collaboration with Dutch Space Holdings (formerly Fokker Space) and TPD-TNO . VLTI Delay Lines (December 2000) - ESO PR Video Clip 02b/01 [MPEG - 3.6Mb] ESO PR Video Clip 02b/01 "VLTI Delay Lines (December 2000)" (2000 frames/1:20 min) [MPEG Video+Audio; 160x120 pix; 3.6Mb] [MPEG Video+Audio; 320x240 pix; 13.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02b/00 shows the Delay Lines of the VLT Interferometer facility at Paranal during tests. One of the carriages is moving on 66-metre long rectified rails, driven by a linear motor. The carriage is equipped with three wheels in order to preserve high guidance accuracy. Another important element is the Cat's Eye that reflects the light from the telescope to the VLT instrumentation. This optical system is made of aluminium (including the mirrors) to avoid thermo-mechanical problems. ESO PR Photo 04e/01 ESO PR Photo 04e/01 [Preview - JPEG: 400 x 402 pix - 62k] [Normal - JPEG: 800 x 804 pix - 544k] [Hires - JPEG: 3000 x 3016 pix - 6.2M] Caption : PR Photo 04e/01 shows one of the 30 "stations" for the movable 1.8-m Auxiliary Telescopes. When one of these telescopes is positioned ("parked") on top of it, The light will be guided through the hole towards the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04f/01 ESO PR Photo 04f/01 [Preview - JPEG: 568 x 400 pix - 96k] [Normal - JPEG: 1136 x 800 pix - 840k] [Hires - JPEG: 3000 x 2112 pix - 4.6M] PR Photo 04f/01 shows a general view of the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04g/01 ESO PR Photo 04g/01 [Preview - JPEG: 406 x 400 pix - 62k] [Normal - JPEG: 812 x 800 pix - 448k] [Hires - JPEG: 3000 x 2956 pix - 5.5M] PR Photo 04g/01 shows one of the Delay Line carriages in parking position. The "NTT Peak" The "NTT Peak" is a mountain top located about 2 km to the north of Paranal. It received this name when ESO considered to move the 3.58-m New Technology Telescope from La Silla to this peak. The possibility of installing the 4-m VISTA telescope (cf. PR 03/00 ) on this peak is now being discussed. 
ESO PR Photo 04h/01 shows the view from the "NTT Peak" towards south, with the Paranal mountain and the VLT enclosures in the background. ESO PR Photo 04i/01 is a view towards the "NTT Peak" from the top of the Paranal mountain; the access road and the concrete pillar that was used to support a site-testing telescope at the top of this peak are seen. This is the caption to ESO PR Photos 04a-i/01 and PR Video Clips 02a-b/01. They may be reproduced, if credit is given to the European Southern Observatory. The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 01/01 about the Physics On Stage Festival (11 January 2001). Information is also available on the web about other ESO videos.
NASA Astrophysics Data System (ADS)
Hayat, Khizar; Puech, William; Gesquière, Gilles
2010-04-01
We propose an adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy to integrate disparate data, needed for a typical 3-D visualization, into a single JPEG2000 format file. JPEG2000 encoding provides a standard format on one hand and the needed multiresolution for scalability on the other. The method has the potential of being imperceptible and robust at the same time. While spread spectrum (SS) methods are known for the high robustness they offer, our data-hiding strategy is at the same time removable, which ensures the highest possible visualization quality. The SS embedding of the discrete wavelet transform (DWT)-domain depth map is carried out in the transform-domain YCrCb components of the JPEG2000 coding stream, just after the DWT stage. To maintain synchronization, the embedding is carried out while taking into account the correspondence of subbands. Since security is not the immediate concern, we are at liberty with the strength of embedding. This permits us to increase the robustness and achieve the reversibility of our method. To estimate the maximum tolerable error in the depth map according to a given viewpoint, a human visual system (HVS)-based psychovisual analysis is also presented.
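As a rough illustration of the additive spread-spectrum embedding idea summarized above (a minimal sketch, not the authors' A4S implementation; the embedding strength alpha, the shared seed, and the per-bit chip allocation are illustrative assumptions):

    import numpy as np

    def ss_embed(host_subband, payload_bits, alpha=2.0, seed=42):
        """Additive spread-spectrum embedding sketch: each payload bit modulates
        a pseudo-random +/-1 carrier that is added to a host wavelet subband."""
        rng = np.random.default_rng(seed)          # seed acts as the shared key
        flat = host_subband.astype(np.float64).ravel().copy()
        chip_len = flat.size // len(payload_bits)  # carrier chips per payload bit
        for i, bit in enumerate(payload_bits):
            carrier = rng.choice([-1.0, 1.0], size=chip_len)
            sign = 1.0 if bit else -1.0
            flat[i * chip_len:(i + 1) * chip_len] += alpha * sign * carrier
        return flat.reshape(host_subband.shape)

Because the seeded carrier can be regenerated at the decoder and subtracted again, an additive scheme of this kind can be made removable, which is the property the abstract emphasizes.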
Region of interest and windowing-based progressive medical image delivery using JPEG2000
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Mukhopadhyay, Sudipta; Wheeler, Frederick W.; Avila, Ricardo S.
2003-05-01
An important telemedicine application is the perusal of CT scans (in digital format) by radiologists situated at remote locations, retrieved for diagnostic purposes from a central server housed in a healthcare enterprise across a bandwidth-constrained network. It is generally expected that a viewing station respond to an image request by displaying the image within 1-2 seconds. Owing to limited bandwidth, it may not be possible to deliver the complete image in such a short period of time with traditional techniques. In this paper, we investigate progressive image delivery solutions using JPEG 2000. An estimate of the time taken at different network bandwidths is performed to compare their relative merits. We further make use of the fact that most medical images are 12-16 bits, but are ultimately converted to an 8-bit image via windowing for display on the monitor. We propose a windowing progressive RoI technique to exploit this, and investigate JPEG 2000 RoI-based compression after applying a favorite or default window setting to the original image. Subsequent requests for different RoIs and window settings would then be processed at the server. For the windowing progressive RoI mode, we report a 50% reduction in transmission time.
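The window/level mapping from 12-16 bit CT data to 8-bit display values mentioned above is a standard operation; a minimal sketch (the window centre and width defaults are illustrative, not values from the paper) is:

    import numpy as np

    def apply_window(image16, center=40.0, width=400.0):
        """Map a 12-16 bit CT image (e.g. Hounsfield units) to 8-bit display
        values using a linear window/level transform."""
        lo, hi = center - width / 2.0, center + width / 2.0
        img = np.clip(image16.astype(np.float64), lo, hi)
        return np.round((img - lo) / (hi - lo) * 255.0).astype(np.uint8)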
An Efficient Image Compressor for Charge Coupled Devices Camera
Li, Jin; Xing, Fei; You, Zheng
2014-01-01
Recently, discrete wavelet transform (DWT)-based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the later coding stage. In this paper, we propose a low-complexity posttransform coupled with compressed sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a pair of posttransform bases is applied to the DWT coefficients. The pair of bases consists of a DCT basis and a Hadamard basis, which can be used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates, and it does not have the excessive implementation complexity of JPEG2000. PMID:25114977
Study and validation of tools interoperability in JPSEC
NASA Astrophysics Data System (ADS)
Conan, V.; Sadourny, Y.; Jean-Marie, K.; Chan, C.; Wee, S.; Apostolopoulos, J.
2005-08-01
Digital imagery is important in many applications today, and the security of digital imagery is important today and is likely to gain in importance in the near future. The emerging international standard ISO/IEC JPEG-2000 Security (JPSEC) is designed to provide security for digital imagery, and in particular digital imagery coded with the JPEG-2000 image coding standard. One of the primary goals of a standard is to ensure interoperability between creator and consumer implementations produced by different manufacturers. The JPSEC standard, similar to the popular JPEG and MPEG families of standards, specifies only the bitstream syntax and the receiver's processing, and not how the bitstream is created or the details of how it is consumed. This paper examines interoperability for the JPSEC standard, and presents an example JPSEC consumption process which can provide insights into the design of JPSEC consumers. Initial interoperability tests between different groups with independently created implementations of JPSEC creators and consumers have been successful in providing the JPSEC security services of confidentiality (via encryption) and authentication (via message authentication codes, or MACs). Further interoperability work is ongoing.
Wavelet-based compression of M-FISH images.
Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R
2005-05-01
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
Diagnostic accuracy of chest X-rays acquired using a digital camera for low-cost teleradiology.
Szot, Agnieszka; Jacobson, Francine L; Munn, Samson; Jazayeri, Darius; Nardell, Edward; Harrison, David; Drosten, Ralph; Ohno-Machado, Lucila; Smeaton, Laura M; Fraser, Hamish S F
2004-02-01
Store-and-forward telemedicine, using e-mail to send clinical data and digital images, offers a low-cost alternative for physicians in developing countries to obtain second opinions from specialists. To explore the potential usefulness of this technique, 91 chest X-ray images were photographed using a digital camera and a view box. Four independent readers (three radiologists and one pulmonologist) read two types of digital (JPEG and JPEG2000) and original film images and indicated their confidence in the presence of eight features known to be radiological indicators of tuberculosis (TB). The results were compared to a "gold standard" established by two different radiologists, and assessed using receiver operating characteristic (ROC) curve analysis. There was no statistical difference in the overall performance between the readings from the original films and both types of digital images. The size of JPEG2000 images was approximately 120KB, making this technique feasible for slow internet connections. Our preliminary results show the potential usefulness of this technique particularly for tuberculosis and lung disease, but further studies are required to refine its potential.
JPIC-Rad-Hard JPEG2000 Image Compression ASIC
NASA Astrophysics Data System (ADS)
Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov
2010-08-01
JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources in optical, panchromatic and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces; the JPEG2K-E IP core from Alma implements the compression algorithm [2]; Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.
Dynamic power scheduling system for JPEG2000 delivery over wireless networks
NASA Astrophysics Data System (ADS)
Martina, Maurizio; Vacca, Fabrizio
2003-06-01
The diffusion of third-generation mobile terminals is encouraging the development of new multimedia-based applications. The reliable transmission of audiovisual content will gain major interest, being one of the most valuable services. Nevertheless, the mobile scenario is severely power-constrained: high compression ratios and refined energy management strategies are highly advisable. JPEG2000 as the source encoding stage assures excellent performance with extremely good visual quality. However, the limited power budget makes it necessary to limit the computational effort in order to save as much power as possible. Moreover, since the wireless environment is error-prone, strong error-resilience features need to be employed. This paper investigates the trade-off between quality and power in such a challenging environment.
A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images
NASA Astrophysics Data System (ADS)
Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo
2007-03-01
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages compared to other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios, from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves from the CAD classification results. A support vector machine (SVM) and neural networks (NN) were used to classify the malignancy of input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases with increasing compression ratio, with small fluctuations.
NASA Technical Reports Server (NTRS)
2002-01-01
Full-size images June 17, 2001 (2.0 MB JPEG) June 14, 2000 (2.1 MB JPEG) Light snowfall in the winter of 2000-01 led to a dry summer in the Pacific Northwest. The drought led to a conflict between farmers and fishing communities in the Klamath River Basin over water rights, and a series of forest fires in Washington, Oregon, and Northern California. The pair of images above, both acquired by the Enhanced Thematic Mapper Plus (ETM+) aboard the Landsat 7 satellite, show the snowpack on Mt. Shasta in June 2000 and 2001. On June 14, 2000, the snow extends to the lower slopes of the 4,317-meter (14,162-foot) volcano. At nearly the same time this year (June 17, 2001) the snow had retreated well above the tree-line. The drought in the region was categorized as moderate to severe by the National Oceanographic and Atmospheric Administration (NOAA), and the United States Geological Survey (USGS) reported that streamflow during June was only about 25 percent of the average. Above and to the left of Mt. Shasta is Lake Shastina, a reservoir which is noticeably lower in the 2001 image than the 2000 image. Images courtesy USGS EROS Data Center and the Landsat 7 Science Team
Atmospheric Science Data Center
2014-05-15
article title: Los Alamos, New Mexico. Multi-angle views of the fire in Los Alamos, New Mexico, May 9, 2000. These true-color images covering north-central New Mexico ...
Parallel efficient rate control methods for JPEG 2000
NASA Astrophysics Data System (ADS)
Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko
2017-09-01
Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image, split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will get truncated, in order to stop the execution prematurely and save time. However, none of them has been defined with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To do so, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
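For readers unfamiliar with PCRD-Opt, the following is a much-simplified, single-threaded sketch of the underlying allocation idea (Lagrangian bisection over per-code-block truncation points); it ignores the convex-hull preprocessing and the coding-pass ordering details of the real algorithm, and the data layout is an assumption rather than the authors' GPU design:

    def pcrd_allocate(blocks, budget_bytes, iters=60):
        """Simplified PCRD-Opt sketch.  `blocks` is a list of per-code-block
        lists of cumulative (rate_bytes, distortion) truncation candidates,
        with distortion decreasing as rate grows.  A Lagrange multiplier is
        bisected so that choosing, per block, the candidate minimizing
        D + lam * R meets the overall byte budget."""
        def allocate(lam):
            total, cuts = 0, []
            for points in blocks:
                costs = [d + lam * r for (r, d) in points]
                i = min(range(len(points)), key=costs.__getitem__)
                cuts.append(i)
                total += points[i][0]
            return total, cuts

        lo, hi = 0.0, 1e9                      # lam = 0 keeps everything
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            rate, _ = allocate(mid)
            if rate > budget_bytes:
                lo = mid                       # too many bytes: penalize rate harder
            else:
                hi = mid
        return allocate(hi)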
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
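The RMSE and PSNR metrics referred to above have standard definitions; a minimal sketch for 8-bit images (the function names are illustrative) is:

    import numpy as np

    def rmse(ref, test):
        """Root mean square error between a reference and a decompressed image."""
        diff = ref.astype(np.float64) - test.astype(np.float64)
        return np.sqrt(np.mean(diff ** 2))

    def psnr(ref, test, max_val=255.0):
        """Peak signal-to-noise ratio in dB (max_val = 255 for 8-bit images)."""
        e = rmse(ref, test)
        return float('inf') if e == 0 else 20.0 * np.log10(max_val / e)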
Privacy enabling technology for video surveillance
NASA Astrophysics Data System (ADS)
Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj
2006-05-01
In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. We address more specifically the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and a negligible increase in computational complexity. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or a 2G/3G mobile phone network. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
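The sign-flipping scrambling described above can be sketched as follows; the use of a seeded pseudo-random generator as the key and the flip probability are illustrative assumptions, not the authors' exact scheme. Because flipping is involutive, running the same function again with the same key restores the original coefficients:

    import numpy as np

    def scramble_signs(coeffs, roi_mask, key=1234, flip_prob=0.5):
        """Flip the sign of a pseudo-randomly selected subset of (signed)
        transform coefficients inside a region of interest."""
        rng = np.random.default_rng(key)
        flips = (rng.random(coeffs.shape) < flip_prob) & roi_mask
        out = coeffs.copy()
        out[flips] = -out[flips]
        return out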
Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.
Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella
2010-07-01
Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them only depends on their number and not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates, and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation, and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method favorably compares with state-of-art MDC techniques.
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general and is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
Cornelissen, Frans; Cik, Miroslav; Gustin, Emmanuel
2012-04-01
High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After reviewing existing applications, we saw the opportunity to add innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.
A multicenter observer performance study of 3D JPEG2000 compression of thin-slice CT.
Erickson, Bradley J; Krupinski, Elizabeth; Andriole, Katherine P
2010-10-01
The goal of this study was to determine the compression level at which 3D JPEG2000 compression of thin-slice CTs of the chest and abdomen-pelvis becomes visually perceptible. A secondary goal was to determine if residents in training and non-physicians are substantially different from experienced radiologists in their perception of compression-related changes. This study used multidetector computed tomography 3D datasets with 0.625-1-mm thickness slices of standard chest, abdomen, or pelvis, clipped to 12 bits. The Kakadu v5.2 JPEG2000 compression algorithm was used to compress and decompress the 80 examinations, creating four sets of images: lossless, 1.5 bpp (8:1), 1 bpp (12:1), and 0.75 bpp (16:1). Two randomly selected slices from each examination were shown to observers using a flicker-mode paradigm in which observers rapidly toggled between two images, the original and a compressed version, with the task of deciding whether differences between them could be detected. Six staff radiologists, four residents, and six PhDs experienced in medical imaging (from three institutions) served as observers. Overall, 77.46% of observers detected differences at 8:1, 94.75% at 12:1, and 98.59% at 16:1 compression levels. Across all compression levels, the staff radiologists noted differences 64.70% of the time, the residents detected differences 71.91% of the time, and the PhDs detected differences 69.95% of the time. Even mild compression is perceptible with current technology. The ability to detect differences does not equate to diagnostic differences, although perception of compression artifacts could affect diagnostic decision making and diagnostic workflow.
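As a check on the figures quoted above, for 12-bit source data the compression ratio follows directly from the bit rate: 12/1.5 = 8:1, 12/1 = 12:1, and 12/0.75 = 16:1. A one-line helper (the 12-bit depth is taken from the abstract; the function name is illustrative):

    def compression_ratio(bits_per_pixel_compressed, source_bit_depth=12):
        """Ratio of original to compressed size for a given bit rate."""
        return source_bit_depth / bits_per_pixel_compressed

    # e.g. compression_ratio(1.5) -> 8.0, compression_ratio(0.75) -> 16.0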
Steganographic embedding in containers-images
NASA Astrophysics Data System (ADS)
Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.
2018-05-01
Steganography is one of the approaches to ensuring the protection of information transmitted over a network. However, a steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method for embedding data into the frequency domain of images in the JPEG 2000 format. It is proposed to use the Benham-Memon-Yeo-Yeung method, with the discrete wavelet transform used in place of the discrete cosine transform. Two requirements for images are formulated. Structural similarity is chosen to assess the quality of data embedding. Experiments confirm that satisfying these requirements allows a high quality of data embedding to be achieved.
Multi-Class Classification for Identifying JPEG Steganography Embedding Methods
2008-09-01
B.H. (2000). Steganography: Hidden Images, A New Challenge in the Fight Against Child Porn. UPDATE, Volume 13, Number 2, pp. 1-4. Retrieved June 3... Other crimes involving the use of steganography include child pornography, where the stego files are used to hide a predator's location when posting
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2014 CFR
2014-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2012 CFR
2012-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2010 CFR
2010-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2011 CFR
2011-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2013 CFR
2013-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
Energy and Quality-Aware Multimedia Signal Processing
NASA Astrophysics Data System (ADS)
Emre, Yunus
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combinations of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, a 33% to 46% reduction in energy consumption while incurring only 0.5 dB to 1.5 dB loss in PSNR.
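One reading of the SAD scheme sketched above (an interpretation for illustration, not the dissertation's exact design; the number of retained bits is an assumed parameter) truncates the most significant bits of each absolute difference, since most differences are small:

    import numpy as np

    def sad_msb_truncated(block_a, block_b, keep_bits=6):
        """SAD sketch with most-significant-bit truncation: absolute differences
        are assumed to be small, so only the `keep_bits` low-order bits of each
        difference are kept (larger values saturate), which shortens the
        accumulation datapath at the cost of a small approximation error."""
        ad = np.abs(block_a.astype(np.int32) - block_b.astype(np.int32))
        ad = np.minimum(ad, (1 << keep_bits) - 1)   # truncate/saturate the MSBs
        return int(ad.sum())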
Compression strategies for LiDAR waveform cube
NASA Astrophysics Data System (ADS)
Jóźków, Grzegorz; Toth, Charles; Quirk, Mihaela; Grejner-Brzezinska, Dorota
2015-01-01
Full-waveform LiDAR data (FWD) provide a wealth of information about the shape and materials of the surveyed areas. Unlike discrete data that retains only a few strong returns, FWD generally keeps the whole signal, at all times, regardless of the signal intensity. Hence, FWD will have an increasingly well-deserved role in mapping and beyond, in the much desired classification in the raw data format. Full-waveform systems currently perform only the recording of the waveform data at the acquisition stage; the return extraction is mostly deferred to post-processing. Although the full waveform preserves most of the details of the real data, it presents a serious practical challenge for a wide use: much larger datasets compared to those from the classical discrete return systems. Atop the need for more storage space, the acquisition speed of the FWD may also limit the pulse rate on most systems that cannot store data fast enough, and thus, reduces the perceived system performance. This work introduces a waveform cube model to compress waveforms in selected subsets of the cube, aimed at achieving decreased storage while maintaining the maximum pulse rate of FWD systems. In our experiments, the waveform cube is compressed using classical methods for 2D imagery that are further tested to assess the feasibility of the proposed solution. The spatial distribution of airborne waveform data is irregular; however, the manner of the FWD acquisition allows the organization of the waveforms in a regular 3D structure similar to familiar multi-component imagery, as those of hyper-spectral cubes or 3D volumetric tomography scans. This study presents the performance analysis of several lossy compression methods applied to the LiDAR waveform cube, including JPEG-1, JPEG-2000, and PCA-based techniques. Wide ranges of tests performed on real airborne datasets have demonstrated the benefits of the JPEG-2000 Standard where high compression rates incur fairly small data degradation. In addition, the JPEG-2000 Standard-compliant compression implementation can be fast and, thus, used in real-time systems, as compressed data sequences can be formed progressively during the waveform data collection. We conclude from our experiments that 2D image compression strategies are feasible and efficient approaches, thus they might be applied during the acquisition of the FWD sensors.
Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data
NASA Astrophysics Data System (ADS)
Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada
2009-08-01
Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure spectral substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both the spectral and the spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial spectral endmember extraction (SSEE) techniques. Experimental results are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada and with detailed ground-truth information available from U. S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
Edge-Based Image Compression with Homogeneous Diffusion
NASA Astrophysics Data System (ADS)
Mainberger, Markus; Weickert, Joachim
It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
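The decoding step described above amounts to solving the Laplace equation with the encoded edge pixels as boundary data; a minimal Jacobi-style sketch (the iteration count and the periodic boundary handling via np.roll are simplifications, not the authors' solver) is:

    import numpy as np

    def diffusion_inpaint(image, known_mask, iters=2000):
        """Homogeneous diffusion inpainting: keep pixels where known_mask is
        True and iterate the discrete Laplace equation elsewhere until an
        approximate steady state is reached."""
        u = image.astype(np.float64).copy()
        for _ in range(iters):
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                          np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u = np.where(known_mask, image, avg)
        return u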
JPEG2000 Image Compression on Solar EUV Images
NASA Astrophysics Data System (ADS)
Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke
2017-01-01
For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
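For reference, the structural similarity index underlying the MSSIM metric mentioned above is commonly defined over corresponding local windows x and y of the two images as

    \mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)\,(2\sigma_{xy} + C_2)}
                              {(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)},
    \qquad
    \mathrm{MSSIM} = \frac{1}{W}\sum_{w=1}^{W} \mathrm{SSIM}(x_w, y_w),

where the mu and sigma terms are the window means, variances and covariance, C_1 and C_2 are small stabilizing constants, and W is the number of windows.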
The JPEG XT suite of standards: status and future plans
NASA Astrophysics Data System (ADS)
Richter, Thomas; Bruylants, Tim; Schelkens, Peter; Ebrahimi, Touradj
2015-09-01
The JPEG standard has known an enormous market adoption. Daily, billions of pictures are created, stored and exchanged in this format. The JPEG committee acknowledges this success and spends continued efforts in maintaining and expanding the standard specifications. JPEG XT is a standardization effort targeting the extension of the JPEG features by enabling support for high dynamic range imaging, lossless and near-lossless coding, and alpha channel coding, while also guaranteeing backward and forward compatibility with the JPEG legacy format. This paper gives an overview of the current status of the JPEG XT standards suite. It discusses the JPEG legacy specification, and details how higher dynamic range support is facilitated both for integer and floating-point color representations. The paper shows how JPEG XT's support for lossless and near-lossless coding of low and high dynamic range images is achieved in combination with backward compatibility to JPEG legacy. In addition, the extensible boxed-based JPEG XT file format on which all following and future extensions of JPEG will be based is introduced. This paper also details how the lossy and lossless representations of alpha channels are supported to allow coding transparency information and arbitrarily shaped images. Finally, we conclude by giving prospects on upcoming JPEG standardization initiative JPEG Privacy & Security, and a number of other possible extensions in JPEG XT.
An RBF-based compression method for image-based relighting.
Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung
2006-04-01
In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
A software platform for the analysis of dermatology images
NASA Astrophysics Data System (ADS)
Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon
2017-11-01
The purpose of this paper is to present a software platform developed in Python programming environment that can be used for the processing and analysis of dermatology images. The platform provides the capability for reading a file that contains a dermatology image. The platform supports image formats such as Windows bitmaps, JPEG, JPEG2000, portable network graphics, TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool to a dermatologist. Furthermore, it could be used to classify images including from other anatomical parts such as breast or lung, after proper re-training of the classification algorithms.
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method that combines geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least-squares (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
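The per-region least-squares predictor design mentioned above can be illustrated with a generic sketch (the causal neighbourhood offsets and the training set are assumptions, not the authors' exact configuration):

    import numpy as np

    def ls_predictor(region, offsets=((0, -1), (-1, 0), (-1, -1), (-1, 1))):
        """Fit least-squares prediction weights over causal neighbours for one
        region, then return the prediction residuals that would be entropy coded."""
        h, w = region.shape
        rows, targets = [], []
        for y in range(1, h):
            for x in range(1, w - 1):
                rows.append([region[y + dy, x + dx] for dy, dx in offsets])
                targets.append(region[y, x])
        A, t = np.asarray(rows, float), np.asarray(targets, float)
        weights, *_ = np.linalg.lstsq(A, t, rcond=None)   # normal-equation solution
        residuals = t - A @ weights
        return weights, residuals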
Helioviewer.org: Browsing Very Large Image Archives Online Using JPEG 2000
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Schmidt, L.; Wamsler, B.; Beck, J.; Alexanderian, A.; Fleck, B.
2009-12-01
As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Recent efforts have resulted in increased performance, dynamic movie generation, and improved support for mobile web browsers. Future functionality will include: support for additional data-sources including RHESSI, SDO, STEREO, and TRACE, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of errors is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering introduced some undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
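The alternating projections described above can be sketched as follows, using PyWavelets for the wavelet transform and a uniform low-pass filter from SciPy; the wavelet, filter size, iteration count and the np.nan marking of damaged coefficients are illustrative assumptions, and the paper's adaptive, edge-map-guided filter mask is not reproduced:

    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def pocs_conceal(image, coeff_known, wavelet="db2", level=2, iters=10, size=3):
        """Alternate between (1) spatial low-pass filtering and (2) restoring the
        uncorrupted wavelet coefficients in `coeff_known` (a list shaped like
        pywt.wavedec2 output, with np.nan marking damaged coefficients)."""
        u = image.astype(np.float64)
        for _ in range(iters):
            u = uniform_filter(u, size=size)                 # projection 1: smoothness
            coeffs = pywt.wavedec2(u, wavelet, level=level)
            restored = [np.where(np.isnan(coeff_known[0]), coeffs[0], coeff_known[0])]
            for (ch, cv, cd), (kh, kv, kd) in zip(coeffs[1:], coeff_known[1:]):
                restored.append((np.where(np.isnan(kh), ch, kh),
                                 np.where(np.isnan(kv), cv, kv),
                                 np.where(np.isnan(kd), cd, kd)))
            u = pywt.waverec2(restored, wavelet)             # projection 2: data fidelity
            u = u[:image.shape[0], :image.shape[1]]
        return u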
An adaptable navigation strategy for Virtual Microscopy from mobile platforms.
Corredor, Germán; Romero, Eduardo; Iregui, Marcela
2015-04-01
Real integration of Virtual Microscopy with the pathologist's service workflow requires the design of adaptable strategies for any hospital service to interact with a set of Whole Slide Images. Nowadays, mobile devices have real potential for supporting an online pervasive network of specialists working together. However, such devices are still very limited. This article introduces a novel, highly adaptable strategy for streaming and visualizing WSI from mobile devices. The presented approach effectively exploits and extends the granularity of the JPEG2000 standard and integrates it with different strategies to achieve a lossless, loosely coupled, decoder- and platform-independent implementation, adaptable to any interaction model. The performance was evaluated by two expert pathologists interacting with a set of 20 virtual slides. The method efficiently uses the available device resources: the memory usage did not exceed 7% of the device capacity, while the decoding times were smaller than 200 ms per Region of Interest, i.e., a window of 256×256 pixels. This model is easily adaptable to other medical imaging scenarios.
Confidential storage and transmission of medical image data.
Norcen, R; Podesser, M; Pommer, A; Schmidt, H-P; Uhl, A
2003-05-01
We discuss computationally efficient techniques for confidential storage and transmission of medical image data. Two types of partial encryption techniques based on AES are proposed. The first encrypts a subset of bitplanes of plain image data whereas the second encrypts parts of the JPEG2000 bitstream. We find that encrypting between 20% and 50% of the visual data is sufficient to provide high confidentiality.
2001-10-25
Table III. In spite of the same quality in the ROI, it was decided that the images in the cases where QF is 1.3, 1.5 or 2.0 are not good for diagnosis. Of... but (b) is not good for diagnosis in the judgment of the ultrasonographer. Results reveal that the wavelet transform achieves higher image quality compared
JHelioviewer. Time-dependent 3D visualisation of solar and heliospheric data
NASA Astrophysics Data System (ADS)
Müller, D.; Nicula, B.; Felix, S.; Verstringe, F.; Bourgoignie, B.; Csillaghy, A.; Berghmans, D.; Jiggens, P.; García-Ortiz, J. P.; Ireland, J.; Zahniy, S.; Fleck, B.
2017-09-01
Context. Solar observatories are providing the world-wide community with a wealth of data, covering wide time ranges (e.g. Solar and Heliospheric Observatory, SOHO), multiple viewpoints (Solar TErrestrial RElations Observatory, STEREO), and returning large amounts of data (Solar Dynamics Observatory, SDO). In particular, the large volume of SDO data presents challenges; the data are available only from a few repositories, and full-disk, full-cadence data for reasonable durations of scientific interest are difficult to download, due to their size and the download rates available to most users. From a scientist's perspective this poses three problems: accessing, browsing, and finding interesting data as efficiently as possible. Aims: To address these challenges, we have developed JHelioviewer, a visualisation tool for solar data based on the JPEG 2000 compression standard and part of the open source ESA/NASA Helioviewer Project. Since the first release of JHelioviewer in 2009, the scientific functionality of the software has been extended significantly, and the objective of this paper is to highlight these improvements. Methods: The JPEG 2000 standard offers useful new features that facilitate the dissemination and analysis of high-resolution image data and offers a solution to the challenge of efficiently browsing petabyte-scale image archives. The JHelioviewer software is open source, platform independent, and extendable via a plug-in architecture. Results: With JHelioviewer, users can visualise the Sun for any time period between September 1991 and today; they can perform basic image processing in real time, track features on the Sun, and interactively overlay magnetic field extrapolations. The software integrates solar event data and a timeline display. Once an interesting event has been identified, science quality data can be accessed for in-depth analysis. As a first step towards supporting science planning of the upcoming Solar Orbiter mission, JHelioviewer offers a virtual camera model that enables users to set the vantage point to the location of a spacecraft or celestial body at any given time.
JPEG 2000 in advanced ground station architectures
NASA Astrophysics Data System (ADS)
Chien, Alan T.; Brower, Bernard V.; Rajan, Sreekanth D.
2000-11-01
The integration and management of information from distributed and heterogeneous information producers and providers must be a key foundation of any developing imagery intelligence system. Historically, imagery providers acted as production agencies for imagery, imagery intelligence, and geospatial information. In the future, these imagery producers will be evolving to act more like e-business information brokers. The management of imagery and geospatial information (visible, spectral, infrared (IR), radar, elevation, or other feature and foundation data) is crucial from a quality and content perspective. By 2005, there will be significantly advanced collection systems and a myriad of storage devices. There will also be a number of automated and man-in-the-loop correlation, fusion, and exploitation capabilities. All of these new imagery collection and storage systems will result in a higher volume and greater variety of imagery being disseminated and archived in the future. This paper illustrates the importance, from a collection, storage, exploitation, and dissemination perspective, of the proper selection and implementation of standards-based compression technology for ground station and dissemination/archive networks. It specifically discusses the new compression capabilities featured in JPEG 2000 and how that commercially based technology can provide significant improvements to the overall imagery and geospatial enterprise, both from an architectural perspective and from a user's perspective.
NASA Astrophysics Data System (ADS)
Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada
2010-07-01
Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spatial information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation based on the endmembers derived by different methods is also substantiated. Experimental results are conducted using a hyperspectral data set collected by NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
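The LLR adjustment described above (boosting the reliability of bits the JPEG2000 decoder has flagged as correct and attenuating bits flagged as erroneous before the next LDPC iteration) can be sketched as follows; the weighting value is a placeholder, not the SNR-dependent function derived in the paper, and the sign convention is an assumption:

    import numpy as np

    def reweight_llrs(llrs, known_correct, known_error, hard_bits, weight=4.0):
        """Strengthen the LLR magnitude of bits the source decoder marked as
        correct (keeping their current hard decision, with positive LLR meaning
        bit 0) and flip/attenuate bits marked as erroneous."""
        out = llrs.copy()
        sign = np.where(hard_bits == 0, 1.0, -1.0)       # map bit 0/1 to LLR sign +/-
        out[known_correct] = weight * np.abs(out[known_correct]) * sign[known_correct]
        out[known_error] = -out[known_error] / weight    # flip and weaken unreliable bits
        return out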
Concurrent access to a virtual microscope using a web service oriented architecture
NASA Astrophysics Data System (ADS)
Corredor, Germán; Iregui, Marcela; Arias, Viviana; Romero, Eduardo
2013-11-01
Virtual microscopy (VM) facilitates visualization and deployment of histopathological virtual slides (VS), a useful tool for education, research and diagnosis. In recent years it has become popular, yet its use is still limited, basically because of the very large size of VS, typically of the order of gigabytes. Such a volume of data requires efficacious and efficient strategies to access the VS content. In an educational or research scenario, several users may need to access and interact with VS at the same time, so, owing to the large data size, very expensive and powerful infrastructure is usually required. This article introduces a novel JPEG2000-based service oriented architecture for streaming and visualizing very large images under scalable strategies, which in addition does not require very specialized infrastructure. Results suggest that the proposed architecture enables transmission and simultaneous visualization of large images while using resources efficiently and offering users adequate response times.
NASA Astrophysics Data System (ADS)
2000-09-01
VLT YEPUN Joins ANTU, KUEYEN and MELIPAL It was a historical moment last night (September 3 - 4, 2000) in the VLT Control Room at the Paranal Observatory , after nearly 15 years of hard work. Finally, four teams of astronomers and engineers were sitting at the terminals - and each team with access to an 8.2-m telescope! From now on, the powerful "Paranal Quartet" will be observing night after night, with a combined mirror surface of more than 210 m 2. And beginning next year, some of them will be linked to form part of the unique VLT Interferometer with unparalleled sensitivity and image sharpness. YEPUN "First Light" Early in the evening, the fourth 8.2-m Unit Telescope, YEPUN , was pointed to the sky for the first time and successfully achieved "First Light". Following a few technical exposures, a series of "first light" photos was made of several astronomical objects with the VLT Test Camera. This instrument was also used for the three previous "First Light" events for ANTU ( May 1998 ), KUEYEN ( March 1999 ) and MELIPAL ( January 2000 ). These images served to evaluate provisionally the performance of the new telescope, mainly in terms of mechanical and optical quality. The ESO staff were very pleased with the results and pronounced YEPUN fit for the subsequent commissioning phase. When the name YEPUN was first given to the fourth VLT Unit Telescope, it was supposed to mean "Sirius" in the Mapuche language. However, doubts have since arisen about this translation and a detailed investigation now indicates that the correct meaning is "Venus" (as the Evening Star). For a detailed explanation, please consult the essay On the Meaning of "YEPUN" , now available at the ESO website. The first images At 21:39 hrs local time (01:39 UT), YEPUN was turned to point in the direction of a dense Milky Way field, near the border between the constellations Sagitta (The Arrow) and Aquila (The Eagle). A guide star was acquired and the active optics system quickly optimized the mirror system. At 21:44 hrs (01:44 UT), the Test Camera at the Cassegrain focus within the M1 mirror cell was opened for 30 seconds, with the planetary nebula Hen 2-428 in the field. The resulting "First Light" image was immediately read out and appeared on the computer screen at 21:45:53 hrs (01:45:53 UT). "Not bad! - "Very nice!" were the first, "business-as-usual"-like comments in the room. The zenith distance during this observation was 44° and the image quality was measured as 0.9 arcsec, exactly the same as that registered by the Seeing Monitoring Telescope outside the telescope building. There was some wind. ESO PR Photo 22a/00 ESO PR Photo 22a/00 [Preview - JPEG: 374 x 400 pix - 128k] [Normal - JPEG: 978 x 1046 pix - 728k] Caption : ESO PR Photo 22a/00 shows a colour composite of some of the first astronomical exposures obtained by YEPUN . The object is the planetary nebula Hen 2-428 that is located at a distance of 6,000-8,000 light-years and seen in a dense sky field, only 2° from the main plane of the Milky Way. As other planetary nebulae, it is caused by a dying star (the bluish object at the centre) that shreds its outer layers. The image is based on exposures through three optical filtres: B(lue) (10 min exposure, seeing 0.9 arcsec; here rendered as blue), V(isual) (5 min; 0.9 arcsec; green) and R(ed) (3 min; 0.9 arcsec; red). The field measures 88 x 78 arcsec 2 (1 pixel = 0.09 arcsec). North is to the lower right and East is to the lower left. 
The 5-day old Moon was about 90° away in the sky that was accordingly bright. The zenith angle was 44°. The ESO staff then proceeded to take a series of three photos with longer exposures through three different optical filtres. They have been combined to produce the image shown in ESO PR Photo 22a/00 . More astronomical images were obtained in sequence, first of the dwarf galaxy NGC 6822 in the Local Group (see PR Photo 22f/00 below) and then of the spiral galaxy NGC 7793 . All 8.2-m telescopes now in operation at Paranal The ESO Director General, Catherine Cesarsky , who was present on Paranal during this event, congratulated the ESO staff to the great achievement, herewith bringing a major phase of the VLT project to a successful end. She was particularly impressed by the excellent optical quality that was achieved at this early moment of the commissioning tests. A measurement showed that already now, 80% of the light is concentrated within 0.22 arcsec. The manager of the VLT project, Massimo Tarenghi , was very happy to reach this crucial project milestone, after nearly fifteen years of hard work. He also remarked that with the M2 mirror already now "in the active optics loop", the telescope was correctly compensating for the somewhat mediocre atmospheric conditions on this night. The next major step will be the "first light" for the VLT Interferometer (VLTI) , when the light from two Unit Telescopes is combined. This event is expected in the middle of next year. Impressions from the YEPUN "First Light" event First Light for YEPUN - ESO PR VC 06/00 ESO PR Video Clip 06/00 "First Light for YEPUN" (5650 frames/3:46 min) [MPEG Video+Audio; 160x120 pix; 7.7Mb] [MPEG Video+Audio; 320x240 pix; 25.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 06/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV-camera in the evening of September 3 at about 23:00 hrs local time (03:00 UT), i.e., soon after the moment of "First Light" for YEPUN . The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the clip. It begins at the moment a guide star is acquired to perform an automatic "active optics" correction of the mirrors; the associated explanation is given by Massimo Tarenghi (VLT Project Manager). The first astronomical observation is performed and the first image of the planetary nebula Hen 2-428 is discussed by the ESO Director General, Catherine Cesarsky . The next image, of the nearby dwarf galaxy NGC 6822 , arrives and is shown and commented on by the ESO Director General. Finally, Massimo Tarenghi talks about the next major step of the VLT Project. The combination of the lightbeams from two 8.2-m Unit Telescopes, planned for the summer of 2001, will mark the beginning of the VLT Interferometer. ESO Press Photo 22b/00 ESO Press Photo 22b/00 [Preview; JPEG: 400 x 300; 88k] [Full size; JPEG: 1600 x 1200; 408k] The enclosure for the fourth VLT 8.2-m Unit Telescope, YEPUN , photographed at sunset on September 3, 2000, immediately before "First Light" was successfully achieved. The upper part of the mostly subterranean Interferometric Laboratory for the VLTI is seen in front. (Digital Photo). 
ESO Press Photo 22c/00 ESO Press Photo 22c/00 [Preview; JPEG: 400 x 300; 112k] [Full size; JPEG: 1280 x 960; 184k] The initial tuning of the YEPUN optical system took place in the early evening of September 3, 2000, from the "observing hut" on the floor of the telescope enclosure. From left to right: Krister Wirenstrand who is responsible for the VLT Control Software, Jason Spyromilio - Head of the Commissioning Team, and Massimo Tarenghi , VLT Manager. (Digital Photo). ESO Press Photo 22d/00 ESO Press Photo 22d/00 [Preview; JPEG: 400 x 300; 112k] [Full size; JPEG: 1280 x 960; 184k] "Mission Accomplished" - The ESO Director General, Catherine Cesarsky , and the Paranal Director, Roberto Gilmozzi , face the VLT Manager, Massimo Tarenghi at the YEPUN Control Station, right after successful "First Light" for this telescope. (Digital Photo). An aerial image of YEPUN in its enclosure is available as ESO PR Photo 43a/99. The mechanical structure of YEPUN was first pre-assembled at the Ansaldo factory in Milan (Italy) where it served for tests while the other telescopes were erected at Paranal. An early photo ( ESO PR Photo 37/95 ) is available that was obtained during the visit of the ESO Council to Milan in December 1995, cf. ESO PR 18/95. Paranal at sunset ESO Press Photo 22e/00 ESO Press Photo 22e/00 [Preview; JPEG: 400 x 200; 14kb] [Normal; JPEG: 800 x 400; 84kb] [High-Res; JPEG: 4000 x 2000; 4.0Mb] Wide-angle view of the Paranal Observatory at sunset. The last rays of the sun illuminate the telescope enclosures at the top of the mountain and some of the buildings at the Base Camp. The new "residencia" that will provide living space for the Paranal staff and visitors from next year is being constructed to the left. The "First Light" observations with YEPUN began soon after sunset. This photo was obtained in March 2000. Additional photos (September 6, 2000) ESO PR Photo 22f/00 ESO PR Photo 22f/00 [Preview - JPEG: 400 x 487 pix - 224k] [Normal - JPEG: 992 x 1208 pix - 1.3Mb] Caption : ESO PR Photo 22f/00 shows a colour composite of three exposures of a field in the dwarf galaxy NGC 6822 , a member of the Local Group of Galaxies at a distance of about 2 million light-years. They were obtained by YEPUN and the VLT Test Camera at about 23:00 hrs local time on September 3 (03:00 UT on September 4), 2000. The image is based on exposures through three optical filtres: B(lue) (10 min exposure; here rendered as blue), V(isual) (5 min; green) and R(ed) (5 min; red); the seeing was 0.9 - 1.0 arcsec. Individual stars of many different colours (temperatures) are seen. The field measures about 1.5 x 1.5 arcmin 2. Another image of this galaxy was obtained earlier with ANTU and FORS1 , cf. PR Photo 10b/99. ESO Press Photo 22g/00 ESO Press Photo 22g/00 [Preview; JPEG: 400 x 300; 136k] [Full size; JPEG: 1280 x 960; 224k] Most of the crew that put together YEPUN is here photographed after the installation of the M1 mirror cell at the bottom of the mechanical structure (on July 30, 2000). Back row (left to right): Erich Bugueno (Mechanical Supervisor), Erito Flores (Maintenance Technician); front row (left to right) Peter Gray (Mechanical Engineer), German Ehrenfeld (Mechanical Engineer), Mario Tapia (Mechanical Engineer), Christian Juica (kneeling - Mechanical Technician), Nelson Montano (Maintenance Engineer), Hansel Sepulveda (Mechanical Technican) and Roberto Tamai (Mechanical Engineer). (Digital Photo). ESO PR Photos may be reproduced, if credit is given to the European Southern Observatory. 
The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was ESO PR Video Clip 05/00, "Portugal to Accede to ESO" (27 June 2000). Information is also available on the web about other ESO videos.
A generalized Benford's law for JPEG coefficients and its applications in image forensics
NASA Astrophysics Data System (ADS)
Fu, Dongdong; Shi, Yun Q.; Su, Wei
2007-02-01
In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed, including the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG-compressed bitmap images, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
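For reference, the parametric logarithmic law referred to above is usually written in the following form, where N is a normalization constant and s, q are model parameters fitted per image; this is the generic form found in the image-forensics literature rather than a transcription from the paper itself:

    p(x) = N \log_{10}\left(1 + \frac{1}{s + x^{q}}\right), \qquad x = 1, 2, \ldots, 9

Setting s = 0 and q = 1 recovers the classical Benford's law for first digits.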
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
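The blur-plus-noise decomposition described above can be sketched in a few lines, assuming NumPy frames and an equivalent Gaussian blur sigma already estimated from the SSIM fit; SciPy's gaussian_filter stands in for the equivalent Gaussian MTF.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def residual_noise(uncompressed, compressed, sigma):
        """Estimate the spatio-temporal compression noise for one frame: blur the
        pristine frame with the equivalent Gaussian blur, then take the difference
        with the corresponding compressed frame."""
        blurred = gaussian_filter(uncompressed.astype(np.float64), sigma)
        return blurred - compressed.astype(np.float64)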
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology, and makes use of the point-spread function (PSF) from the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets the new method compresses much better than lossless compression, with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
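A very loose sketch of how a correlation-plus-morphology mask might isolate the scientifically relevant pixels before encoding is shown below; the threshold and dilation radius are arbitrary illustration values, and the actual method's use of the PSF and its analysis-preservation checks are not reproduced here.

    import numpy as np
    from scipy.ndimage import correlate, binary_dilation

    def content_mask(frame, psf, thresh=0.5, dilate_iter=3):
        """Correlate the frame with the point-spread function, keep pixels whose
        response exceeds a fraction of the maximum response, and grow the region
        with morphological dilation so that whole particles are retained."""
        k = psf - psf.mean()
        k /= (np.linalg.norm(k) + 1e-12)
        response = correlate(frame.astype(np.float64), k, mode="nearest")
        mask = response > thresh * response.max()
        return binary_dilation(mask, iterations=dilate_iter)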
Interactive Courseware Standards
1992-07-01
music industry standard provides data formats and transmission specifications for musical notation. Joint Photographic Experts Group (JPEG). This...has been used in the music industry for several years, especially for electronically programmable keyboards and 16 instruments. The video compression
Compressed domain ECG biometric with two-lead features
NASA Astrophysics Data System (ADS)
Lee, Wan-Jou; Chang, Wen-Whei
2016-07-01
This study presents a new method to combine ECG biometrics with data compression within a common JPEG2000 framework. We target the two-lead ECG configuration that is routinely used in long-term heart monitoring. Incorporation of compressed-domain biometric techniques enables faster person identification as it by-passes the full decompression. Experiments on public ECG databases demonstrate the validity of the proposed method for biometric identification with high accuracies on both healthy and diseased subjects.
An effective and efficient compression algorithm for ECG signals with irregular periods.
Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son
2006-06-01
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, the algorithm is insensitive to irregular ECG periods. Thus either irregular ECG signals or QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, the algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
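A compact sketch of the 2-D conversion described above (beat cutting at detected R peaks, period sorting, and length equalization) follows; the R-peak locations are assumed to be supplied by an external QRS detector, and SciPy's resample stands in for the paper's length-equalization step.

    import numpy as np
    from scipy.signal import resample

    def ecg_to_image(ecg, r_peaks, width=256):
        """Cut the 1-D ECG into beats at the detected R peaks, sort the beats by
        their original length, and resample each to a common width so the beats
        stack into a 2-D array ready for image (e.g., JPEG2000) compression."""
        beats = [ecg[a:b] for a, b in zip(r_peaks[:-1], r_peaks[1:])]
        beats.sort(key=len)                          # period sorting
        rows = [resample(b, width) for b in beats]   # length equalization
        return np.vstack(rows)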
Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu
2016-12-20
In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding of JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. JPEG2000 (JP2k) turns out to outperform the other methods, achieving the best CR. In the lossy case, JP2k and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index; the biochemical parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2k outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC can be preserved, and the morphological and biochemical parameters can still be within the range of reported values.
2015-03-26
Fourier Analysis and Applications, vol. 14, pp. 838–858, 2008. 11. D. J. Cooke, "A discrete X-ray transform for chromotomographic hyperspectral imaging ... medical imaging, e.g., magnetic resonance imaging (MRI). Since the early 1980s, MRI has granted doctors the ability to distinguish between healthy tissue ... i.e., at most K entries of x are nonzero. In many settings, this is a valid signal model; for example, JPEG2000 exploits the fact that natural images
JPEG XS call for proposals subjective evaluations
NASA Astrophysics Data System (ADS)
McNally, David; Bruylants, Tim; Willème, Alexandre; Ebrahimi, Touradj; Schelkens, Peter; Macq, Benoit
2017-09-01
In March 2016 the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, issued a call for proposals soliciting compression technologies for a low-latency, lightweight and visually transparent video compression scheme. Within the JPEG family of standards, this scheme was denominated JPEG XS. The subjective evaluation of visually lossless compressed video sequences at high resolutions and bit depths poses particular challenges. This paper describes the adopted procedures, the subjective evaluation setup and the evaluation process, and summarizes the results obtained in the context of the JPEG XS standardization process.
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2010-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories and users will have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
Reversible Watermarking Surviving JPEG Compression.
Zain, J; Clarke, M
2005-01-01
This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the Region of Non Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the hash (SHA-256) of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
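A minimal sketch of the hash-embedding idea is shown below (Python, 8-bit grayscale NumPy image). Note that 256 hash bits do not fit in the single LSBs of one 8x8 block, so this sketch spreads the digest over a 16x16 RONI patch; it illustrates only the fragile integrity check, not the reversibility or the JPEG robustness of the actual scheme.

    import hashlib
    import numpy as np

    def embed_hash(img, rows=slice(0, 16), cols=slice(0, 16)):
        """Clear the LSBs of a 16x16 region of non-interest (RONI), hash the whole
        8-bit image in that state, and write the 256 SHA-256 bits into those LSBs.
        Verification repeats the hash with the RONI LSBs cleared and compares it
        with the extracted bits."""
        wm = img.copy()
        wm[rows, cols] &= 0xFE                            # zero the RONI LSBs
        digest = hashlib.sha256(wm.tobytes()).digest()    # 32 bytes = 256 bits
        bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
        roni = wm[rows, cols]
        flat = roni.ravel()
        flat[: bits.size] |= bits
        wm[rows, cols] = flat.reshape(roni.shape)
        return wm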
High bit depth infrared image compression via low bit depth codecs
NASA Astrophysics Data System (ADS)
Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren
2017-08-01
Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16-bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16-bit depth images via 8-bit depth codecs in the following way. First, an input 16-bit depth image is mapped into 8-bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16-bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
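The byte-plane mapping described above is straightforward; a minimal sketch for NumPy uint16 images follows.

    import numpy as np

    def split_16bit(img16):
        """Split a 16-bit image into its most- and least-significant byte planes,
        each of which can be fed to an ordinary 8-bit codec."""
        msb = (img16 >> 8).astype(np.uint8)
        lsb = (img16 & 0xFF).astype(np.uint8)
        return msb, lsb

    def merge_16bit(msb, lsb):
        """Recombine the two decoded 8-bit planes into the 16-bit image."""
        return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)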
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
Steganalysis based on JPEG compatibility
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2001-11-01
In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression using a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend against using images that have been originally stored in the JPEG format as cover images for spatial-domain steganography.
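The core compatibility test can be sketched for a single 8x8 grayscale block as follows, assuming the quantization matrix Q is known and using SciPy's orthonormal 2-D DCT; the full method additionally has to search over candidate quantization matrices and handle rounding and clipping corner cases, which are omitted here.

    import numpy as np
    from scipy.fft import dctn, idctn

    def is_jpeg_compatible(block, Q, tol=1):
        """Return True if an 8x8 pixel block could plausibly have been produced by
        JPEG decompression with quantization matrix Q (up to a rounding tolerance)."""
        coeffs = dctn(block - 128.0, norm="ortho")   # level shift + forward DCT
        quantized = np.round(coeffs / Q)             # nearest quantized representation
        rec = idctn(quantized * Q, norm="ortho") + 128.0
        rec = np.clip(np.round(rec), 0, 255)
        return np.max(np.abs(rec - block)) <= tol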
Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2016-01-01
Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low bit rate JPEG formatted color images may allow for more compressed images while maintaining equivalent quality at a smaller file size or bitrate. For RGB, an image is decomposed into three color bands--red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt as a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually, and objectively, as recorded in the computed PSNR values.
Visualization of JPEG Metadata
NASA Astrophysics Data System (ADS)
Malik Mohamad, Kamaruddin; Deris, Mustafa Mat
There is much more information embedded in a JPEG image than just graphics. Visualization of its metadata would help digital forensic investigators view embedded data, including in corrupted images where no graphics can be displayed, in order to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors and extraction tools are already available, but they mostly focus on visualizing attribute information from JPEG Exif. However, none visualize metadata by consolidating a marker summary, header structure, Huffman tables and quantization tables in a single program. In this paper, metadata visualization is done by developing a program that is able to summarize all existing markers, the header structure, Huffman tables and quantization tables in a JPEG file. The results show that metadata visualization makes it easier to view hidden information within JPEG files.
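As an illustration of the marker-summary part of such a tool, the short sketch below walks a JPEG byte stream and lists marker codes and segment lengths; it stops at the start-of-scan marker and ignores fill bytes and restart markers, so it is only a starting point.

    import struct

    def list_markers(data):
        """Yield (offset, marker, segment_length) for the markers in a JPEG byte
        string. Stops at SOS, since entropy-coded data follows without lengths."""
        assert data[0:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
        pos = 2
        while pos + 4 <= len(data):
            if data[pos] != 0xFF:
                break
            marker = data[pos + 1]
            if marker in (0xD8, 0xD9):            # SOI / EOI carry no payload
                yield pos, marker, 0
                pos += 2
                continue
            (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
            yield pos, marker, length
            if marker == 0xDA:                    # SOS: compressed data follows
                break
            pos += 2 + length

    # usage: for off, m, n in list_markers(open("photo.jpg", "rb").read()):
    #            print(hex(off), hex(m), n)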
Design of a motion JPEG (M/JPEG) adapter card
NASA Astrophysics Data System (ADS)
Lee, D. H.; Sudharsanan, Subramania I.
1994-05-01
In this paper we describe the design of a high performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high quality digital video. The card accepts digital pixels from either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube CL550 JPEG processor that complies with baseline JPEG. We give broad descriptions of the hardware that controls the video interface, the CL550, and the system interface. Some critical design points that enhance the overall performance of the M/JPEG system are pointed out. The control of the adapter card is achieved by interrupt-driven software that runs under DOS. The software performs a variety of tasks that include change of color space (RGB or YUV), change of quantization and Huffman tables, odd and even field control, and some diagnostic operations.
NASA Astrophysics Data System (ADS)
2001-04-01
A Window towards the Distant Universe Summary The Osservatorio Astronomico Capodimonte Deep Field (OACDF) is a multi-colour imaging survey project that is opening a new window towards the distant universe. It is conducted with the ESO Wide Field Imager (WFI) , a 67-million pixel advanced camera attached to the MPG/ESO 2.2-m telescope at the La Silla Observatory (Chile). As a pilot project at the Osservatorio Astronomico di Capodimonte (OAC) [1], the OACDF aims at providing a large photometric database for deep extragalactic studies, with important by-products for galactic and planetary research. Moreover, it also serves to gather experience in the proper and efficient handling of very large data sets, preparing for the arrival of the VLT Survey Telescope (VST) with the 1 x 1 degree 2 OmegaCam facility. PR Photo 15a/01 : Colour composite of the OACDF2 field . PR Photo 15b/01 : Interacting galaxies in the OACDF2 field. PR Photo 15c/01 : Spiral galaxy and nebulous object in the OACDF2 field. PR Photo 15d/01 : A galaxy cluster in the OACDF2 field. PR Photo 15e/01 : Another galaxy cluster in the OACDF2 field. PR Photo 15f/01 : An elliptical galaxy in the OACDF2 field. The Capodimonte Deep Field ESO PR Photo 15a/01 ESO PR Photo 15a/01 [Preview - JPEG: 400 x 426 pix - 73k] [Normal - JPEG: 800 x 851 pix - 736k] [Hi-Res - JPEG: 3000 x 3190 pix - 7.3M] Caption : This three-colour image of about 1/4 of the Capodimonte Deep Field (OACDF) was obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the la Silla Observatory. It covers "OACDF Subfield no. 2 (OACDF2)" with an area of about 35 x 32 arcmin 2 (about the size of the full moon), and it is one of the "deepest" wide-field images ever obtained. Technical information about this photo is available below. With the comparatively few large telescopes available in the world, it is not possible to study the Universe to its outmost limits in all directions. Instead, astronomers try to obtain the most detailed information possible in selected viewing directions, assuming that what they find there is representative for the Universe as a whole. This is the philosophy behind the so-called "deep-field" projects that subject small areas of the sky to intensive observations with different telescopes and methods. The astronomers determine the properties of the objects seen, as well as their distances and are then able to obtain a map of the space within the corresponding cone-of-view (the "pencil beam"). Recent, successful examples of this technique are the "Hubble Deep Field" (cf. ESO PR Photo 26/98 ) and the "Chandra Deep Field" ( ESO PR 05/01 ). In this context, the Capodimonte Deep Field (OACDF) is a pilot research project, now underway at the Osservatorio Astronomico di Capodimonte (OAC) in Napoli (Italy). It is a multi-colour imaging survey performed with the Wide Field Imager (WFI) , a 67-million pixel (8k x 8k) digital camera that is installed at the 2.2-m MPG/ESO Telescope at ESO's La Silla Observatory in Chile. The scientific goal of the OACDF is to provide an important database for subsequent extragalactic, galactic and planetary studies. It will allow the astronomers at OAC - who are involved in the VLT Survey Telescope (VST) project - to gain insight into the processing (and use) of the large data flow from a camera similar to, but four times smaller than the OmegaCam wide-field camera that will be installed at the VST. 
The field selection for the OACDF was based on the following criteria: * There must be no stars brighter than about 9th magnitude in the field, in order to avoid saturation of the CCD detector and effects from straylight in the telescope and camera. No Solar System planets should be near the field during the observations; * It must be located far from the Milky Way plane (at high galactic latitude) in order to reduce the number of galactic stars seen in this direction; * It must be located in the southern sky in order to optimize observing conditions (in particular, the altitude of the field above the horizon), as seen from the La Silla and Paranal sites; * There should be little interstellar material in this direction that may obscure the view towards the distant Universe; * Observations in this field should have been made with the Hubble Space Telescope (HST) that may serve for comparison and calibration purposes. Based on these criteria, the astronomers selected a field measuring about 1 x 1 deg 2 in the southern constellation of Corvus (The Raven). This is now known as the Capodimonte Deep Field (OACDF) . The above photo ( PR Photo 15a/01 ) covers one-quarter of the full field (Subfield No. 2 - OACDF2) - some of the objects seen in this area are shown below in more detail. More than 35,000 objects have been found in this area; the faintest are nearly 100 million fainter than what can be perceived with the unaided eye in the dark sky. Selected objects in the Capodimonte Deep Field ESO PR Photo 15b/01 ESO PR Photo 15b/01 [Preview - JPEG: 400 x 435 pix - 60k] [Normal - JPEG: 800 x 870 pix - 738k] [Hi-Res - JPEG: 3000 x 3261 pix - 5.1M] Caption : Enlargement of the interacting galaxies that are seen in the upper left corner of the OACDF2 field shown in PR Photo 15a/01 . The enlargement covers 1250 x 1130 WFI pixels (1 pixel = 0.24 arcsec), or about 5.0 x 4.5 arcmin 2 in the sky. The lower spiral is itself an interactive double. ESO PR Photo 15c/01 ESO PR Photo 15c/01 [Preview - JPEG: 557 x 400 pix - 93k] [Normal - JPEG: 1113 x 800 pix - 937k] [Hi-Res - JPEG: 3000 x 2156 pix - 4.0M] Caption : Enlargement of a spiral galaxy and a nebulous object in this area. The field shown covers 1250 x 750 pixels, or about 5 x 3 arcmin 2 in the sky. Note the very red objects next to the two bright stars in the lower-right corner. The colours of these objects are consistent with those of spheroidal galaxies at intermediate distances (redshifts). ESO PR Photo 15d/01 ESO PR Photo 15d/01 [Preview - JPEG: 400 x 530 pix - 68k] [Normal - JPEG: 800 x 1060 pix - 870k] [Hi-Res - JPEG: 2768 x 3668 pix - 6.2M] Caption : A further enlargement of a galaxy cluster of which most members are located in the north-east quadrant (upper left) and have a reddish colour. The nebulous object to the upper left is a dwarf galaxy of spheroidal shape. The red object, located near the centre of the field and resembling a double star, is very likely a gravitational lens [2]. Some of the very red, point-like objects in the field may be distant quasars, very-low mass stars or, possibly, relatively nearby brown dwarf stars. The field shown covers 1380 x 1630 pixels, or 5.5 x 6.5 arcmin 2. ESO PR Photo 15e/01 ESO PR Photo 15e/01 [Preview - JPEG: 400 x 418 pix - 56k] [Normal - JPEG: 800 x 835 pix - 700k] [Hi-Res - JPEG: 3000 x 3131 pix - 5.0M] Caption : Enlargement of a moderately distant galaxy cluster in the south-east quadrant (lower left) of the OACDF2 field. 
The field measures 1380 x 1260 pixels, or about 5.5 x 5.0 arcmin 2 in the sky. ESO PR Photo 15f/01 ESO PR Photo 15f/01 [Preview - JPEG: 449 x 400 pix - 68k] [Normal - JPEG: 897 x 800 pix - 799k] [Hi-Res - JPEG: 3000 x 2675 pix - 5.6M] Caption : Enlargement of the elliptical galaxy that is located to the west (right) in the OACDF2 field. The numerous tiny objects surrounding the galaxy may be globular clusters. The fuzzy object on the right edge of the field may be a dwarf spheroidal galaxy. The size of the field is about 6 x 5 arcmin 2. Technical Information about the OACDF Survey The observations for the OACDF project were performed in three different ESO periods (18-22 April 1999, 7-12 March 2000 and 26-30 April 2000). Some 100 Gbyte of raw data were collected during each of the three observing runs. The first OACDF run was done just after the commissioning of the ESO-WFI. The observational strategy was to perform a 1 x 1 deg 2 short-exposure ("shallow") survey and then a 0.5 x 1 deg 2 "deep" survey. The shallow survey was performed in the B, V, R and I broad-band filters. Four adjacent 30 x 30 arcmin 2 fields, together covering a 1 x 1 deg 2 field in the sky, were observed for the shallow survey. Two of these fields were chosen for the 0.5 x 1 deg 2 deep survey; OACDF2 shown above is one of these. The deep survey was performed in the B, V, R broad-bands and in other intermediate-band filters. The OACDF data are fully reduced and the catalogue extraction has started. A two-processor (500 Mhz each) DS20 machine with 100 Gbyte of hard disk, specifically acquired at the OAC for WFI data reduction, was used. The detailed guidelines of the data reduction, as well as the catalogue extraction, are reported in a research paper that will appear in the European research journal Astronomy & Astrophysics . Notes [1]: The team members are: Massimo Capaccioli, Juan M. Alcala', Roberto Silvotti, Magda Arnaboldi, Vincenzo Ripepi, Emanuella Puddu, Massimo Dall'Ora, Giuseppe Longo and Roberto Scaramella . [2]: This is a preliminary result by Juan Alcala', Massimo Capaccioli, Giuseppe Longo, Mikhail Sazhin, Roberto Silvotti and Vincenzo Testa , based on recent observations with the Telescopio Nazionale Galileo (TNG) which show that the spectra of the two objects are identical. Technical information about the photos PR Photo 15a/01 has been obtained by the combination of the B, V, and R stacked images of the OACDF2 field. The total exposure times in the three bands are 2 hours in B and V (12 ditherings of 10 min each were stacked to produce the B and V images) and 3 hours in R (13 ditherings of 15 min each). The mosaic images in the B and V bands were aligned relative to the R-band image and adjusted to a logarithmic intensity scale prior to the combination. The typical seeing was of the order of 1 arcsec in each of the three bands. Preliminary estimates of the three-sigma limiting magnitudes in B, V and R indicate 25.5, 25.0 and 25.0, respectively. More than 35,000 objects are detected above the three-sigma level. PR Photos 15b-f/01 display selected areas of the field shown in PR Photo 15a/01 at the original WFI scale, hereby also demonstrating the enormous amount of information contained in these wide-field images. In all photos, North is up and East is left.
NASA Astrophysics Data System (ADS)
Yang, Keon Ho; Jung, Haijo; Kang, Won-Suk; Jang, Bong Mun; Kim, Joong Il; Han, Dong Hoon; Yoo, Sun-Kook; Yoo, Hyung-Sik; Kim, Hee-Joung
2006-03-01
The wireless mobile service with a high bit rate using CDMA-1X EVDO is now widely used in Korea. Mobile devices are also increasingly being used as a conventional communication mechanism. We have developed a web-based mobile system that communicates patient information and images over CDMA-1X EVDO for emergency diagnosis. It is composed of a mobile web application system using Microsoft Windows Server 2003 and Internet Information Services. A mobile web PACS with a database managing patient information and images was developed using Microsoft Access 2003. The wireless mobile emergency patient information and imaging communication system was developed using Microsoft Visual Studio .NET, and a JPEG 2000 ActiveX control for the PDA phone was developed using Microsoft Embedded Visual C++. CDMA-1X EVDO is used for connections between the mobile web servers and the PDA phone. This system allows fast access to the patient information database, storing both medical images and patient information, anytime and anywhere. In particular, images were compressed into JPEG2000 format and transmitted from the mobile web PACS inside the hospital to a radiologist using a PDA phone located outside the hospital. The system also shows radiological images as well as physiological signal data, including blood pressure, vital signs and so on, in the web browser of the PDA phone, so radiologists can diagnose more effectively. We obtained good results using an RW-6100 PDA phone in the university hospital system of the Sinchon Severance Hospital in Korea.
A comparison of the fractal and JPEG algorithms
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Shahshahani, M.
1991-01-01
A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.
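The two criteria mentioned are simple to compute; for 8-bit images they reduce to the following few lines.

    import numpy as np

    def rmse(a, b):
        """Root mean square error between two images."""
        return np.sqrt(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

    def psnr(a, b, peak=255.0):
        """Peak signal-to-noise ratio in dB for images with the given peak value."""
        e = rmse(a, b)
        return float("inf") if e == 0 else 20.0 * np.log10(peak / e)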
Non-parametric adaptive JPEG fragments carving
NASA Astrophysics Data System (ADS)
Amrouche, Sabrina Cherifa; Salamani, Dalila
2018-04-01
The most challenging JPEG recovery tasks arise when the file header is missing. In this paper we propose to use a two-layer machine learning model to restore headerless JPEG images. We first build a classifier able to identify the structural properties of the images/fragments and then use an autoencoder (AE) to learn the fragment features for header prediction. We define a universal JPEG header, and the remaining free image parameters (height, width) are predicted with a gradient boosting classifier. Our approach resulted in 90% accuracy using the manually defined features and 78% accuracy using the AE features.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper will describe the Mixed Raster Content (MRC) method for compressing compound images containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
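A toy illustration of the basic three-layer MRC imaging model follows (a binary mask selecting foreground text pixels, with separate foreground and background layers handed to text- and image-oriented coders); the simple global threshold used here is a naive stand-in for a real segmenter.

    import numpy as np

    def mrc_layers(gray, threshold=128):
        """Split an 8-bit grayscale compound page into a binary mask plus
        foreground and background layers, following the MRC imaging model.
        Pixels not selected by a layer are filled with the layer mean so the
        encoder has smooth data to compress."""
        mask = gray < threshold                       # dark pixels treated as text
        fg = np.full_like(gray, int(gray[mask].mean()) if mask.any() else 0)
        bg = np.full_like(gray, int(gray[~mask].mean()) if (~mask).any() else 255)
        fg[mask] = gray[mask]
        bg[~mask] = gray[~mask]
        return mask, fg, bg

    # reconstruction: np.where(mask, fg, bg)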
The effect of JPEG compression on automated detection of microaneurysms in retinal images
NASA Astrophysics Data System (ADS)
Cree, M. J.; Jelinek, H. F.
2008-02-01
As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes; this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.
A block-based JPEG-LS compression technique with lossless region of interest
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, owing to its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion, because of the context dependence of the algorithm, and it has a low compression rate compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. We then adopt a block-based image compression technique to limit the range of error diffusion. We provide JPEG-LS lossless compression for the image blocks which include all or part of the region of interest (ROI), and JPEG-LS near-lossless compression for the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
Estimation of color filter array data from JPEG images for improved demosaicking
NASA Astrophysics Data System (ADS)
Feng, Wei; Reeves, Stanley J.
2006-02-01
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
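The iterative least-squares estimation can be sketched as a simple gradient (Landweber-style) loop; forward and transpose below are hypothetical callables implementing the modeled demosaic/colour-transform/decimation/DCT chain and its adjoint, and the fixed step size replaces whatever step rule the paper actually uses.

    import numpy as np

    def estimate_cfa(y, forward, transpose, x0, step=0.1, iters=50):
        """Landweber-style iteration that refines a CFA estimate x so that
        forward(x) approaches the observed (dequantized) JPEG data y."""
        x = x0.copy()
        for _ in range(iters):
            r = y - forward(x)           # residual in the JPEG/DCT domain
            x = x + step * transpose(r)  # gradient step on 0.5 * ||y - A x||^2
        return x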
Mobile healthcare information management utilizing Cloud Computing and Android OS.
Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias
2010-01-01
Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.
Oblivious image watermarking combined with JPEG compression
NASA Astrophysics Data System (ADS)
Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice
2003-06-01
For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
A modified JPEG-LS lossless compression method for remote sensing images
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua
2015-12-01
Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is dividing the image into many strips or blocks and then coding each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of the conventional JPEG-LS.
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2011-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is accessible only from a few repositories and users have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.
Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon
2008-06-01
The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. Visually lossless threshold for the compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
Calderon, Karynna; Dadisman, S.V.; Kindinger, J.L.; Flocks, J.G.; Wiese, D.S.; Kulp, Mark; Penland, Shea; Britsch, L.D.; Brooks, G.R.
2003-01-01
This archive consists of two-dimensional marine seismic reflection profile data collected in the Barataria Basin of southern Louisiana. These data were acquired in May, June, and July of 2000 aboard the R/V G.K. Gilbert. Included here are data in a variety of formats including binary, American Standard Code for Information Interchange (ASCII), Hyper-Text Markup Language (HTML), shapefiles, and Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG) images. Binary data are in Society of Exploration Geophysicists (SEG) SEG-Y format and may be downloaded for further processing or display. Reference maps and GIF images of the profiles may be viewed with a web browser. The Geographic Information Systems (GIS) information provided here is compatible with Environmental Systems Research Institute (ESRI) GIS software.
Wavelet-based scalable L-infinity-oriented compression.
Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter
2006-09-01
Among the different classes of coding techniques proposed in literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression, or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in L(infinity) coding sense.
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG concerning higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
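A minimal sketch of the front end described above (one DWT level, then a block DCT on the LL sub-band split into a DC-matrix and an AC-matrix) is given below. The Haar filter and the 8x8 block size are assumptions; the second DWT applied to the DC-matrix and the Minimize-Matrix-Size coding stage are not reproduced.

```python
import numpy as np
from scipy.fft import dctn

def haar_dwt2(x):
    """One 2-D Haar analysis level; x must have even height and width."""
    x = x.astype(np.float64)
    lo = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical low-pass
    hi = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical high-pass
    LL = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    HL = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    LH = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    HH = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return LL, HL, LH, HH

def dc_ac_split(LL, bs=8):
    """Block DCT over LL (the 8x8 block size is an assumption); the DC value of
    every block forms the DC-matrix, the remaining coefficients the AC-matrix."""
    h, w = (LL.shape[0] // bs) * bs, (LL.shape[1] // bs) * bs
    dc = np.zeros((h // bs, w // bs))
    ac = np.zeros((h, w))
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            blk = dctn(LL[r:r+bs, c:c+bs], norm='ortho')
            dc[r // bs, c // bs] = blk[0, 0]
            blk[0, 0] = 0.0
            ac[r:r+bs, c:c+bs] = blk
    return dc, ac

# Front end of the pipeline: DWT, then DCT on LL. The later stages of the
# paper (second DWT on the DC-matrix, Minimize-Matrix-Size) are omitted.
LL, HL, LH, HH = haar_dwt2(np.random.rand(128, 128) * 255)
dc_matrix, ac_matrix = dc_ac_split(LL)
```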
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
JPEG and wavelet compression of ophthalmic images
NASA Astrophysics Data System (ADS)
Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.
1999-05-01
This study was designed to determine the degree and methods of digital image compression needed to produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed using JPEG and wavelet methods; the results were assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images, (ii) semi-subjectively, by assessing the visibility of blood vessels, and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images, and blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.
Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding
NASA Astrophysics Data System (ADS)
Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin
We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is also less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images, and the results are compared to both recent blind and specific attacks on JPEG embedding. The proposed attack permits more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.
ALMA On the Move - ESO Awards Important Contract for the ALMA Project
NASA Astrophysics Data System (ADS)
2005-12-01
Only two weeks after awarding its largest-ever contract for the procurement of antennas for the Atacama Large Millimeter Array project (ALMA), ESO has signed a contract with Scheuerle Fahrzeugfabrik GmbH, a world-leader in the design and production of custom-built heavy-duty transporters, for the provision of two antenna transporting vehicles. These vehicles are of crucial importance for ALMA. ESO PR Photo 41a/05 ESO PR Photo 41a/05 The ALMA Transporter (Artist's Impression) [Preview - JPEG: 400 x 756 pix - 234k] [Normal - JPEG: 800 x 1512 pix - 700k] [Full Res - JPEG: 1768 x 3265 pix - 2.3M] Caption: Each of the ALMA transporters will be 10 m wide, 4.5 m high and 16 m long. "The timely awarding of this contract is most important to ensure that science operations can commence as planned," said ESO Director General Catherine Cesarsky. "This contract thus marks a further step towards the realization of the ALMA project." "These vehicles will operate in a most unusual environment and must live up to very strict demands regarding performance, reliability and safety. Meeting these requirements is a challenge for us, and we are proud to have been selected by ESO for this task," commented Hans-Jörg Habernegg, President of Scheuerle GmbH. ESO PR Photo 41b/05 ESO PR Photo 41b/05 Signing the Contract [Preview - JPEG: 400 x 572 pix - 234k] [Normal - JPEG: 800 x 1143 pix - 700k] [HiRes - JPEG: 4368 x 3056 pix - 2.3M] Caption: (left to right) Mr Thomas Riek, Vice-President of Scheuerle GmbH, Dr Catherine Cesarsky, ESO Director General and Mr Hans-Jörg Habernegg, President of Scheuerle GmbH. When completed on the high-altitude Chajnantor site in Chile, ALMA is expected to comprise more than 60 antennas, which can be placed in different locations on the plateau but which work together as one giant telescope. Changing the relative positions of the antennas and thus also the configuration of the array allows for different observing modes, comparable to using a zoom lens, offering different degrees of resolution and sky coverage as needed by the astronomers. The ALMA Antenna Transporters allow for moving the antennas between the different pre-defined antenna positions. They will also be used for transporting antennas between the maintenance area at 2900 m elevation and the "high site" at 5000 m above sea level, where the observations are carried out. Given their important functions, both for the scientific work and in transporting high-tech antennas with the required care, the vehicles must live up to very demanding operational requirements. Each transporter has a mass of 150 tonnes and is able to lift and transport antennas of 110 tonnes. They must be able to place the antennas on the docking pads with millimetric precision. At the same time, they must be powerful enough to climb 2000 m reliably and safely with their heavy and valuable load, putting extraordinary demands on the 500 kW diesel engines. This means negotiating a 28 km long high-altitude road with an average slope of 7 %. Finally, as they will be operated at an altitude with significantly reduced oxygen levels, a range of redundant safety devices protect both personnel and equipment from possible mishaps or accidents. The first transporter is scheduled to be delivered in the summer of 2007 to match the delivery of the first antennas to Chajnantor. The ESO contract has a value of approx. 5.5 m Euros.
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The algorithm adaptively selects between CSI-based modified JPEG and standard JPEG for a given target bit rate using the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
A threshold-based fixed predictor for JPEG-LS image compression
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua; Yao, Shoukui
2018-03-01
In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors in the vicinity of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges but also diagonal edges. For certain thresholds, the proposed scheme reduces to other existing schemes, so it can also be regarded as an integration of those schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detector in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictor for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
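For reference, the baseline JPEG-LS fixed predictor (the MED predictor from LOCO-I) that the proposal extends can be written as follows; the threshold-based diagonal test itself is not specified in the abstract and is therefore not sketched.

```python
def med_predict(a, b, c):
    """JPEG-LS fixed predictor (median edge detector) for one pixel.
    a = left neighbour, b = upper neighbour, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)      # edge above or to the left: predict the smaller
    if c <= min(a, b):
        return max(a, b)      # edge in the other direction: predict the larger
    return a + b - c          # smooth region: planar prediction

# Example: a = 100, b = 100, c = 60 suggests an edge, so the prediction is 100.
print(med_predict(100, 100, 60))
```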
Report about the Solar Eclipse on August 11, 1999
NASA Astrophysics Data System (ADS)
1999-08-01
This webpage provides information about the total eclipse on Wednesday, August 11, 1999, as it was seen by ESO staff, mostly at or near the ESO Headquarters in Garching (Bavaria, Germany). The zone of totality was about 108 km wide and the ESO HQ were located only 8 km south of the line of maximum totality. The duration of the phase of totality was about 2 min 17 sec. The weather was quite troublesome in this geographical area. Heavy clouds moved across the sky during the entire event, but there were also some holes in between. Consequently, sites that were only a few kilometres from each other had very different viewing conditions. Some photos and spectra of the eclipsed Sun are displayed below, with short texts about the circumstances under which they were made. Please note that reproduction of pictures on this webpage is only permitted, if the author is mentioned as source. Information made available before the eclipse is available here. Eclipse Impressions at the ESO HQ Photo by Eddy Pomaroli Preparing for the Eclipse Photo: Eddy Pomaroli [JEG: 400 x 239 pix - 116k] [JPEG: 800 x 477 pix - 481k] [JPEG: 3000 x 1789 pix - 3.9M] Photo by Eddy Pomaroli During the 1st Partial Phase Photo: Eddy Pomaroli [JPEG: 400 x 275 pix - 135k] [JPEG: 800 x 549 pix - 434k] [JPEG: 2908 x 1997 pix - 5.9M] Photo by Hamid Mehrgan Heavy Clouds Above Digital Photo: Hamid Mehrgan [JPEG: 400 x 320 pix - 140k] [JPEG: 800 x 640 pix - 540k] [JPEG: 1280 x 1024 pix - 631k] Photo by Olaf Iwert Totality Approaching Digital Photo: Olaf Iwert [JPEG: 400 x 320 pix - 149k] [JPEG: 800 x 640 pix - 380k] [JPEG: 1280 x 1024 pix - 536k] Photo by Olaf Iwert Beginning of Totality Digital Photo: Olaf Iwert [JPEG: 400 x 236 pix - 86k] [JPEG: 800 x 471 pix - 184k] [JPEG: 1280 x 753 pix - 217k] Photo by Olaf Iwert A Happy Eclipse Watcher Digital Photo: Olaf Iwert [JPEG: 400 x 311 pix - 144k] [JPEG: 800 x 622 pix - 333k] [JPEG: 1280 x 995 pix - 644k] ESO HQ Eclipse Video Clip [MPEG-version] ESO HQ Eclipse Video Clip (2425 frames/01:37 min) [MPEG Video; 160x120 pix; 2.2M] [MPEG Video; 320x240 pix; 4.4Mb] [RealMedia; streaming; 33kps] [RealMedia; streaming; 200kps] This Video Clip was prepared from a "reportage" of the event at the ESO HQ that was transmitted in real-time to ESO-Chile via ESO's satellite link. It begins with some sequences of the first partial phase and the eclipse watchers. Clouds move over and the landscape darkens as the phase of totality approaches. The Sun is again visible at the very moment this phase ends. Some further sequences from the second partial phase follow. Produced by Herbert Zodet. Dire Forecasts The weather predictions in the days before the eclipse were not good for Munich and surroundings. A heavy front with rain and thick clouds that completely covered the sky moved across Bavaria the day before and the meteorologists predicted a 20% chance of seeing anything at all. On August 10, it seemed that the chances were best in France and in the western parts of Germany, and much less close to the Alps. This changed to the opposite during the night before the eclipse. Now the main concern in Munich was a weather front approaching from the west - would it reach this area before the eclipse? The better chances were then further east, nearer the Austrian border. Many people travelled back and forth along the German highways, many of which quickly became heavily congested. Preparations About 500 persons, mostly ESO staff with their families and friends, were present at the ESO HQ in the morning of August 11. 
Prior to the eclipse, they received information about the various aspects of solar eclipses and about the specific conditions of this one in the auditorium. Protective glasses were handed out and it was the idea that they would then follow the eclipse from outside. In view of the pessimistic weather forecasts, TV sets had been set up in two large rooms, but in the end most chose to watch the eclipse from the terasse in front of the cafeteria and from the area south of the building. Several telescopes were set up among the trees and on the adjoining field (just harvested). Clouds and Holes It was an unusual solar eclipse experience. Heavy clouds were passing by with sudden rainshowers, but fortunately there were also some holes with blue sky in between. While much of the first partial phase was visible through these, some really heavy clouds moved in a few minutes before the total phase, when the light had begun to fade. They drifted slowly - too slowly! - towards the east and the corona was never seen from the ESO HQ site. From here, the view towards the eclipsed Sun only cleared at the very instant of the second "diamond ring" phenomenon. This was beautiful, however, and evidently took most of the photographers by surprise, so very few, if any, photos were made of this memorable moment. Temperature Curve by Benoit Pirenne Temperature Curve on August 11 [JPEG: 646 x 395 pix - 35k] Measured by Benoit Pirenne - see also his meteorological webpage Nevertheless, the entire experience was fantastic - there were all the expected effects, the darkness, the cool air, the wind and the silence. It was very impressive indeed! And it was certainly a unique day in ESO history! Carolyn Collins Petersen from "Sky & Telescope" participated in the conference at ESO in the days before and watched the eclipse from the "Bürgerplatz" in Garching, about 1.5 km south of the ESO HQ. She managed to see part of the totality phase and filed some dramatic reports at the S&T Eclipse Expedition website. They describe very well the feelings of those in this area! Eclipse Photos Several members of the ESO staff went elsewhere and had more luck with the weather, especially at the moment of totality. Below are some of their impressive pictures. Eclipse Photo by Philippe Duhoux First "Diamond Ring" [JPEG: 400 x 292 pix - 34k] [JPEG: 800 x 583 pix - 144k] [JPEG: 2531 x 1846 pix - 1.3M] Eclipse Photo by Philippe Duhoux Totality [JPEG: 400 x 306 pix - 49k] [JPEG: 800 x 612 pix - 262k] [JPEG: 3039 x 1846 pix - 3.6M] Eclipse Photo by Philippe Duhoux Second "Diamond Ring" [JPEG: 400 x 301 pix - 34k] [JPEG: 800 x 601 pix - 163k] [JPEG: 2905 x 2181 pix - 2.0M] The Corona (Philippe Duhoux) "For the observation of the eclipse, I chose a field on a hill offering a wide view towards the western horizon and located about 10 kilometers north west of Garching." "While the partial phase was mostly cloudy, the sky went clear 3 minutes before the totality and remained so for about 15 minutes. Enough to enjoy the event!" "The images were taken on Agfa CT100 colour slide film with an Olympus OM-20 at the focus of a Maksutov telescope (f = 1000 mm, f/D = 10). The exposure times were automatically set by the camera. During the partial phase, I used an off-axis mask of 40 mm diameter with a mylar filter ND = 3.6, which I removed for the diamond rings and the corona." Note in particular the strong, detached protuberances to the right of the rim, particularly noticeable in the last photo. 
Eclipse Photo by Cyril Cavadore Totality [JPEG: 400 x 360 pix - 45k] [JPEG: 800 x 719 pix - 144k] [JPEG: 908 x 816 pix - 207k] The Corona (Cyril Cavadore) "We (C.Cavadore from ESO and L. Bernasconi and B. Gaillard from Obs. de la Cote d'Azur) took this photo in France at Vouzier (Champagne-Ardennes), between Reims and Nancy. A large blue opening developed in the sky at 10 o'clock and we decided to set up the telescope and the camera at that time. During the partial phase, a lot of clouds passed over, making it hard to focus properly. Nevertheless, 5 min before totality, a deep blue sky opened above us, allowing us to watch it and to take this picture. 5-10 Minutes after the totality, the sky was almost overcast up to the 4th contact". "The image was taken with a 2x2K (14 µm pixels) Thomson "homemade" CCD camera mounted on a CN212 Takahashi (200 mm diameter telescope) with a 1/10.000 neutral filter. The acquisition software set exposure time (2 sec) and took images in a complete automated way, allowing us to observe the eclipse by naked eye or with binoculars. To get as many images as possible during totality, we use binning 2x2 to reduce the readout time to 19 sec. Afterward, one of the best image was flat-fielded and processed with a special algorithm that modelled a fit the continuous component of the corona and then subtracted from the original image. The remaining details were enhanced by unsharp masking and added to the original image. Finally, gaussian histogram equalization was applied". Eclipse Photo by Eddy Pomaroli Second "Diamond Ring" [JPEG: 400 x 438 pix - 129k] [JPEG: 731 x 800 pix - 277k] [JPEG: 1940 x 2123 pix - 2.3M] Diamond Ring at ESO HQ (Eddy Pomaroli) "Despite the clouds, we saw the second "diamond ring" from the ESO HQ. In a sense, we were quite lucky, since the clouds were very heavy during the total phase and we might easily have missed it all!". "I used an old Minolta SRT-101 camera and a teleobjective (450 mm; f/8). The exposure was 1/125 sec on Kodak Elite 100 (pushed to 200 ASA). I had the feeling that the Sun would become visible and had the camera pointed, by good luck in the correct direction, as soon as the cloud moved away". Eclipse Photo by Roland Reiss First Partial Phase [JPEG: 400 x 330 pix - 94k] [JPEG: 800 x 660 pix - 492k] [JPEG: 3000 x 2475 pix - 4.5M] End of First Partial Phase (Roland Reiss) "I observed the eclipse from my home in Garching. The clouds kept moving and this was the last photo I was able to obtain during the first partial phase, before they blocked everything". "The photo is interesting, because it shows two more images of the eclipsed Sun, below the overexposed central part. In one of them, the remaining, narrow crescent is particularly well visible. They are caused by reflections in the camera. I used a Minolta camera and a Fuji colour slide film". Eclipse Spectra Some ESO people went a step further and obtained spectra of the Sun at the time of the eclipse. Eclipse Spectrum by Roland Reiss Coronal Spectrum [JPEG: 400 x 273 pix - 94k] [JPEG: 800 x 546 pix - 492k] [JPEG: 3000 x 2046 pix - 4.5M] Coronal Spectrum (CAOS Group) The Club of Amateurs in Optical Spectroscopy (with Carlos Guirao Sanchez, Gerardo Avila and Jesus Rodriguez) obtained a spectrum of the solar corona from a site in Garching, about 2 km south of the ESO HQ. "This is a plot of the spectrum and the corresponding CCD image that we took during the total eclipse. The main coronal lines are well visible and have been identified in the figure. 
Note in particular one at 6374 Angstrom that was first ascribed to the mysterious substance "Coronium". We now know that it is emitted by iron atoms that have lost nine electrons (Fe X)". The equipment was: * Telescope: Schmidt Cassegrain F/6.3; Diameter: 250 mm * FIASCO Spectrograph: Fibre: 135 micron core diameter F = 100 mm collimator, f = 80 mm camera; Grating: 1300 gr/mm blazed at 500 nm; SBIG ST8E CCD camera; Exposure time was 20 sec. Eclipse Spectrum by Bob Fosbury Chromospheric Spectrum [JPEG: 120 x 549 pix - 20k] Chromospheric and Coronal Spectra (Bob Fosbury) "The 11 August 1999 total solar eclipse was seen from a small farm complex called Wolfersberg in open fields some 20km ESE of the centre of Munich. It was chosen to be within the 2min band of totality but likely to be relatively unpopulated". "There were intermittent views of the Sun between first and second contact with quite a heavy rainshower which stopped 9min before totality. A large clear patch of sky revealed a perfect view of the Sun just 2min before second contact and it remained clear for at least half an hour after third contact". "The principal project was to photograph the spectrum of the chromosphere during totality using a transmission grating in front of a moderate telephoto lens. The desire to do this was stimulated by a view of the 1976 eclipse in Australia when I held the same grating up to the eclipsed Sun and was thrilled by the view of the emission line spectrum. The trick now was to get the exposure right!". "A sequence of 13 H-alpha images was combined into a looping movie. The exposure times were different, but some attempt has been made to equalise the intensities. The last two frames show the low chromosphere and then the photosphere emerging at 3rd contact. The [FeX] coronal line can be seen on the left in the middle of the sequence. I used a Hasselblad camera and Agfa slide film (RSX II 100)".
NASA Astrophysics Data System (ADS)
Sablik, Thomas; Velten, Jörg; Kummert, Anton
2015-03-01
A novel system for automatic privacy protection in digital media based on spectral-domain watermarking and JPEG compression is described in the present paper. In a first step, private areas are detected using a face detection method based on Haar cascades; integral images are used to speed up the calculations, and multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread-spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness, and the performance of the method on tampered images is presented.
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four directional Markov processes applied to the difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are used directly as features for classification. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of the proposed statistical model is demonstrated by large-scale experimental results.
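A sketch of this kind of feature extraction, for the horizontal direction only, is shown below; the threshold value T and the restriction to one direction and one colour plane are simplifications of the four-directional Y/Cb features described.

```python
import numpy as np

def markov_tpm(coeff_2d, T=4):
    """Transition probability matrix of a horizontal JPEG difference 2-D array.
    T is an assumed threshold; only the horizontal direction is sketched here."""
    d = coeff_2d[:, :-1].astype(np.int64) - coeff_2d[:, 1:]   # difference array
    d = np.clip(d, -T, T)                                     # thresholding
    cur, nxt = d[:, :-1].ravel(), d[:, 1:].ravel()            # horizontal pairs
    tpm = np.zeros((2 * T + 1, 2 * T + 1))
    for m in range(-T, T + 1):
        mask = cur == m
        denom = mask.sum()
        if denom == 0:
            continue
        for n in range(-T, T + 1):
            tpm[m + T, n + T] = np.count_nonzero(mask & (nxt == n)) / denom
    return tpm.ravel()        # (2T+1)^2 features, e.g. input to an SVM
```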
Toward privacy-preserving JPEG image retrieval
NASA Astrophysics Data System (ADS)
Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping
2017-07-01
This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
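As a toy illustration of a blockwise, directional local-variance feature and a simple comparison measure, consider the sketch below; the block size, the choice of directions and the L1 distance are assumptions, and the permutation/stream-cipher encryption that makes such features computable on ciphertext is not reproduced.

```python
import numpy as np

def block_variance_features(img, bs=8):
    """Per-block variances of pixel differences in four directions
    (horizontal, vertical and the two diagonals); bs is an assumed block size."""
    img = img.astype(np.float64)
    feats = []
    h, w = (img.shape[0] // bs) * bs, (img.shape[1] // bs) * bs
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            blk = img[r:r+bs, c:c+bs]
            diffs = [blk[:, 1:] - blk[:, :-1],        # horizontal
                     blk[1:, :] - blk[:-1, :],        # vertical
                     blk[1:, 1:] - blk[:-1, :-1],     # main diagonal
                     blk[1:, :-1] - blk[:-1, 1:]]     # anti-diagonal
            feats.append([d.var() for d in diffs])
    return np.asarray(feats)

def similarity(f_query, f_db):
    """Smaller is more similar: mean L1 distance between feature matrices."""
    return np.mean(np.abs(f_query - f_db))
```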
Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O
2009-01-01
We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
1995-02-01
modification of existing JPEG compression and decompression software available from the Independent JPEG Users Group to process CIELAB color images and to use...externally specified Huffman tables. In addition, a conversion program was written to convert CIELAB color space images to red, green, blue color space
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and the data transferred, but the reduced data size must be balanced against the CPU time required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
Boccardi, Marina; Ganzola, Rossana; Bocchetta, Martina; Pievani, Michela; Redolfi, Alberto; Bartzokis, George; Camicioli, Richard; Csernansky, John G.; de Leon, Mony J.; deToledo-Morrell, Leyla; Killiany, Ronald J.; Lehéricy, Stéphane; Pantel, Johannes; Pruessner, Jens C.; Soininen, H.; Watson, Craig; Duchesne, Simon; Jack, Clifford R.; Frisoni, Giovanni B.
2013-01-01
Manual segmentation from magnetic resonance imaging (MR) is the gold standard for evaluating hippocampal atrophy in Alzheimer’s disease (AD). Nonetheless, different segmentation protocols provide up to 2.5-fold volume differences. Here we surveyed the most frequently used segmentation protocols in the AD literature as a preliminary step for international harmonization. The anatomical landmarks (anteriormost and posteriormost slices, superior, inferior, medial, and lateral borders) were identified from 12 published protocols for hippocampal manual segmentation ([Abbreviation] first author, publication year: [B] Bartzokis, 1998; [C] Convit, 1997; [dTM] deToledo-Morrell, 2004; [H] Haller, 1997; [J] Jack, 1994; [K] Killiany, 1993; [L] Lehericy, 1994; [M] Malykhin, 2007; [Pa] Pantel, 2000; [Pr] Pruessner, 2000; [S] Soininen, 1994; [W] Watson, 1992). The hippocampi of one healthy control and one AD patient taken from the 1.5T MR ADNI database were segmented by a single rater according to each protocol. The accuracy of the protocols’ interpretation and translation into practice was checked with lead authors of protocols through individual interactive web conferences. Semantically harmonized landmarks and differences were then extracted, regarding: (a) the posteriormost slice, protocol [B] being the most restrictive, and [H, M, Pa, Pr, S] the most inclusive; (b) inclusion [C, dTM, J, L, M, Pr, W] or exclusion [B, H, K, Pa, S] of alveus/fimbria; (c) separation from the parahippocampal gyrus, [C] being the most restrictive, [B, dTM, H, J, Pa, S] the most inclusive. There were no substantial differences in the definition of the anteriormost slice. This survey will allow us to operationalize differences among protocols into tracing units, measure their impact on the repeatability and diagnostic accuracy of manual hippocampal segmentation, and finally develop a harmonized protocol. PMID:21971451
Research on lossless compression of true color RGB image with low time and space complexity
NASA Astrophysics Data System (ADS)
Pan, ShuLin; Xie, ChengJun; Xu, Lin
2008-12-01
This paper eliminates correlated spatial and spectral redundancy by using a DWT lifting scheme and reduces the complexity of the image by applying an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, which uses an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking, supports LOCO-I, and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method achieves high lossless image compression. Compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT and JPEG-LS, the lossless compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. Running from main memory on a Pentium IV CPU at 2.20 GHz with 256 MB RAM, the coding speed of the proposed coder is about 21 times that of SPIHT, with an efficiency improvement of about 166%, and the decoding speed is about 17 times that of SPIHT, with an efficiency improvement of about 128%.
ERIC Educational Resources Information Center
Huppert, Jonathan D.; Barlow, David H.; Gorman, Jack M.; Shear, M. Katherine; Woods, Scott W.
2006-01-01
This report is a post-hoc, exploratory examination of the relationships among patient motivation, therapist protocol adherence, and panic disorder outcome in patients treated with cognitive behavioral therapy within the context of a randomized clinical trial for the treatment of panic disorder (Barlow, Gorman, Shear, & Woods, 2000). Results…
Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2004-01-01
The invention presents The Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Expert Group (JPEG) standard. The ESAP and the IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time and aperture for low-light capture results in noise amplification, motion blur and reduced depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposed images, image fusion, artifact removal and saturation detection. The algorithm needs no more than a single JPEG macroblock to be kept in memory, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
Enabling Near Real-Time Remote Search for Fast Transient Events with Lossy Data Compression
NASA Astrophysics Data System (ADS)
Vohl, Dany; Pritchard, Tyler; Andreoni, Igor; Cooke, Jeffrey; Meade, Bernard
2017-09-01
We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport data format to enable rapid remote searches for fast transient events as part of the Deeper Wider Faster programme. The Deeper Wider Faster programme uses 20 telescopes from radio to gamma rays to perform simultaneous and rapid-response follow-up searches for fast transient events on millisecond-to-hours timescales. Its search demands impose a set of constraints that is becoming common amongst large collaborations. Here, we focus on the rapid optical data component of the programme, led by the Dark Energy Camera at Cerro Tololo Inter-American Observatory. Each Dark Energy Camera image comprises 70 charge-coupled devices and is saved as a 1.2 gigabyte FITS file. Near real-time data processing and fast transient candidate identification (in minutes, for rapid follow-up triggers on other telescopes) require computational power exceeding what is currently available on-site at Cerro Tololo Inter-American Observatory. In this context, data files need to be transmitted rapidly to a remote location for supercomputing post-processing, source finding, visualisation and analysis. This step in the search process poses a major bottleneck, and reducing the data size helps accommodate faster data transmission. To maximise the gain in transfer time and still achieve the science goals, we opt for lossy data compression, keeping in mind that the raw data is archived and can be evaluated at a later time. We evaluate how lossy JPEG2000 compression affects the process of finding transients, and find only a negligible effect for compression ratios up to 25:1. We also find a linear relation between compression ratio and the mean estimated data transmission speed-up factor. Adding highly customised compression and decompression steps to the science pipeline considerably reduces the transmission time, validating its introduction to the Deeper Wider Faster programme science pipeline and enabling science that was otherwise too difficult with current technology.
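The reported linear relation between compression ratio and transfer speed-up follows directly from the file-size reduction; a back-of-the-envelope sketch is given below, where the 1.2 GB frame size comes from the abstract and the link speed is an illustrative assumption.

```python
# Back-of-the-envelope transfer-time estimate. The 1.2 GB frame size comes
# from the abstract; the link speed and zero protocol overhead are assumptions.
RAW_BYTES = 1.2e9          # one DECam exposure as a FITS file
LINK_BPS = 100e6 / 8       # assumed 100 Mbit/s effective path, in bytes/s

for ratio in (1, 5, 10, 25):
    t = RAW_BYTES / ratio / LINK_BPS
    print(f"{ratio:>2}:1  ->  {t:7.1f} s   (speed-up x{ratio})")
```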
Overview of the JPEG XS objective evaluation procedures
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Richter, Thomas; Rosewarne, Chris; Macq, Benoit
2017-09-01
JPEG XS is a standardization activity conducted by the Joint Photographic Experts Group (JPEG), formally the ISO/IEC SC29 WG1 group, that aims at standardizing a low-latency, lightweight and visually lossless video compression scheme. The codec is intended for applications where image sequences would otherwise be transmitted or stored in uncompressed form, such as live production (through SDI or IP transport), display links, or frame buffers. Support for compression ratios ranging from 2:1 to 6:1 allows significant bandwidth and power reduction for signal propagation. This paper describes the objective quality assessment procedures conducted as part of the JPEG XS standardization activity. First, it discusses the objective part of the experiments that led to the technology selection during the 73rd WG1 meeting in late 2016. This assessment consists of PSNR measurements after single and multiple compression-decompression cycles at various compression ratios. After this assessment phase, two of the six responses to the CfP were selected and merged to form the first JPEG XS test model (XSM). The paper then describes the core experiments (CEs) conducted so far on the XSM. These experiments are intended to evaluate its performance in more challenging scenarios, such as the insertion of picture overlays and robustness to frame editing, to assess the impact of the different algorithmic choices, and to measure the XSM performance using the HDR-VDP metric.
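Multi-generation PSNR measurements of this kind can be reproduced with any available codec; the sketch below uses Pillow's baseline JPEG purely as a stand-in, since a JPEG XS implementation is not assumed to be installed.

```python
# PSNR after N compression/decompression generations, using baseline JPEG from
# Pillow as a stand-in codec (a JPEG XS encoder is not assumed to be available).
import io
import numpy as np
from PIL import Image

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def multi_generation_psnr(img_u8, generations=5, quality=90):
    ref = img_u8.copy()
    cur = img_u8.copy()
    scores = []
    for _ in range(generations):
        buf = io.BytesIO()
        Image.fromarray(cur).save(buf, format='JPEG', quality=quality)
        cur = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
        scores.append(psnr(ref, cur))     # PSNR against the original each cycle
    return scores
```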
Applications of the JPEG standard in a medical environment
NASA Astrophysics Data System (ADS)
Wittenberg, Ulrich
1993-10-01
JPEG is a very versatile image coding and compression standard for single images. Medical images place higher demands on image quality and precision than the usual 'pretty pictures'. In this paper, the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons, the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12-bit precision allowed for the DCT modes. The performance of the spatial predictors is investigated. From the user's point of view, the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective time one has to wait for it and thus reduce the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care, because repeated lossy coding and decoding degrades image quality; the amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as the creation date or details of the imaging modality. It will therefore be an embedded coding format within standards such as TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to match the requirements of the medical community.
Improved JPEG anti-forensics with better image visual quality and forensic undetectability.
Singh, Gurinder; Singh, Kulbir
2017-08-01
There is an immediate need to validate the authenticity of digital images due to the availability of powerful image processing tools that can easily manipulate the digital image information without leaving any traces. The digital image forensics most often employs the tampering detectors based on JPEG compression. Therefore, to evaluate the competency of the JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by the JPEG compression in both spatial and DCT domain. In the proposed framework, the grainy noise left by the perceptual histogram smoothing in DCT domain can be reduced significantly by applying the proposed de-noising operation. Two types of denoising algorithms are proposed, one is based on the constrained minimization problem of total variation of energy and other on the normalized weighted function. Subsequently, an improved TV based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to its standard position. The experimental results show that the proposed anti-forensic approaches outperform the existing state-of-the-art techniques in achieving enhanced tradeoff between image visual quality and forensic undetectability, but with high computational cost. Copyright © 2017 Elsevier B.V. All rights reserved.
Switching theory-based steganographic system for JPEG images
NASA Astrophysics Data System (ADS)
Cherukuri, Ravindranath C.; Agaian, Sos S.
2007-04-01
Cellular communications constitute a significant portion of the global telecommunications market, and the need for secure communication over mobile platforms has therefore increased exponentially. Steganography, the art of hiding critical data in an innocuous signal, provides an answer to this need. JPEG is one of the most commonly used formats for storing and transmitting images on the web, and pictures captured with mobile cameras are mostly in JPEG format. In this article, we introduce a switching-theory-based steganographic system for JPEG images that is applicable to mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective for a subset of these coefficients, but prove ineffective when applied to all of them. Therefore, we propose an approach that handles each set of AC coefficients within a different framework, thus enhancing performance. The proposed system offers high capacity and embedding efficiency while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates improved embedding capacity over existing algorithms while maintaining high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
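As a toy baseline for hiding data in quantized AC coefficients (the starting point that such switching schemes improve on), the sketch below flips the least significant bits of nonzero AC terms; it operates on a plain array of quantized blocks rather than a real JPEG bit stream, and the block-wise switching framework of the paper is not reproduced.

```python
import numpy as np

def embed_lsb_ac(quant_blocks, bits):
    """Toy baseline: hide bits in the LSBs of nonzero quantized AC coefficients.
    quant_blocks: array of shape (n_blocks, 8, 8) of quantized DCT values.
    Note: turning a +/-1 coefficient into the opposite parity can collide with
    zero (shrinkage); that complication is ignored in this toy sketch."""
    out = quant_blocks.astype(np.int32).copy()
    idx = 0
    for blk in out:
        flat = blk.reshape(-1)
        for k in range(1, 64):                 # skip the DC term at k == 0
            if idx >= len(bits):
                return out
            v = flat[k]
            if v == 0:                         # zeros are left untouched
                continue
            sign = 1 if v > 0 else -1
            flat[k] = sign * ((abs(v) & ~1) | bits[idx])
            idx += 1
    return out
```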
Providing Internet Access to High-Resolution Lunar Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
Grid-based implementation of XDS-I as part of image-enabled EHR for regional healthcare in Shanghai.
Zhang, Jianguo; Zhang, Kai; Yang, Yuanyuan; Sun, Jianyong; Ling, Tonghui; Wang, Guangrong; Ling, Yun; Peng, Derong
2011-03-01
Due to the rapid growth of Shanghai city to 20 million residents, the balance between healthcare supply and demand has become an important issue. The local government hopes to ameliorate this problem by developing an image-enabled electronic healthcare record (EHR) sharing mechanism between certain hospitals. This system is designed to enable healthcare collaboration and reduce healthcare costs by allowing review of prior examination data obtained at other hospitals. Here, we present a design method and implementation solution of image-enabled EHRs (i-EHRs) and describe the implementation of i-EHRs in four hospitals and one regional healthcare information center, as well as their preliminary operating results. We designed the i-EHRs with service-oriented architecture (SOA) and combined the grid-based image management and distribution capability, which are compliant with IHE XDS-I integration profile. There are seven major components and common services included in the i-EHRs. In order to achieve quick response for image retrieving in low-bandwidth network environments, we use a JPEG2000 interactive protocol and progressive display technique to transmit images from a Grid Agent as Imaging Source Actor to the PACS workstation as Imaging Consumer Actor. The first phase of pilot testing of our image-enabled EHR was implemented in the Zhabei district of Shanghai for imaging document sharing and collaborative diagnostic purposes. The pilot testing began in October 2009; there have been more than 50 examinations daily transferred between the City North Hospital and the three community hospitals for collaborative diagnosis. The feedback from users at all hospitals is very positive, with respondents stating the system to be easy to use and reporting no interference with their normal radiology diagnostic operation. The i-EHR system can provide event-driven automatic image delivery for collaborative imaging diagnosis across multiple hospitals based on work flow requirements. This project demonstrated that the grid-based implementation of IHE XDS-I for image-enabled EHR could scale effectively to serve a regional healthcare solution with collaborative imaging services. The feedback from users of community hospitals and large hospital is very positive.
77 FR 59692 - 2014 Diversity Immigrant Visa Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...
History of the Universe Poster
History of the Universe Poster You are free to use these images if you give credit to: Particle Data Group at Lawrence Berkeley National Lab. New Version (2014) History of the Universe Poster Download: JPEG version PDF version Old Version (2013) History of the Universe Poster Download: JPEG version
Image Size Variation Influence on Corrupted and Non-viewable BMP Image
NASA Astrophysics Data System (ADS)
Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah
2017-08-01
Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG files are lossy and compress well, which speeds up transmission. However, corrupted JPEG images are hard to recover because of the complexity of determining the corruption point. Bitmap (BMP) images, by contrast, are preferred in image processing over other formats because a BMP file contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file is first converted into BMP format. Nevertheless, many factors can influence the corruption of a BMP image, such as changes to the declared image size that make the file non-viewable. The experiments in this paper indicate that the size of a BMP file influences the image itself under three conditions: deletion, replacement and insertion. From the experiments, we learnt that by correcting the file size it is possible to produce a viewable, if partial, file, which can then be investigated further to identify the corruption point.
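The size declared in a BMP header can be checked against, and if necessary corrected to, the actual file length, which is the kind of repair the experiment relies on; a minimal sketch follows.

```python
import struct

def check_bmp_size(path, fix=False):
    """Compare the file size declared in the BMP header (bytes 2-5, little
    endian) with the actual size on disk; optionally rewrite the field."""
    with open(path, 'rb') as f:
        data = bytearray(f.read())
    if data[:2] != b'BM':
        raise ValueError('not a BMP file')
    declared = struct.unpack_from('<I', data, 2)[0]
    actual = len(data)
    if declared != actual and fix:
        struct.pack_into('<I', data, 2, actual)   # patch the size field in place
        with open(path, 'wb') as f:
            f.write(data)
    return declared, actual
```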
Clunie, David A; Gebow, Dan
2015-01-01
Deidentification of medical images requires attention to both the header information and the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified is stored in a compressed form, traditionally it is decompressed, identifying text is redacted, and, if necessary, the pixel data is recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently; hence, redaction can be selectively confined to only those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images are described. The process can be applied either to standalone JPEG images or to JPEG bit streams encapsulated in other formats, which, in the case of medical images, is usually DICOM.
An FPGA-Based People Detection System
NASA Astrophysics Data System (ADS)
Nair, Vinod; Laprise, Pierre-Olivier; Clark, James J.
2005-12-01
This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about[InlineEquation not available: see fulltext.] frames per second on a Virtex-II 2V1000 using a MicroBlaze processor running at[InlineEquation not available: see fulltext.], communicating with dedicated hardware over FSL links.
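The partial inverse DCT mentioned above can be sketched as follows. This is an illustrative approximation rather than the authors' FPGA implementation: it reconstructs only a reduced-resolution k×k version of an 8×8 block from the block's k×k low-frequency coefficients (assuming orthonormal DCT conventions), which is cheaper than a full 8×8 inverse transform.

```python
import numpy as np
from scipy.fftpack import idct

def partial_idct(coeffs_8x8: np.ndarray, k: int = 4) -> np.ndarray:
    """Return a k x k approximation of an 8x8 block from its low-frequency DCT coefficients."""
    low = coeffs_8x8[:k, :k] * (k / 8.0)   # rescale so mean intensity is approximately preserved
    return idct(idct(low, axis=0, norm="ortho"), axis=1, norm="ortho")
```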
Embedding intensity image into a binary hologram with strong noise resistant capability
NASA Astrophysics Data System (ADS)
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by an error diffusion method or a bit truncation coding method. However, the fidelity of the watermark image retrieved from a binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to that of the state-of-the-art work reported.
High-quality JPEG compression history detection for fake uncompressed images
NASA Astrophysics Data System (ADS)
Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan
2017-05-01
Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
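The quality-sweep idea can be illustrated with a rough sketch. For simplicity this uses the fraction of changed 4×4 blocks as a stand-in for the tetrolet covering index described above; a local minimum of the resulting curve over quality factors hints at a prior compression at that quality. Pillow is assumed for JPEG recompression.

```python
import io
import numpy as np
from PIL import Image

def p_curve(image: Image.Image, qualities=range(50, 101)):
    """Recompress at each quality and record the fraction of 4x4 blocks that change."""
    ref = np.asarray(image.convert("L"), dtype=np.int16)
    h, w = (ref.shape[0] // 4) * 4, (ref.shape[1] // 4) * 4
    ref = ref[:h, :w]
    curve = []
    for q in qualities:
        buf = io.BytesIO()
        image.convert("L").save(buf, format="JPEG", quality=q)
        rec = np.asarray(Image.open(buf), dtype=np.int16)[:h, :w]
        changed = np.abs(ref - rec).reshape(h // 4, 4, w // 4, 4).max(axis=(1, 3)) > 0
        curve.append((q, float(changed.mean())))
    return curve  # look for a local minimum to estimate the original quality factor
```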
Kim, J H; Kang, S W; Kim, J-r; Chang, Y S
2014-01-01
Purpose: To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods: Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. Horizontal OCT scans centered at the center of the fovea were obtained using spectral-domain OCT. The images were exported to Tagged Image File Format (TIFF) and to 100, 75, 50, 25 and 10% quality Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluid was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results: The prevalence of subretinal and intraretinal fluid was comparable regardless of the degree of compression. However, the chorio-scleral interface was not clearly identified in many images with a high degree of compression. In images with 25 and 10% JPEG quality, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions: In OCT images of eyes with AMD, 50% JPEG quality would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012
An interactive Bayesian geostatistical inverse protocol for hydraulic tomography
Fienen, Michael N.; Clemo, Tom; Kitanidis, Peter K.
2008-01-01
Hydraulic tomography is a powerful technique for characterizing heterogeneous hydrogeologic parameters. An explicit trade-off between characterization based on measurement misfit and subjective characterization using prior information is presented. We apply a Bayesian geostatistical inverse approach that is well suited to accommodate a flexible model with the level of complexity driven by the data and explicitly considering uncertainty. Prior information is incorporated through the selection of a parameter covariance model characterizing continuity and providing stability. Often, discontinuities in the parameter field, typically caused by geologic contacts between contrasting lithologic units, necessitate subdivision into zones across which there is no correlation among hydraulic parameters. We propose an interactive protocol in which zonation candidates are implied from the data and are evaluated using cross validation and expert knowledge. Uncertainty introduced by limited knowledge of dynamic regional conditions is mitigated by using drawdown rather than native head values. An adjoint state formulation of MODFLOW-2000 is used to calculate sensitivities which are used both for the solution to the inverse problem and to guide protocol decisions. The protocol is tested using synthetic two-dimensional steady state examples in which the wells are located at the edge of the region of interest.
Fragmentation Point Detection of JPEG Images at DHT Using Validator
NASA Astrophysics Data System (ADS)
Mohamad, Kamaruddin Malik; Deris, Mustafa Mat
File carving is an important, practical technique for data recovery in digital forensics investigations and is particularly useful when filesystem metadata is unavailable or damaged. Research on the reassembly of JPEG files with RST markers that are fragmented within the scan area has been done before; however, fragmentation within the Define Huffman Table (DHT) segment is yet to be resolved. This paper analyzes fragmentation within the DHT area and lists all the fragmentation possibilities. Two main contributions are made. First, three fragmentation points within the DHT area are identified. Second, several novel validators are proposed to detect these fragmentations. The results obtained from tests on manually fragmented JPEG files show that all three fragmentation points within the DHT are successfully detected using the validators.
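A minimal sketch of the kind of structural check involved is shown below. It is not the authors' validators: it simply walks the JPEG marker stream and, for each DHT (0xFFC4) segment, verifies that every Huffman table's 16 code-length counts and symbol list fit inside the declared segment length, reporting the offset where the structure first breaks.

```python
import struct

def validate_dht_payload(seg: bytes):
    """Return (True, None) if every Huffman table fits in the segment, else (False, fault offset)."""
    p = 0
    while p < len(seg):
        if p + 17 > len(seg):
            return False, p                      # truncated table-class byte or count list
        nsymbols = sum(seg[p + 1:p + 17])        # 16 code-length counts follow the class/ID byte
        if p + 17 + nsymbols > len(seg):
            return False, p + 17                 # symbol list runs past the segment end
        p += 17 + nsymbols
    return True, None

def check_dht_segments(jpeg_bytes: bytes):
    """Scan marker segments up to SOS and validate every DHT segment found."""
    findings, i = [], 2                          # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                       # SOS: entropy-coded data begins, stop here
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xC4:                       # DHT segment
            findings.append((i,) + validate_dht_payload(jpeg_bytes[i + 4:i + 2 + length]))
        i += 2 + length
    return findings                              # list of (segment offset, ok, fault offset)
```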
IIPImage: Large-image visualization
NASA Astrophysics Data System (ADS)
Pillay, Ruven
2014-08-01
IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...
Building a Steganography Program Including How to Load, Process, and Save JPEG and PNG Files in Java
ERIC Educational Resources Information Center
Courtney, Mary F.; Stix, Allen
2006-01-01
Instructors teaching beginning programming classes are often interested in exercises that involve processing photographs (i.e., files stored as .jpeg). They may wish to offer activities such as color inversion, the color manipulation effects achieved with pixel thresholding, or steganography, all of which Stevenson et al. [4] assert are sought by…
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach is demonstrated here on the recently proposed HDPhoto format [14] introduced by Microsoft and an SSIM-tuned [17] version of it by one of the authors. We compare these two implementations with JPEG [1] in two variations and with a visually and PSNR-optimal JPEG2000 [13] implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.
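As a concrete illustration of metric-based evaluation in this spirit, the short sketch below scores a decoded image against its reference with PSNR and (single-scale) SSIM. It is a generic example, not the AIC test harness; the file names are placeholders and scikit-image and Pillow are assumed to be available.

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Load the reference and the decoded result as 8-bit grayscale arrays.
reference = np.asarray(Image.open("reference.png").convert("L"), dtype=np.uint8)
decoded = np.asarray(Image.open("decoded.png").convert("L"), dtype=np.uint8)

psnr = peak_signal_noise_ratio(reference, decoded, data_range=255)
ssim = structural_similarity(reference, decoded, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```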
Interband coding extension of the new lossless JPEG standard
NASA Astrophysics Data System (ADS)
Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.
1997-01-01
Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experiments performed by the authors seem to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity, at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible at a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to its basic architecture, retaining its essential simplicity.
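For reference, the intra-band predictor of the JPEG-LS baseline mentioned above is the median edge detection (MED) predictor from LOCO-I; a minimal sketch is given below (this is the public baseline predictor, not the authors' proposed inter-band extension). Here a, b and c are the reconstructed left, upper and upper-left neighbours of the current pixel.

```python
def med_predict(a: int, b: int, c: int) -> int:
    """JPEG-LS median edge detection (MED) predictor for one pixel."""
    if c >= max(a, b):
        return min(a, b)   # likely edge: take the smaller of the left/upper neighbours
    if c <= min(a, b):
        return max(a, b)   # likely edge in the other direction: take the larger neighbour
    return a + b - c       # smooth region: planar (gradient) prediction
```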
Parallel design of JPEG-LS encoder on graphics processing units
NASA Astrophysics Data System (ADS)
Duan, Hao; Fang, Yong; Huang, Bormin
2012-01-01
With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we obtain the best GPU performance, with a 26.3x speedup over the original CPU code.
Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise
NASA Astrophysics Data System (ADS)
Wang, Wei; Dong, Jing; Tan, Tieniu
With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region exhibits stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium- and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is applied to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.
SEMG signal compression based on two-dimensional techniques.
de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino
2016-04-18
Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques, before the two-dimensional encoding procedure, in order to provide a suitable data organization, whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, which is based on a recurrent pattern matching algorithm called multidimensional multiscale parser (MMP). The mentioned encoder was modified, in order to efficiently work with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named as segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced, the percentage difference sorting (PDS) algorithm is employed, with different image compressors, and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records, acquired in laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference [Formula: see text] compression factor figures, for low and high compression factors, respectively. Besides, regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes, for low compression factors, the combination between SbS and HEVC proved to be competitive, for high compression factors, and JPEG2000, combined with PDS, provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference [Formula: see text] compression factor. The proposed schemes are effective and, specifically, the modified MMP algorithm can be considered as an interesting alternative for isometric signals, regarding traditional SEMG encoders. Besides, the approach based on off-the-shelf image encoders has the potential of fast implementation and dissemination, given that many embedded systems may already have such encoders available, in the underlying hardware/software architecture.
NASA Astrophysics Data System (ADS)
Lopez, Alejandro; Noe, Miquel; Fernandez, Gabriel
2004-10-01
The GMF4iTV project (Generic Media Framework for Interactive Television) is an IST European project that consists of an end-to-end broadcasting platform providing interactivity on heterogeneous multimedia devices such as set-top boxes and PCs according to the Multimedia Home Platform (MHP) standard from DVB. This platform allows content providers to create enhanced audiovisual content with a degree of interactivity at the level of moving objects or shot changes in a video. The end user is then able to interact with moving objects from the video or with individual shots, allowing the enjoyment of additional content associated with them (MHP applications, HTML pages, JPEG, MPEG-4 files, etc.). This paper focuses on the issues related to metadata and content transmission, synchronization, signaling and bitrate allocation in the GMF4iTV project.
Adaptive intercolor error prediction coder for lossless color (RGB) picture compression
NASA Astrophysics Data System (ADS)
Mann, Y.; Peretz, Y.; Mitchell, Harvey B.
2001-09-01
Most current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes of an input color picture. To improve compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such method is intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
The traditional maximal lactate steady state test versus the 5 × 2000 m test.
Legaz-Arrese, A; Carranza-García, L E; Serrano-Ostáriz, E; González-Ravé, J M; Terrados, N
2011-11-01
Here, we compared the maximal lactate steady state velocity (vMLSS) estimated from a single-visit protocol (v5×2000) to the traditional multi-day protocol (vMLSS). Furthermore, we determined whether there was a lactate steady state during the time limits (Tlim) at vMLSS or v5×2000. Eight runners completed a half marathon (HM), the traditional protocol to determine the vMLSS and the 5×2000 m test in a randomised order, and a Tlim at vMLSS and at v5×2000 in a randomised order. The vMLSS (13.56±0.90 km·h⁻¹) was higher than the v5×2000 (12.93±0.90 km·h⁻¹, p=0.001) and comparable to the vHM (13.34±0.75 km·h⁻¹). The vMLSS (r=0.83) and the v5×2000 (r=0.91) were associated with the vHM but were not indicative of the competition pace. The Tlim at vMLSS (64±15 min) was lower than the Tlim at v5×2000 (94±21 min) and the HM time (95±5 min). In both Tlim, lactate was lower at 45 min than upon finishing the effort and was predictive of its duration (p<0.05). Our results indicate that the 5×2000 m test can be equally useful to assess runners as the traditional MLSS protocol and that there is no lactate steady state during the Tlim at vMLSS or at v5×2000. © Georg Thieme Verlag KG Stuttgart · New York.
Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications
2013-06-01
Standards (http://www.iso.org) JIS Japanese Industrial Standard JPEG Joint Photographic Experts Group (digital image format; http://www.jpeg.org) LED...Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry. See Appendix A for full details. Reed-Solomon Error...eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thusly, digital cinema cameras are more suitable
A new JPEG-based steganographic algorithm for mobile devices
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.
2006-05-01
Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is becoming a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity which is comparable to F5 for certain cover images.
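The general idea of coefficient-domain embedding can be illustrated with a deliberately simplified sketch. This is not the authors' switching technique: it merely hides message bits in the least significant bits of non-zero, non-DC quantized DCT coefficients of a single 8×8 block, which conveys where such schemes operate in the JPEG pipeline.

```python
import numpy as np

def embed_bits(coeff_block: np.ndarray, bits) -> np.ndarray:
    """Hide bits in the LSBs of non-zero AC coefficients of one quantized 8x8 block (in place)."""
    flat = coeff_block.reshape(-1)
    bit_iter = iter(bits)
    for idx in range(1, flat.size):           # index 0 is the DC coefficient, left untouched
        c = int(flat[idx])
        if c == 0:
            continue                          # zeros carry run-length information, do not modify
        try:
            b = next(bit_iter)
        except StopIteration:
            break
        flat[idx] = (c & ~1) | (b & 1)        # overwrite the least significant bit
    return coeff_block
```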
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
Vulnerability Analysis of HD Photo Image Viewer Applications
2007-09-01
...renamed to HD Photo in November of 2006, is being touted as the successor to the ubiquitous JPEG image format, as well as the eventual de facto standard in the digital photography market. With massive efforts... associated state-of-the-art compression algorithm "specifically designed [for] all types of continuous tone photographic" images [HDPhotoFeatureSpec
A new security solution to JPEG using hyper-chaotic system and modified zigzag scan coding
NASA Astrophysics Data System (ADS)
Ji, Xiao-yong; Bai, Sen; Guo, Yu; Guo, Hui
2015-05-01
Though JPEG is an excellent image compression standard, it does not provide any security. Thus, a security solution for JPEG was proposed by Zhang et al. (2014). However, there are some flaws in Zhang's scheme, and in this paper we propose a new scheme based on a discrete hyper-chaotic system and modified zigzag scan coding. By shuffling the identifiers of the zigzag scan encoded sequence with a hyper-chaotic sequence and encrypting, in the zigzag scan encoded domain, certain coefficients which have little relationship with the correlation of the plain image, we achieve high compression performance and robust security simultaneously. Meanwhile, we present and analyze the flaws in Zhang's scheme through theoretical analysis and experimental verification, and give comparisons between our scheme and Zhang's. Simulation results verify that our method has better performance in security and efficiency.
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
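A minimal sketch of the block-adaptive quantization this extension permits is given below; it is a generic illustration (not the proposed perceptual optimization itself), showing how a per-block multiplier scales the base quantization matrix before the DCT coefficients of that block are quantized.

```python
import numpy as np

def quantize_block(dct_block: np.ndarray, base_table: np.ndarray, multiplier: float) -> np.ndarray:
    """Quantize one 8x8 block of DCT coefficients with a per-block scaled quantization matrix."""
    q = np.maximum(np.rint(base_table * multiplier), 1.0)   # scaled table, entries kept >= 1
    return np.rint(dct_block / q).astype(np.int32)
```

The perceptual optimization described in the abstract then amounts to searching for the multiplier of each block that makes the masked (perceptual) error as flat as possible across the image.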
Lossless Compression of JPEG Coded Photo Collections.
Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng
2016-04-06
The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS
NASA Astrophysics Data System (ADS)
Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian
2017-09-01
At its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, its authors describe the design principles of the codec, provide a high-level overview of the encoder and decoder chain, and present evaluation results on the test corpus selected by the JPEG committee.
Next VLT Instrument Ready for the Astronomers
NASA Astrophysics Data System (ADS)
2000-02-01
FORS2 Commissioning Period Successfully Terminated The commissioning of the FORS2 multi-mode astronomical instrument at KUEYEN , the second FOcal Reducer/low dispersion Spectrograph at the ESO Very Large Telescope, was successfully finished today. This important work - that may be likened with the test driving of a new car model - took place during two periods, from October 22 to November 21, 1999, and January 22 to February 8, 2000. The overall goal was to thoroughly test the functioning of the new instrument, its conformity to specifications and to optimize its operation at the telescope. FORS2 is now ready to be handed over to the astronomers on April 1, 2000. Observing time for a six-month period until October 1 has already been allocated to a large number of research programmes. Two of the images that were obtained with FORS2 during the commissioning period are shown here. An early report about this instrument is available as ESO PR 17/99. The many modes of FORS2 The FORS Commissioning Team carried out a comprehensive test programme for all observing modes. These tests were done with "observation blocks (OBs)" that describe the set-up of the instrument and telescope for each exposure in all details, e.g., position in the sky of the object to be observed, filters, exposure time, etc.. Whenever an OB is "activated" from the control console, the corresponding observation is automatically performed. Additional information about the VLT Data Flow System is available in ESO PR 10/99. The FORS2 observing modes include direct imaging, long-slit and multi-object spectroscopy, exactly as in its twin, FORS1 at ANTU . In addition, FORS2 contains the "Mask Exchange Unit" , a motorized magazine that holds 10 masks made of thin metal plates into which the slits are cut by means of a laser. The advantage of this particular observing method is that more spectra (of more objects) can be taken with a single exposure (up to approximately 80) and that the shape of the slits can be adapted to the shape of the objects, thus increasing the scientific return. Results obtained so far look very promising. To increase further the scientific power of the FORS2 instrument in the spectroscopic mode, a number of new optical dispersion elements ("grisms", i.e., a combination of a grating and a glass prism) have been added. They give the scientists a greater choice of spectral resolution and wavelength range. Another mode that is new to FORS2 is the high time resolution mode. It was demonstrated with the Crab pulsar, cf. ESO PR 17/99 and promises very interesting scientific returns. Images from the FORS2 Commissioning Phase The two composite images shown below were obtained during the FORS2 commissioning work. They are based on three exposures through different optical broadband filtres (B: 429 nm central wavelength; 88 nm FWHM (Full Width at Half Maximum), V: 554/111 nm, R: 655/165 nm). All were taken with the 2048 x 2048 pixel 2 CCD detector with a field of view of 6.8 x 6.8 arcmin 2 ; each pixel measures 24 µm square. They were flatfield corrected and bias subtracted, scaled in intensity and some cosmetic cleaning was performed, e.g. removal of bad columns on the CCD. North is up and East is left. Tarantula Nebula in the Large Magellanic Cloud ESO Press Photo 05a/00 ESO Press Photo 05a/00 [Preview; JPEG: 400 x 452; 52k] [Normal; JPEG: 800 x 903; 142k] [Full-Res; JPEG: 2048 x 2311; 2.0Mb] The Tarantula Nebula in the Large Magellanic Cloud , as obtained with FORS2 at KUEYEN during the recent Commissioning period. 
It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (30 sec exposure, image quality 0.75 arcsec; here rendered in blue colour), V (15 sec, 0.70 arcsec; green) and R (10 sec, 0.60 arcsec; red). The full-resolution version of this photo retains the orginal pixels. 30 Doradus , also known as the Tarantula Nebula , or NGC 2070 , is located in the Large Magellanic Cloud (LMC) , some 170,000 light-years away. It is one of the largest known star-forming regions in the Local Group of Galaxies. It was first catalogued as a star, but then recognized to be a nebula by the French astronomer A. Lacaille in 1751-52. The Tarantula Nebula is the only extra-galactic nebula which can be seen with the unaided eye. It contains in the centre the open stellar cluster R 136 with many of the largest, hottest, and most massive stars known. Radio Galaxy Centaurus A ESO Press Photo 05b/00 ESO Press Photo 05b/00 [Preview; JPEG: 400 x 448; 40k] [Normal; JPEG: 800 x 896; 110k] [Full-Res; JPEG: 2048 x 2293; 2.0Mb] The radio galaxy Centarus A , as obtained with FORS2 at KUEYEN during the recent Commissioning period. It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (300 sec exposure, image quality 0.60 arcsec; here rendered in blue colour), V (240 sec, 0.60 arcsec; green) and R (240 sec, 0.55 arcsec; red). The full-resolution version of this photo retains the orginal pixels. ESO Press Photo 05c/00 ESO Press Photo 05c/00 [Preview; JPEG: 400 x 446; 52k] [Normal; JPEG: 801 x 894; 112k] An area, north-west of the centre of Centaurus A with a detailed view of the dust lane and clusters of luminous blue stars. The normal version of this photo retains the orginal pixels. The new FORS2 image of Centaurus A , also known as NGC 5128 , is an example of how frontier science can be combined with esthetic aspects. This galaxy is a most interesting object for the present attempts to understand active galaxies . It is being investigated by means of observations in all spectral regions, from radio via infrared and optical wavelengths to X- and gamma-rays. It is one of the most extensively studied objects in the southern sky. FORS2 , with its large field-of-view and excellent optical resolution, makes it possible to study the global context of the active region in Centaurus A in great detail. Note for instance the great number of massive and luminous blue stars that are well resolved individually, in the upper right and lower left in PR Photo 05b/00 . Centaurus A is one of the foremost examples of a radio-loud active galactic nucleus (AGN) . On images obtained at optical wavelengths, thick dust layers almost completely obscure the galaxy's centre. This structure was first reported by Sir John Herschel in 1847. Until 1949, NGC 5128 was thought to be a strange object in the Milky Way, but it was then identified as a powerful radio galaxy and designated Centaurus A . The distance is about 10-13 million light-years (3-4 Mpc) and the apparent visual magnitude is about 8, or 5 times too faint to be seen with the unaided eye. There is strong evidence that Centaurus A is a merger of an elliptical with a spiral galaxy, since elliptical galaxies would not have had enough dust and gas to form the young, blue stars seen along the edges of the dust lane. The core of Centaurus A is the smallest known extragalactic radio source, only 10 light-days across. A jet of high energy particles from this centre is observed in radio and X-ray images. 
The core probably contains a supermassive black hole with a mass of about 100 million solar masses. This is the caption to ESO PR Photos 05a-c/00. They may be reproduced if credit is given to the European Southern Observatory.
NASA Astrophysics Data System (ADS)
2000-01-01
VLT MELIPAL Achieves Successful "First Light" in Record Time This was a night to remember at the ESO Paranal Observatory! For the first time, three 8.2-m VLT telescopes were observing in parallel, with a combined mirror surface of nearly 160 m 2. In the evening of January 26, the third 8.2-m Unit Telescope, MELIPAL ("The Southern Cross" in the Mapuche language), was pointed to the sky for the first time and successfully achieved "First Light". During this night, a number of astronomical exposures were made that served to evaluate provisionally the performance of the new telescope. The ESO staff expressed great satisfaction with MELIPAL and there were broad smiles all over the mountain. The first images ESO PR Photo 04a/00 ESO PR Photo 04a/00 [Preview - JPEG: 400 x 352 pix - 95k] [Normal - JPEG: 800 x 688 pix - 110k] Caption : ESO PR Photo 04a/00 shows the "very first light" image for MELIPAL . It is that of a relatively bright star, as recorded by the Guide Probe at about 21:50 hrs local time on January 26, 2000. It is a 0.1 sec exposure, obtained after preliminary adjustment of the optics during a few iterations with the computer controlled "active optics" system. The image quality is measured as 0.46 arcsec FWHM (Full-Width at Half Maximum). ESO PR Photo 04b/00 ESO PR Photo 04b/00 [Preview - JPEG: 400 x 429 pix - 39k] [Normal - JPEG: 885 x 949 pix - 766k] Caption : ESO PR Photo 04b/00 shows the central region of the Crab Nebula, the famous supernova remnant in the constellation Taurus (The Bull). It was obtained early in the night of "First Light" with the third 8.2-m VLT Unit Telescope, MELIPAL . It is a composite of several 30-sec exposures with the VLT Test Camera in three broad-band filters, B (here rendered as blue; most synchrotron emission), V (green) and R (red; mostly emission from hydrogen atoms). The Crab Pulsar is visible to the left; it is the lower of the two brightest stars near each other. The image quality is about 0.9 arcsec, and is completely determined by the external seeing caused by the atmospheric turbulence above the telescope at the time of the observation. The coloured, vertical lines to the left are artifacts of a "bad column" of the CCD. The field measures about 1.3 x 1.3 arcmin 2. This image may be compared with that of the same area that was recently obtained with the FORS2 instrument at KUEYEN ( PR Photo 40g/99 ). Following two days of preliminary adjustments after the installation of the secondary mirror, cf. ESO PR Photos 03a-n/00 , MELIPAL was pointed to the sky above Paranal for the first time, soon after sunset in the evening of January 26. The light of a bright star was directed towards the Guide Probe camera, and the VLT Commissioning Team, headed by Dr. Jason Spyromilio , initiated the active optics procedure . This adjusts the 150 computer-controlled supports under the main 8.2-m Zerodur mirror as well as the position of the secondary 1.1-m Beryllium mirror. After just a few iterations, the optical quality of the recorded stellar image was measured as 0.46 arcsec ( PR Photo 04a/00 ), a truly excellent value, especially at this stage! Immediately thereafter, at 22:16 hrs local time (i.e., at 01:16 hrs UT on January 27), the shutter of the VLT Test Camera at the Cassegrain focus was opened. A 1-min exposure was made through a R(ed) optical filter of a distant star cluster in the constellation Eridanus (The River). The light from its faint stars was recorded by the CCD at the focal plane and the resulting frame was read into the computer. 
Despite the comparatively short exposure time, myriads of stars were seen when this "first frame" was displayed on the computer screen. Moreover, the sizes of these images were found to be virtually identical to the 0.6 arcsec seeing measured simultaneously with a monitor telescope, outside the telescope enclosure. This confirmed that MELIPAL was in very good shape. Nevertheless, these very first images were still slightly elongated and further optical adjustments and tests were therefore made to eliminate this unwanted effect. It is a tribute to the extensive experience and fine skills of the ESO staff that within only 1 hour, a 30 sec exposure of the central region of the Crab Nebula in Taurus with round images was obtained, cf. PR Photo 04b/00 . The ESO Director General, Dr. Catherine Cesarsky , who assumed her function in September 1999, was present in the Control Room during these operations. She expressed great satisfaction with the excellent result and warmly congratulated the ESO staff to this achievement. She was particularly impressed with the apparent ease with which a completely new telescope of this size could be adjusted in such a short time. A part of her statement on this occasion was recorded on ESO PR Video Clip 02/00 that accompanies this Press Release. Three telescopes now in operation at Paranal At 02:30 UT on January 27, 2000, three VLT Unit Telescopes were observing in parallel, with measured seeing values of 0.6 arcsec ( ANTU - "The Sun"), 0.7 arcsec ( KUEYEN -"The Moon") and 0.7 arcsec ( MELIPAL ). MELIPAL has now joined ANTU and KUEYEN that had "First Light" in May 1998 and March 1999, respectively. The fourth VLT Unit Telescope, YEPUN ("Sirius") will become operational later this year. While normal scientific observations continue with ANTU , the UVES and FORS2 astronomical instruments are now being commissioned at KUEYEN , before this telescope will be handed over to the astronomers on April 1, 2000. The telescope commissioning period will now start for MELIPAL , after which its first instrument, VIMOS will be installed later this year. Impressions from the MELIPAL "First Light" event First Light for MELIPAL ESO PR Video Clip 02/00 "First Light for MELIPAL" (3350 frames/2:14 min) [MPEG Video+Audio; 160x120 pix; 3.1Mb] [MPEG Video+Audio; 320x240 pix; 9.4 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV-camera on January 27 at 03:00 UT, soon after the moment of "First Light" with the third 8.2-m VLT Unit Telescope ( MELIPAL ). The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the Clip. It begins with a statement by the Manager of the VLT Project, Dr. Massimo Tarenghi , as exposures of the Crab Nebula are obtained with the telescope and the raw frames are successively displayed on the monitor screen. In a following sequence, ESO's Director General, Dr. Catherine Cesarsky , briefly relates the moment of "First Light" for MELIPAL , as she experienced it at the telescope controls. ESO Press Photo 04c/00 ESO Press Photo 04c/00 [Preview; JPEG: 400 x 300; 44k] [Full size; JPEG: 1600 x 1200; 241k] The computer screen with the image of a bright star, as recorded by the Guide Probe in the early evening of January 26; see also PR Photo 04a/00. This image was used for the initial adjustments by means of the active optics system. (Digital Photo). 
ESO Press Photo 04d/00 ESO Press Photo 04d/00 [Preview; JPEG: 400 x 314; 49k] [Full size; JPEG: 1528 x 1200; 189k] ESO staff at the moment of "First Light" for MELIPAL in the evening of January 26. The photo was made in the wooden hut on the telescope observing floor from where the telescope was controlled during the first hours. (Digital Photo). ESO PR Photos may be reproduced, if credit is given to the European Southern Observatory. The ESO PR Video Clips service to visitors to the ESO website provides "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 01/00 with aerial sequences from Paranal (12 January 2000). Information is also available on the web about other ESO videos.
Providing Internet Access to High-Resolution Mars Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
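Because the server speaks the standard OGC WMS protocol, clients retrieve imagery with ordinary GetMap requests. The snippet below is only a hedged illustration of such a request: the host name, layer name and the Mars coordinate reference identifier are placeholders and assumptions, not the actual OnMars endpoint.

```python
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "mars_mosaic",      # placeholder layer name
    "SRS": "IAU2000:49900",       # assumed Mars geographic coordinate reference
    "BBOX": "-10,-10,10,10",      # west,south,east,north in degrees
    "WIDTH": "1024",
    "HEIGHT": "1024",
    "FORMAT": "image/jpeg",
}
print("http://onmars.example.org/wms?" + urlencode(params))
```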
NASA Astrophysics Data System (ADS)
2004-12-01
On December 9-10, 2004, the ESO Paranal Observatory was honoured with an overnight visit by His Excellency the President of the Republic of Chile, Ricardo Lagos and his wife, Mrs. Luisa Duran de Lagos. The high guests were welcomed by the ESO Director General, Dr. Catherine Cesarsky, ESO's representative in Chile, Mr. Daniel Hofstadt, and Prof. Maria Teresa Ruiz, Head of the Astronomy Department at the Universidad de Chile, as well as numerous ESO staff members working at the VLT site. The visit was characterised as private, and the President spent a considerable time in pleasant company with the Paranal staff, talking with and getting explanations from everybody. The distinguished visitors were shown the various high-tech installations at the observatory, including the Interferometric Tunnel with the VLTI delay lines and the first Auxiliary Telescope. Explanations were given by ESO astronomers and engineers and the President, a keen amateur astronomer, gained a good impression of the wide range of exciting research programmes that are carried out with the VLT. President Lagos showed a deep interest and impressed everyone present with many, highly relevant questions. Having enjoyed the spectacular sunset over the Pacific Ocean from the Residence terrace, the President met informally with the Paranal employees who had gathered for this unique occasion. Later, President Lagos visited the VLT Control Room from where the four 8.2-m Unit Telescopes and the VLT Interferometer (VLTI) are operated. Here, the President took part in an observing sequence of the spiral galaxy NGC 1097 (see PR Photo 35d/04) from the console of the MELIPAL telescope. After one more visit to the telescope platform at the top of Paranal, the President and his wife left the Observatory in the morning of December 10, 2004, flying back to Santiago. ESO PR Photo 35e/04 ESO PR Photo 35e/04 President Lagos Meets with ESO Staff at the Paranal Residencia [Preview - JPEG: 400 x 267pix - 144k] [Normal - JPEG: 640 x 427 pix - 240k] ESO PR Photo 35f/04 ESO PR Photo 35f/04 The Presidential Couple with Professor Maria Teresa Ruiz and the ESO Director General [Preview - JPEG: 500 x 400 pix - 224k] [Normal - JPEG: 1000 x 800 pix - 656k] [FullRes - JPEG: 1575 x 1260 pix - 1.0M] ESO PR Photo 35g/04 ESO PR Photo 35g/04 President Lagos with ESO Staff [Preview - JPEG: 500 x 400 pix - 192k] [Normal - JPEG: 1000 x 800 pix - 592k] [FullRes - JPEG: 1575 x 1200 pix - 1.1M] Captions: ESO PR Photo 35e/04 was obtained during President Lagos' meeting with ESO Staff at the Paranal Residencia. On ESO PR Photo 35f/04, President Lagos and Mrs. Luisa Duran de Lagos are seen at a quiet moment during the visit to the VLT Control Room, together with Prof. Maria Teresa Ruiz (far right), Head of the Astronomy Department at the Universidad de Chile, and the ESO Director General. ESO PR Photo 35g/04 shows President Lagos with some ESO staff members in the Paranal Residencia. VLT obtains a splendid photo of a unique galaxy, NGC 1097 ESO PR Photo 35d/04 ESO PR Photo 35d/04 Spiral Galaxy NGC 1097 (Melipal + VIMOS) [Preview - JPEG: 400 x 525 pix - 181k] [Normal - JPEG: 800 x 1049 pix - 757k] [FullRes - JPEG: 2296 x 3012 pix - 7.9M] Captions: ESO PR Photo 35d/04 is an almost-true colour composite based on three images made with the multi-mode VIMOS instrument on the 8.2-m Melipal (Unit Telescope 3) of ESO's Very Large Telescope. They were taken on the night of December 9-10, 2004, in the presence of the President of the Republic of Chile, Ricardo Lagos. 
Details are available in the Technical Note below. A unique and very beautiful image was obtained with the VIMOS instrument with President Lagos at the control desk. Located at a distance of about 45 million light-years in the southern constellation Fornax (the Furnace), NGC 1097 is a relatively bright, barred spiral galaxy of type SBb, seen face-on. At magnitude 9.5, and thus just 25 times fainter than the faintest object that can be seen with the unaided eye, it appears in small telescopes as a bright, circular disc. ESO PR Photo 35d/04, taken on the night of December 9 to 10, 2004 with the VIsible Multi-Object Spectrograph ("VIMOS), a four-channel multiobject spectrograph and imager attached to the 8.2-m VLT Melipal telescope, shows that the real structure is much more complicated. NGC 1097 is indeed a most interesting object in many respects. As this striking image reveals, NGC 1097 presents a centre that consists of a broken ring of bright knots surrounding the galaxy's nucleus. The sizes of these knots - presumably gigantic bubbles of hydrogen atoms having lost one electron (HII regions) through the intense radiation from luminous massive stars - range from roughly 750 to 2000 light-years. The presence of these knots suggests that an energetic burst of star formation has recently occurred. NGC 1097 is also known as an example of the so-called LINER (Low-Ionization Nuclear Emission Region Galaxies) class. Objects of this type are believed to be low-luminosity examples of Active Galactic Nuclei (AGN), whose emission is thought to arise from matter (gas and stars) falling into oblivion in a central black hole. There is indeed much evidence that a supermassive black hole is located at the very centre of NGC 1097, with a mass of several tens of million times the mass of the Sun. This is at least ten times more massive than the central black hole in our own Milky Way. However, NGC 1097 possesses a comparatively faint nucleus only, and the black hole in its centre must be on a very strict "diet": only a small amount of gas and stars is apparently being swallowed by the black hole at any given moment. A turbulent past As can be clearly seen in the upper part of PR Photo 35d/04, NGC 1097 also has a small galaxy companion; it is designated NGC 1097A and is located about 42,000 light-years away from the centre of NGC 1097. This peculiar elliptical galaxy is 25 times fainter than its big brother and has a "box-like" shape, not unlike NGC 6771, the smallest of the three galaxies that make up the famous Devil's Mask, cf. ESO PR Photo 12/04. There is evidence that NGC 1097 and NGC 1097A have been interacting in the recent past. Another piece of evidence for this galaxy's tumultuous past is the presence of four jets - not visible on this image - discovered in the 1970's on photographic plates. These jets are now believed to be the captured remains of a disrupted dwarf galaxy that passed through the inner part of the disc of NGC 1097. Moreover, another interesting feature of this active galaxy is the fact that no less than two supernovae were detected inside it within a time span of only four years. SN 1999eu was discovered by Japanese amateur Masakatsu Aoki (Toyama, Japan) on November 5, 1999. This 17th-magnitude supernova was a peculiar Type II supernova, the end result of the core collapse of a very massive star. And in the night of January 5 to 6, 2003, Reverend Robert Evans (Australia) discovered another Type II supernova of 15th magnitude. 
Also visible in this image, which was taken under very good sky conditions - the seeing was well below 1 arcsec - is a multitude of background galaxies of different colours and shapes. Given that the total exposure time for this three-colour image was just 11 minutes, this is a remarkable feat, demonstrating once again the very high efficiency of the VLT.
Integrated test system of infrared and laser data based on USB 3.0
NASA Astrophysics Data System (ADS)
Fu, Hui Quan; Tang, Lin Bo; Zhang, Chao; Zhao, Bao Jun; Li, Mao Wen
2017-07-01
This paper presents the design of a USB 3.0-based integrated test system for an infrared image and laser signal data processing module. The core of the design is FPGA logic control: dual DDR3 SDRAM chips provide a high-speed cache for the laser data, parallel LVDS image data are received through a serial-to-parallel conversion chip, and high-speed communication between the system and the host computer is achieved over the USB 3.0 bus. The experimental results show that the developed PC software displays, in real time, the 14-bit LVDS source image after 14-to-8 bit conversion, the JPEG2000-compressed image after software decompression, and the acquired laser signal data. This verifies the correctness of the test system design and shows that the interface link operates normally.
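The 14-to-8 bit conversion mentioned above is essentially a dynamic-range reduction so that the 14-bit LVDS source image can be shown on an 8-bit display. A minimal sketch of one common mapping (a linear min-max stretch; the paper does not specify its exact mapping, so this choice is an assumption):

```python
import numpy as np

def convert_14bit_to_8bit(frame_14bit: np.ndarray) -> np.ndarray:
    """Map a 14-bit infrared frame (values 0..16383) to 8 bits for display.

    A simple linear min-max stretch is assumed here; the actual test
    system may use a fixed right-shift or a histogram-based mapping.
    """
    frame = frame_14bit.astype(np.float64)
    lo, hi = frame.min(), frame.max()
    if hi == lo:                      # flat frame: avoid division by zero
        return np.zeros(frame.shape, dtype=np.uint8)
    scaled = (frame - lo) / (hi - lo) * 255.0
    return scaled.astype(np.uint8)

# Example: a synthetic 14-bit frame
frame = np.random.randint(0, 2**14, size=(512, 640), dtype=np.uint16)
display = convert_14bit_to_8bit(frame)
print(display.dtype, display.min(), display.max())
```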
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images, based on adaptive classification and curve fitting, is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region, and different approximating functions are constructed for the two kinds of regions. For the major-interference region, typical interferential curves are selected to predict the other curves, and these typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximation errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit rates for lossy compression.
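As a rough illustration of the region-wise idea, the sketch below predicts one interferential curve from a selected typical curve by a least-squares linear fit and keeps only the residual for entropy coding. The linear model and the synthetic data are illustrative assumptions, not the authors' exact curve-fitting functions.

```python
import numpy as np

def fit_to_typical(curve: np.ndarray, typical: np.ndarray):
    """Approximate one interferential curve as a*typical + b (least squares)
    and return the fitted curve plus the residual to be entropy coded."""
    A = np.column_stack([typical, np.ones_like(typical)])
    (a, b), *_ = np.linalg.lstsq(A, curve, rcond=None)
    fitted = a * typical + b
    return fitted, curve - fitted

# Example: one "typical" curve predicts a similar curve in the
# major-interference region; only the small residual needs coding.
x = np.linspace(0, 8 * np.pi, 256)
typical = np.cos(x) * np.exp(-x / 20)
curve = 1.3 * typical + 0.05 + 0.01 * np.random.randn(x.size)
fitted, residual = fit_to_typical(curve, typical)
print("residual energy vs. signal energy:",
      float(np.sum(residual**2) / np.sum(curve**2)))
```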
Human visual system-based color image steganography using the contourlet transform
NASA Astrophysics Data System (ADS)
Abdul, W.; Carré, P.; Gaborit, P.
2010-01-01
We present a steganographic scheme based on the contourlet transform that uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used because it is well suited to steganographic applications: any change in CIELAB has a corresponding effect on the human visual system (HVS), which matters because steganographic schemes must remain undetectable by the HVS. The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded to match human perception of the different frequencies in an image. The imperceptibility of the scheme with respect to the color perception of the HVS is evaluated using standard methods such as the structural similarity index (SSIM) and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.
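The imperceptibility evaluation described above can be reproduced in outline with standard tooling. A minimal sketch using scikit-image (a recent version providing the channel_axis argument is assumed, and the cover/stego images below are random stand-ins; the paper's exact evaluation protocol is not reproduced):

```python
import numpy as np
from skimage import color
from skimage.metrics import structural_similarity

def imperceptibility_metrics(cover_rgb: np.ndarray, stego_rgb: np.ndarray):
    """Return (SSIM, mean CIEDE2000) between cover and stego images.

    cover_rgb / stego_rgb: float arrays in [0, 1], shape (H, W, 3).
    This is only the evaluation step, not the embedding itself.
    """
    ssim = structural_similarity(cover_rgb, stego_rgb,
                                 channel_axis=-1, data_range=1.0)
    lab_cover = color.rgb2lab(cover_rgb)
    lab_stego = color.rgb2lab(stego_rgb)
    de2000 = color.deltaE_ciede2000(lab_cover, lab_stego)
    return ssim, float(de2000.mean())

# Example with a tiny perturbation standing in for embedded data
cover = np.random.rand(128, 128, 3)
stego = np.clip(cover + np.random.randn(128, 128, 3) * 0.002, 0, 1)
print(imperceptibility_metrics(cover, stego))
```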
Unequal power allocation for JPEG transmission over MIMO systems.
Sabir, Muhammad Farooq; Bovik, Alan Conrad; Heath, Robert W
2010-02-01
With the introduction of multiple transmit and receive antennas in next-generation wireless systems, real-time image and video communication is expected to become quite common, since very high data rates will become available along with improved data reliability. New joint transmission and coding schemes that exploit the advantages of multiple-antenna systems matched to source statistics are expected to be developed. Based on this idea, we present an unequal power allocation scheme for transmission of JPEG-compressed images over multiple-input multiple-output systems employing spatial multiplexing. The JPEG-compressed image is divided into different quality layers, and different layers are transmitted simultaneously from different transmit antennas using unequal transmit power, with a constraint on the total transmit power during any symbol period. Results show that our unequal power allocation scheme provides significant image quality improvement compared with equal power allocation schemes, with peak signal-to-noise ratio gains as high as 14 dB at low signal-to-noise ratios.
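A toy illustration of the idea: more important JPEG quality layers receive more transmit power while the per-symbol total stays fixed. The proportional weighting rule below is a placeholder assumption, not the optimization actually used in the paper.

```python
import numpy as np

def allocate_power(layer_importance, total_power=1.0):
    """Split the per-symbol transmit power across antennas/layers in
    proportion to layer importance, keeping the sum fixed.

    layer_importance: e.g. the distortion reduction contributed by each
    JPEG quality layer (placeholder values below).
    """
    w = np.asarray(layer_importance, dtype=float)
    return total_power * w / w.sum()

# Example: 4 transmit antennas, base layer most important
importance = [8.0, 4.0, 2.0, 1.0]
powers = allocate_power(importance, total_power=4.0)
print(powers, "sum =", powers.sum())
```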
NASA Astrophysics Data System (ADS)
2001-01-01
At the beginning of the new millennium, ESO and its staff are facing the future with confidence. The four 8.2-m Unit Telescopes of the Very Large Telescope (VLT) are in great shape and the VLT Interferometer (VLTI) will soon have "first fringes". The intercontinental ALMA project is progressing well and concepts for extremely large optical/infrared telescopes are being studied. They can also look back at a fruitful and rewarding past year. Perhaps the most important, single development has been the rapid transition of the Very Large Telescope (VLT). From being a "high-tech project under construction" it has now become a highly proficient, world-class astronomical observatory. This trend is clearly reflected in ESO's Press Releases , as more and more front-line scientific results emerge from rich data obtained at this very efficient facility. There were also exciting news from several of the instruments at La Silla. At the same time, the ESO community may soon grow, as steps towards membership are being taken by various European countries. Throughout 2000, a total of 54 PR communications were made, with a large number of Press Photos and Video Clips, cf. the 2000 PR Index. Some of the ESO PR highlights may be accessed directly via the clickable image on the present page. ESO PR Photo 01/01 is also available in a larger (non-clickable) version [ JPEG: 566 x 566 pix - 112k]. It may be reproduced, if credit is given to the European Southern Observatory.
Evaluation of a postexposure rabies prophylaxis protocol for domestic animals in Texas: 2000-2009.
Wilson, Pamela J; Oertli, Ernest H; Hunt, Patrick R; Sidwa, Thomas J
2010-12-15
To determine whether postexposure rabies prophylaxis (PEP) in domestic animals, as mandated in Texas, has continued to be effective and to evaluate preexposure or postexposure vaccination failures from 2000 through 2009. Retrospective case series. 1,014 unvaccinated domestic animals (769 dogs, 126 cats, 72 horses, 39 cattle, 3 sheep, 4 goats, and 1 llama) that received PEP and 12 vaccinated domestic animals (7 dogs and 5 cats) with possible failure of protection. Zoonotic incident reports from 2000 through 2009 were reviewed for information regarding unvaccinated domestic animals that received PEP in accordance with the state protocol after exposure to a laboratory-confirmed rabid animal; reports also were reviewed for any preexposure or postexposure vaccination failures. The state-required PEP protocol was as follows: immediately vaccinate the animal against rabies, isolate the animal for 90 days, and administer booster vaccinations during the third and eighth weeks of the isolation period. From 2000 through 2009, 1,014 animals received PEP; no failures were recorded. One preexposure vaccination failure was recorded. The Texas PEP protocol was used during the 10-year period. Results indicated that an effective PEP protocol for unvaccinated domestic animals exposed to rabies was immediate vaccination against rabies, a strict isolation period of 90 days, and administration of booster vaccinations during the third and eighth weeks of the isolation period.
Running key mapping in a quantum stream cipher by the Yuen 2000 protocol
NASA Astrophysics Data System (ADS)
Shimizu, Tetsuya; Hirota, Osamu; Nagasako, Yuki
2008-03-01
A quantum stream cipher based on the Yuen 2000 protocol (the so-called Y00 protocol or αη scheme), driven by a linear feedback shift register with a short key, is very attractive for implementing secure 40 Gbit/s optical data transmission, as expected in next-generation networks. However, a basic model of the Y00 protocol with a very short key requires careful design against fast correlation attacks, as pointed out by Donnet. This Brief Report clarifies the effectiveness of an irregular mapping between the running key and the physical signals in the driver that selects the M-ary basis in the transmitter, and gives a design method. Consequently, a quantum stream cipher using the Y00 protocol with our mapping is immune to the proposed fast correlation attacks on a basic model of the Y00 protocol even if the key is very short.
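To make "irregular mapping" concrete: instead of feeding the running-key bits directly into the M-ary basis selector, the transmitter can route them through a fixed pseudo-random permutation. The sketch below is an illustrative construction only (toy LFSR, arbitrary permutation), not the mapping designed in the Brief Report.

```python
import random

def lfsr_stream(seed: int, nbits: int = 16, mask: int = 0xB400):
    """Galois LFSR emitting one pseudo-random bit per iteration
    (a toy stand-in for the short-key running-key generator)."""
    state = seed & ((1 << nbits) - 1)
    while True:
        out = state & 1
        state >>= 1
        if out:
            state ^= mask
        yield out

def basis_indices(seed: int, m_ary: int = 64, irregular: bool = True):
    """Yield M-ary basis indices selected by the running key.

    With irregular=True a fixed, publicly known pseudo-random permutation
    sits between the running key and the physical basis; this only
    illustrates the idea of irregular mapping, not the paper's design.
    """
    bits_per_symbol = m_ary.bit_length() - 1
    perm = list(range(m_ary))
    random.Random(2000).shuffle(perm)          # fixed permutation table
    key_bits = lfsr_stream(seed)
    while True:
        idx = 0
        for _ in range(bits_per_symbol):
            idx = (idx << 1) | next(key_bits)
        yield perm[idx] if irregular else idx

gen = basis_indices(seed=0xACE1)
print([next(gen) for _ in range(8)])
```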
An 802.11n wireless local area network transmission scheme for wireless telemedicine applications.
Lin, C F; Hung, S I; Chiang, I H
2010-10-01
In this paper, an 802.11n transmission scheme is proposed for wireless telemedicine applications. The IEEE 802.11n standard, a power assignment strategy, space-time block coding (STBC), and an object composition Petri net (OCPN) model are adopted. With the proposed wireless system, G.729 audio bit streams, Joint Photographic Experts Group 2000 (JPEG 2000) clinical images, and Moving Picture Experts Group 4 (MPEG-4) video bit streams simultaneously achieve transmission bit error rates (BER) of 10^-7, 10^-4, and 10^-3, respectively. The proposed system meets the requirements prescribed for wireless telemedicine applications. An essential feature of the proposed transmission scheme is that clinical information requiring a high quality of service (QoS) is transmitted at high power with significant error protection. To maximize resource utilization and minimize the total transmission power, STBC and adaptive modulation techniques are used in the proposed 802.11n wireless telemedicine system. Further, low power, direct mapping (DM), a low-error-protection scheme, and high-level modulation are adopted for messages that can tolerate a high BER. With the proposed transmission scheme, the required communication reliability can be achieved. Our simulation results show that the proposed 802.11n transmission scheme can be used to develop effective wireless telemedicine systems.
JPEG XS-based frame buffer compression inside HEVC for power-aware video compression
NASA Astrophysics Data System (ADS)
Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël.; Pellegrin, Pascal; Macq, Benoit
2017-09-01
With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain a huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution that significantly decreases the FB's bandwidth, making the HEVC encoder more suitable for power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible by simplified motion estimation mechanisms that further reduce the FB's bandwidth while inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (the JPEG XS Test Model). This paper describes the architecture of our HEVC encoder with JPEG XS-based frame buffer compression and compares its performance to the HM encoder. Compared to previous works, our prototype provides a significant reduction in external memory bandwidth: depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.
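To put the reported 50% to 83.3% figures in perspective, a back-of-the-envelope estimate of reference frame buffer bandwidth with and without an embedded compressor can be computed as below; the frame size, frame rate, access count and compression ratio are illustrative assumptions, not measurements from the paper.

```python
def fb_bandwidth_gbps(width, height, bit_depth, fps, accesses_per_pixel,
                      compression_ratio=1.0):
    """Rough reference-frame-buffer bandwidth in Gbit/s.

    accesses_per_pixel: how many times each reference pixel is read/written
    per coded frame (depends on the Level-C / Level-D reuse scheme).
    compression_ratio: e.g. 4.0 for a 4:1 embedded codec such as JPEG XS.
    """
    bits_per_frame = width * height * bit_depth * 1.5   # 4:2:0 chroma assumed
    return bits_per_frame * fps * accesses_per_pixel / compression_ratio / 1e9

# Illustrative UHD example: uncompressed vs. 4:1 embedded compression
uncompressed = fb_bandwidth_gbps(3840, 2160, 10, 60, accesses_per_pixel=3)
compressed = fb_bandwidth_gbps(3840, 2160, 10, 60, accesses_per_pixel=3,
                               compression_ratio=4.0)
print(f"{uncompressed:.1f} Gbit/s -> {compressed:.1f} Gbit/s "
      f"({100 * (1 - compressed / uncompressed):.0f}% reduction)")
```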
Implementation of remote monitoring and managing switches
NASA Astrophysics Data System (ADS)
Leng, Junmin; Fu, Guo
2010-12-01
In order to strengthen the safety of the network and provide greater convenience and efficiency for operators and managers, a system for remotely monitoring and managing switches has been designed and implemented using current network technology and existing network resources. A fast Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and supports a number of protocols. The Motion-JPEG image compression algorithm is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and managing system is designed and implemented according to the current infrastructure of the network and its switches, and the control and administration software is specified accordingly. The dynamic-webpage Java Server Pages (JSP) platform is used for development, and an SQL (Structured Query Language) Server database stores and serves image information, network messages and user data. The reliability and security of the system are further strengthened by access control. The software is cross-platform, supporting UNIX, Linux and Windows operating systems. The system can greatly reduce manpower costs and allows problems to be found and solved quickly.
Kasenda, Benjamin; Schandelmaier, Stefan; Sun, Xin; von Elm, Erik; You, John; Blümle, Anette; Tomonaga, Yuki; Saccilotto, Ramon; Amstutz, Alain; Bengough, Theresa; Meerpohl, Joerg J; Stegert, Mihaela; Olu, Kelechi K; Tikkinen, Kari A O; Neumann, Ignacio; Carrasco-Labra, Alonso; Faulhaber, Markus; Mulla, Sohail M; Mertz, Dominik; Akl, Elie A; Bassler, Dirk; Busse, Jason W; Ferreira-González, Ignacio; Lamontagne, Francois; Nordmann, Alain; Gloy, Viktoria; Raatz, Heike; Moja, Lorenzo; Rosenthal, Rachel; Ebrahim, Shanil; Vandvik, Per O; Johnston, Bradley C; Walter, Martin A; Burnand, Bernard; Schwenkglenks, Matthias; Hemkens, Lars G; Bucher, Heiner C; Guyatt, Gordon H; Briel, Matthias
2014-07-16
To investigate the planning of subgroup analyses in protocols of randomised controlled trials and the agreement with corresponding full journal publications. Cohort of protocols of randomised controlled trial and subsequent full journal publications. Six research ethics committees in Switzerland, Germany, and Canada. 894 protocols of randomised controlled trial involving patients approved by participating research ethics committees between 2000 and 2003 and 515 subsequent full journal publications. Of 894 protocols of randomised controlled trials, 252 (28.2%) included one or more planned subgroup analyses. Of those, 17 (6.7%) provided a clear hypothesis for at least one subgroup analysis, 10 (4.0%) anticipated the direction of a subgroup effect, and 87 (34.5%) planned a statistical test for interaction. Industry sponsored trials more often planned subgroup analyses compared with investigator sponsored trials (195/551 (35.4%) v 57/343 (16.6%), P<0.001). Of 515 identified journal publications, 246 (47.8%) reported at least one subgroup analysis. In 81 (32.9%) of the 246 publications reporting subgroup analyses, authors stated that subgroup analyses were prespecified, but this was not supported by 28 (34.6%) corresponding protocols. In 86 publications, authors claimed a subgroup effect, but only 36 (41.9%) corresponding protocols reported a planned subgroup analysis. Subgroup analyses are insufficiently described in the protocols of randomised controlled trials submitted to research ethics committees, and investigators rarely specify the anticipated direction of subgroup effects. More than one third of statements in publications of randomised controlled trials about subgroup prespecification had no documentation in the corresponding protocols. Definitive judgments regarding credibility of claimed subgroup effects are not possible without access to protocols and analysis plans of randomised controlled trials. © The DISCO study group 2014.
NASA Astrophysics Data System (ADS)
Yu, Shanshan; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2006-09-01
The article proposes a multispectral image compression scheme using nonlinear spectral transform for better colorimetric and spectral reproducibility. In the method, we show the reduction of colorimetric error under a defined viewing illuminant and also that spectral accuracy can be improved simultaneously using a nonlinear spectral transform called Labplus, which takes into account the nonlinearity of human color vision. Moreover, we show that the addition of diagonal matrices to Labplus can further preserve the spectral accuracy and has a generalized effect of improving the colorimetric accuracy under other viewing illuminants than the defined one. Finally, we discuss the usage of the first-order Markov model to form the analysis vectors for the higher order channels in Labplus to reduce the computational complexity. We implement a multispectral image compression system that integrates Labplus with JPEG2000 for high colorimetric and spectral reproducibility. Experimental results for a 16-band multispectral image show the effectiveness of the proposed scheme.
Comparative performance between compressed and uncompressed airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh
2008-04-01
The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division, is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is the highest image data compression rate that can be afforded without loss of image quality for war fighters in the loop or in the performance of near-real-time mine detection algorithms. The JPEG-2000 compression standard is used to perform the data compression, and both lossless and lossy compression are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core baseline algorithm in airborne mine and minefield detection to identify potential individual targets across different mine types, minefields, and terrains, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance between compressed and uncompressed imagery at various levels of compression. The compression efficiency is evaluated, and its dependence on different backgrounds and other factors is documented and presented using multi-spectral data.
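The RX (Reed-Xiaoli) detector used as the baseline is well documented in the literature; a minimal global-covariance sketch for a multi-spectral cube follows. Operational detectors typically use local background statistics, so this is a simplification rather than the NVESD implementation.

```python
import numpy as np

def rx_scores(cube: np.ndarray) -> np.ndarray:
    """Global RX anomaly detector.

    cube: (rows, cols, bands) multi-spectral image.
    Returns the Mahalanobis distance of every pixel from the global
    background mean; larger values flag potential anomalies (targets).
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.pinv(cov)          # pinv guards against singular cov
    centered = pixels - mean
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(rows, cols)

# Example on random data with one injected bright anomaly
cube = np.random.randn(64, 64, 6)
cube[32, 32] += 8.0
print("anomaly at", np.unravel_index(np.argmax(rx_scores(cube)), (64, 64)))
```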
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the performance of Intra coding significantly exceeds that of previous still-image coding standards such as JPEG2000, thanks to extensive use of spatial prediction. Unfortunately, the adoption of such a large set of predictors significantly increases the computational complexity of the rate-distortion optimization routine. This paper presents a complexity reduction strategy that aims to reduce the computational load of Intra coding with a small loss in compression performance. The proposed algorithm selects a reduced set of prediction modes according to their probabilities, which are estimated using a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method, with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods in which the complexity depends on the coded sequence.
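The core idea, restricting the rate-distortion search to the most probable intra prediction modes, can be sketched independently of the belief-propagation machinery; the probability values below are placeholders, whereas in the paper they come from the BP procedure.

```python
def reduced_mode_set(mode_probs, coverage=0.9, min_modes=2):
    """Select the smallest set of intra modes whose estimated probabilities
    cover at least `coverage` of the total; only these modes are passed to
    the rate-distortion optimization."""
    ranked = sorted(mode_probs.items(), key=lambda kv: kv[1], reverse=True)
    selected, cum = [], 0.0
    for mode, p in ranked:
        selected.append(mode)
        cum += p
        if cum >= coverage and len(selected) >= min_modes:
            break
    return selected

# Placeholder probabilities for the 9 H.264 4x4 intra prediction modes
probs = {"V": 0.28, "H": 0.22, "DC": 0.20, "DDL": 0.08, "DDR": 0.07,
         "VR": 0.05, "HD": 0.04, "VL": 0.03, "HU": 0.03}
print(reduced_mode_set(probs))
```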
Segmentation-driven compound document coding based on H.264/AVC-INTRA.
Zaghetto, Alexandre; de Queiroz, Ricardo L
2007-07-01
In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require multiple compressors, we apply a single compressor to both text and pictures. To do so, distortion is treated differently in text and picture regions. Our approach uses a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of this segmentation-driven quantizer adaptation method applied to compressing documents. Our reconstructed images have better text sharpness than straight unadapted coding, with negligible visual losses in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
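The adaptation strategy amounts to choosing a per-macroblock quantization parameter from the segmentation map: text macroblocks get a lower QP (finer quantization, sharper edges) and pictorial macroblocks a higher one. A schematic sketch, with offset values chosen purely for illustration:

```python
def qp_for_macroblock(mb_class: str, base_qp: int = 30) -> int:
    """Map a macroblock's segmentation class to an H.264/AVC QP.

    The offsets are illustrative; the paper tunes how bits are diverted
    from pictorial regions to text to keep text edges sharp.
    """
    offsets = {"text": -8, "graphics": -4, "picture": +2}
    qp = base_qp + offsets.get(mb_class, 0)
    return max(0, min(51, qp))           # clip to the valid H.264 QP range

segmentation = ["text", "text", "picture", "graphics", "picture"]
print([qp_for_macroblock(c) for c in segmentation])
```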
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
Google Books: making the public domain universally accessible
NASA Astrophysics Data System (ADS)
Langley, Adam; Bloomberg, Dan S.
2007-01-01
Google Book Search is working with libraries and publishers around the world to digitally scan books. Some of those works are now in the public domain and, in keeping with Google's mission to make all the world's information useful and universally accessible, we wish to allow users to download them all. For users, it is important that the files are as small as possible and of printable quality. This means that a single codec for both text and images is impractical. We use PDF as a container for a mixture of JBIG2 and JPEG2000 images which are composed into a final set of pages. We discuss both the implementation of an open source JBIG2 encoder, which we use to compress text data, and the design of the infrastructure needed to meet the technical, legal and user requirements of serving many scanned works. We also cover the lessons learnt about dealing with different PDF readers and how to write files that work on most of the readers, most of the time.
2-Step scalar deadzone quantization for bitplane image coding.
Auli-Llinas, Francesc
2013-12-01
Modern lossy image coding systems generate a quality progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves same coding performance as that of USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
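For reference, classical USDQ maps a coefficient c to the index sign(c)·floor(|c|/Δ); 2SDQ replaces the single step size with two, a finer one for the dense low-magnitude coefficients and a coarser one above a threshold. The sketch below only illustrates that idea with arbitrary parameters; it omits the index offsets and the rate-distortion adjustment that make the actual scheme decodable and efficient.

```python
import numpy as np

def usdq_indices(coeffs: np.ndarray, step: float) -> np.ndarray:
    """Uniform scalar deadzone quantization: q = sign(c) * floor(|c| / step)."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def two_step_sdq_indices(coeffs: np.ndarray, fine_step: float,
                         coarse_step: float, threshold: float) -> np.ndarray:
    """Illustrative 2-step variant: low-magnitude coefficients (the dense
    part of the distribution) use the fine step, the rest a coarser step."""
    q = np.empty_like(coeffs)
    low = np.abs(coeffs) < threshold
    q[low] = np.sign(coeffs[low]) * np.floor(np.abs(coeffs[low]) / fine_step)
    q[~low] = np.sign(coeffs[~low]) * np.floor(np.abs(coeffs[~low]) / coarse_step)
    return q

coeffs = np.random.laplace(scale=4.0, size=10)
print(usdq_indices(coeffs, step=2.0))
print(two_step_sdq_indices(coeffs, fine_step=2.0, coarse_step=8.0, threshold=16.0))
```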
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
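The described pipeline (decimate, JPEG-compress, transmit, decompress, interpolate back, sharpen edges) can be mocked up with Pillow; the factor-of-two decimation and the unsharp-mask parameters are assumptions for illustration, and the specific sharpening techniques of the invention are not reproduced here.

```python
from io import BytesIO
from PIL import Image, ImageFilter

def decimate_compress_restore(img: Image.Image, factor: int = 2,
                              jpeg_quality: int = 75) -> Image.Image:
    """Decimate in both dimensions, JPEG-compress, then decompress,
    interpolate back to the original size and sharpen edges."""
    small = img.resize((img.width // factor, img.height // factor),
                       Image.LANCZOS)                        # decimation
    buf = BytesIO()
    small.save(buf, format="JPEG", quality=jpeg_quality)     # "transmission"
    buf.seek(0)
    restored = Image.open(buf).resize(img.size, Image.BICUBIC)  # interpolation
    return restored.filter(ImageFilter.UnsharpMask(radius=2,
                                                   percent=120, threshold=2))

# Usage (hypothetical file name):
# reconstructed = decimate_compress_restore(Image.open("photo.jpg"))
```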
Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Liu, Ti C.; Mitra, Sunanda
1996-06-01
Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work focuses on wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, an entropy and run-length encoder/decoder, and K-means clustering of invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high-compression-ratio region, and that the reconstructed fingerprint images yield proper classification.
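A compact sketch of the two stages, multiresolution wavelet decomposition with uniform scalar quantization and K-means clustering of moment features, using PyWavelets and scikit-learn; the wavelet, decomposition depth, step size and feature dimensions below are illustrative, not the study's parameters.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wavelet_quantize(image: np.ndarray, wavelet="db4", levels=3, step=8.0):
    """Multiresolution wavelet decomposition followed by uniform scalar
    quantization of the coefficients (entropy/run-length coding omitted)."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)
    q = np.round(arr / step)                       # uniform scalar quantizer
    rec = pywt.waverec2(pywt.array_to_coeffs(q * step, slices,
                                             output_format="wavedec2"), wavelet)
    return rec[:image.shape[0], :image.shape[1]]

def cluster_features(feature_vectors: np.ndarray, n_classes: int = 5):
    """K-means clustering of (e.g. invariant-moment) fingerprint features."""
    return KMeans(n_clusters=n_classes, n_init=10).fit_predict(feature_vectors)

img = np.random.rand(128, 128)
rec = wavelet_quantize(img)
labels = cluster_features(np.random.rand(40, 7))   # 7 moment features per print
print(rec.shape, labels[:10])
```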
Halftoning processing on a JPEG-compressed image
NASA Astrophysics Data System (ADS)
Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent
2003-12-01
Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, the problem becomes an important issue: for example, a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain by applying an equivalent operation on the compressed format. This paper presents an innovative application of halftoning by screening applied directly to a JPEG-compressed image. The compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain, and the algorithm is illustrated by examples with different halftone masks. A pre-sharpening operation applied to a low-quality JPEG-compressed image is also described; it de-noises the image and enhances its contours.
Design and evaluation of web-based image transmission and display with different protocols
NASA Astrophysics Data System (ADS)
Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo
2011-03-01
Many Web-based image access technologies are used in the medical imaging area, such as component-based (ActiveX control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side-processing Web viewers), Flash Rich Internet Applications (RIA), and HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we evaluate two Web-based image display systems we have developed. The first is used for thin-client Web display: it works between a PACS Web server with a WADO interface and a thin client, with the PACS Web server providing JPEG-format images to HTML pages. The second is for thick-client Web display: it works between a PACS Web server with a WADO interface and a thick client running in a browser as an ActiveX control, Flash RIA program or HTML5 script, with the PACS Web server providing native DICOM-format images or a JPIP stream to these clients.
Barisoni, Laura; Troost, Jonathan P; Nast, Cynthia; Bagnasco, Serena; Avila-Casado, Carmen; Hodgin, Jeffrey; Palmer, Matthew; Rosenberg, Avi; Gasim, Adil; Liensziewski, Chrysta; Merlino, Lino; Chien, Hui-Ping; Chang, Anthony; Meehan, Shane M; Gaut, Joseph; Song, Peter; Holzman, Lawrence; Gibson, Debbie; Kretzler, Matthias; Gillespie, Brenda W; Hewitt, Stephen M
2016-07-01
The multicenter Nephrotic Syndrome Study Network (NEPTUNE) digital pathology scoring system employs a novel and comprehensive methodology to document pathologic features from whole-slide images, immunofluorescence and ultrastructural digital images. To estimate inter- and intra-reader concordance of this descriptor-based approach, data from 12 pathologists (eight NEPTUNE and four non-NEPTUNE) with experience from training to 30 years were collected. A descriptor reference manual was generated and a webinar-based protocol for consensus/cross-training implemented. Intra-reader concordance for 51 glomerular descriptors was evaluated on jpeg images by seven NEPTUNE pathologists scoring 131 glomeruli three times (Tests I, II, and III), each test following a consensus webinar review. Inter-reader concordance of glomerular descriptors was evaluated in 315 glomeruli by all pathologists; interstitial fibrosis and tubular atrophy (244 cases, whole-slide images) and four ultrastructural podocyte descriptors (178 cases, jpeg images) were evaluated once by six and five pathologists, respectively. Cohen's kappa for inter-reader concordance for 48/51 glomerular descriptors with sufficient observations was moderate (0.40
Two VLT 8.2-m Unit Telescopes in Action
NASA Astrophysics Data System (ADS)
1999-04-01
Visitors at ANTU - Astronomical Images from KUEYEN The VLT Control Room at the Paranal Observatory is becoming a busy place indeed. From here, two specialist teams of ESO astronomers and engineers now operate two VLT 8.2-m Unit Telescopes in parallel, ANTU and KUEYEN (formerly UT1 and UT2, for more information about the naming and the pronunciation, see ESO Press Release 06/99 ). Regular science observations have just started with the first of these giant telescopes, while impressive astronomical images are being obtained with the second. The work is hard, but the mood in the control room is good. Insiders claim that there have even been occasions on which the groups have had a friendly "competition" about which telescope makes the "best" images! The ANTU-team has worked with the FORS multi-mode instrument , their colleagues at KUEYEN use the VLT Test Camera for the ongoing tests of this new telescope. While the first is a highly developed astronomical instrument with a large-field CCD imager (6.8 x 6.8 arcmin 2 in the normal mode; 3.4 x 3.4 arcmin 2 in the high-resolution mode), the other is a less complex CCD camera with a smaller field (1.5 x 1.5 arcmin 2 ), suited to verify the optical performance of the telescope. As these images demonstrate, the performance of the second VLT Unit Telescope is steadily improving and it may not be too long before its optical quality will approach that of the first. First KUEYEN photos of stars and galaxies We present here some of the first astronomical images, taken with the second telescope, KUEYEN, in late March and early April 1999. They reflect the current status of the optical, electronic and mechanical systems, still in the process of being tuned. As expected, the experience gained from ANTU last year has turned out to be invaluable and has allowed good progress during this extremely delicate process. ESO PR Photo 19a/99 ESO PR Photo 19a/99 [Preview - JPEG: 400 x 433 pix - 160k] [Normal - JPEG: 800 x 866 pix - 457k] [High-Res - JPEG: 1985 x 2148 pix - 2.0M] ESO PR Photo 19b/99 ESO PR Photo 19b/99 [Preview - JPEG: 400 x 478 pix - 165k] [Normal - JPEG: 800 x 956 pix - 594k] [High-Res - JPEG: 3000 x 3583 pix - 7.1M] Caption to PR Photo 19a/99 : This photo was obtained with VLT KUEYEN on April 4, 1999. It is reproduced from an excellent 60-second R(ed)-band exposure of the innermost region of a globular cluster, Messier 68 (NGC 4590) , in the southern constellation Hydra (The Water-Snake). The distance to this 8-mag cluster is about 35,000 light years, and the diameter is about 140 light-years. The excellent image quality is 0.38 arcsec , demonstrating a good optical and mechanical state of the telescope, already at this early stage of the commissioning phase. The field measures about 90 x 90 arcsec 2. The original scale is 0.0455 pix/arcsec and there are 2048x2048 pixels in one frame. North is up and East is left. Caption to PR Photo 19b/99 : This photo shows the central region of spiral galaxy ESO 269-57 , located in the southern constellation Centaurus at a distance of about 150 million light-years. Many galaxies are seen in this direction at about the same distance, forming a loose cluster; there are also some fainter, more distant ones in the background. The designation refers to the ESO/Uppsala Survey of the Southern Sky in the 1970's during which over 15,000 southern galaxies were catalogued. ESO 269-57 is a tightly bound object of type Sar , the "r" referring to the "ring" that surrounds the bright centre, that is overexposed here. 
The photo is a composite, based on three exposures (Blue - 600 sec; Yellow-Green - 300 sec; Red - 300 sec) obtained with KUEYEN on March 28, 1999. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec 2. North is up and East is left. ESO PR Photo 19c/99 ESO PR Photo 19c/99 [Preview - JPEG: 400 x 478 pix - 132k] [Normal - JPEG: 800 x 956 pix - 446k] [High-Res - JPEG: 3000 x 3583 pix - 4.6M] ESO PR Photo 19d/99 ESO PR Photo 19d/99 [Preview - JPEG: 400 x 454 pix - 86k] [Normal - JPEG: 800 x 907 pix - 301k] [High-Res - JPEG: 978 x 1109 pix - 282k] Caption to PR Photo 19c/99 : Somewhat further out in space, and right on the border between the southern constellations Hydra and Centaurus lies this knotty spiral galaxy, IC 4248 ; the distance is about 210 million light-years. It was imaged with KUEYEN on March 28, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.75 arcsec and the field is 90 x 90 arcsec 2. North is up and East is left. Caption to PR Photo 19d/99 : This is a close-up view of the double galaxy NGC 5090 (right) and NGC 5091 (left), in the southern constellation Centaurus. The first is a typical S0 galaxy with a bright diffuse centre, surrounded by a fainter envelope of stars (not resolved in this picture). However, some of the starlike objects seen in this region may be globular clusters (or dwarf galaxies) in orbit around NGC 5090. The other galaxy is of type Sa (the spiral structure is more developed) and is seen at a steep angle. The three-colour composite is based on frames obtained with KUEYEN on March 29, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec 2. North is up and East is left. ( Note inserted on April 26: The original caption text identified the second galaxy as NGC 5090B - this error has now been corrected. ESO PR Photo 19e/99 ESO PR Photo 19e/99 [Preview - JPEG: 400 x 441 pix - 282k] [Normal - JPEG: 800 x 882 pix - 966k] [High-Res - JPEG: 3000 x 3307 pix - 6,4M] Caption to PR Photo 19e/99 : Wide-angle photo of the second 8.2-m VLT Unit Telescope, KUEYEN , obtained on March 10, 1999, with the main mirror and its cell in place at the bottom of the telescope structure. The Test Camera with which the astronomical images above were made, is positioned at the Cassegrain focus, inside this mirror cell. The Paranal Inauguration on March 5, 1999, took place under this telescope that was tilted towards the horizon to accommodate nearly 300 persons on the observing floor. Astronomical observations with ANTU have started On April 1, 1999, the first 8.2-m VLT Unit Telescope, ANTU , was "handed over" to the astronomers. Last year, about 270 observing proposals competed about the first, precious observing time at Europe's largest optical telescope and more than 100 of these were accommodated within the six-month period until the end of September 1999. The complete observing schedule is available on the web. These observations will be carried out in two different modes. During the Visitor Mode , the astronomers will be present at the telescope, while in the Service Mode , ESO observers perform the observations. The latter procedure allows a greater degree of flexibility and the possibility to assign periods of particularly good observing conditions to programmes whose success is critically dependent on this. The first ten nights at ANTU were allocated to service mode observations. 
After some initial technical problems with the instruments, these have now started. Already in the first night, programmes at ISAAC requiring 0.4 arcsec conditions could be satisfied, and some images better than 0.3 arcsec were obtained in the near-infrared . The first astronomers to use the telescope in visitors mode will be Professors Immo Appenzeller (Heidelberg, Germany; "Photo-polarimetry of pulsars") and George Miley (Leiden, The Netherlands; "Distant radio galaxies") with their respective team colleagues. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory. Note also the dedicated webarea with VLT Information.
VIMOS - a Cosmology Machine for the VLT
NASA Astrophysics Data System (ADS)
2002-03-01
Successful Test Observations With Powerful New Instrument at Paranal [1] Summary One of the most fundamental tasks of modern astrophysics is the study of the evolution of the Universe . This is a daunting undertaking that requires extensive observations of large samples of objects in order to produce reasonably detailed maps of the distribution of galaxies in the Universe and to perform statistical analysis. Much effort is now being put into mapping the relatively nearby space and thereby to learn how the Universe looks today . But to study its evolution, we must compare this with how it looked when it still was young . This is possible, because astronomers can "look back in time" by studying remote objects - the larger their distance, the longer the light we now observe has been underway to us, and the longer is thus the corresponding "look-back time". This may sound easy, but it is not. Very distant objects are very dim and can only be observed with large telescopes. Looking at one object at a time would make such a study extremely time-consuming and, in practical terms, impossible. To do it anyhow, we need the largest possible telescope with a highly specialised, exceedingly sensitive instrument that is able to observe a very large number of (faint) objects in the remote universe simultaneously . The VLT VIsible Multi-Object Spectrograph (VIMOS) is such an instrument. It can obtain many hundreds of spectra of individual galaxies in the shortest possible time; in fact, in one special observing mode, up to 6400 spectra of the galaxies in a remote cluster during a single exposure, augmenting the data gathering power of the telescope by the same proportion. This marvellous science machine has just been installed at the 8.2-m MELIPAL telescope, the third unit of the Very Large Telescope (VLT) at the ESO Paranal Observatory. A main task will be to carry out 3-dimensional mapping of the distant Universe from which we can learn its large-scale structure . "First light" was achieved on February 26, 2002, and a first series of test observations has successfully demonstrated the huge potential of this amazing facility. Much work on VIMOS is still ahead during the coming months in order to put into full operation and fine-tune the most efficient "galaxy cruncher" in the world. VIMOS is the outcome of a fruitful collaboration between ESO and several research institutes in France and Italy, under the responsibility of the Laboratoire d'Astrophysique de Marseille (CNRS, France). The other partners in the "VIRMOS Consortium" are the Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées, and Observatoire de Haute-Provence in France, and Istituto di Radioastronomia (Bologna), Istituto di Fisica Cosmica e Tecnologie Relative (Milano), Osservatorio Astronomico di Bologna, Osservatorio Astronomico di Brera (Milano) and Osservatorio Astronomico di Capodimonte (Naples) in Italy. PR Photo 09a/02 : VIMOS image of the Antennae Galaxies (centre). PR Photo 09b/02 : First VIMOS Multi-Object Spectrum (full field) PR Photo 09c/02 : The VIMOS instrument on VLT MELIPAL PR Photo 09d/02 : The VIMOS team at "First Light". 
PR Photo 09e/02 : "First Light" image of NGC 5364 PR Photo 09f/02 : Image of the Crab Nebula PR Photo 09g/02 : Image of spiral galaxy NGC 2613 PR Photo 09h/02 : Image of spiral galaxy Messier 100 PR Photo 09i/02 : Image of cluster of galaxies ACO 3341 PR Photo 09j/02 : Image of cluster of galaxies MS 1008.1-1224 PR Photo 09k/02 : Mask design for MOS exposure PR Photo 09l/02 : First VIMOS Multi-Object Spectrum (detail) PR Photo 09m/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" PR Photo 09n/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" (detail) Science with VIMOS ESO PR Photo 09a/02 ESO PR Photo 09a/02 [Preview - JPEG: 400 x 469 pix - 152k] [Normal - JPEG: 800 x 938 pix - 408k] ESO PR Photo 09b/02 ESO PR Photo 09b/02 [Preview - JPEG: 400 x 511 pix - 304k] [Normal - JPEG: 800 x 1022 pix - 728k] Caption : PR Photo 09a/02 : One of the first images from the new VIMOS facility, obtained right after the moment of "first light" on Ferbruary 26, 2002. It shows the famous "Antennae Galaxies" (NGC 4038/39), the result of a recent collision between two galaxies. As an immediate outcome of this dramatic event, stars are born within massive complexes that appear blue in this composite photo, based on exposures through green, orange and red optical filtres. PR Photo 09b/02 : Some of the first spectra of distant galaxies obtained with VIMOS in Multi-Object-Spectroscopy (MOS) mode. More than 220 galaxies were observed simultaneously, an unprecedented efficiency for such a "deep" exposure, reaching so far out in space. These spectra allow to obtain the redshift, a measure of distance, as well as to assess the physical status of the gas and stars in each of these galaxies. A part of this photo is enlarged as PR Photo 09l/02. Technical information about these photos is available below. Other "First Light" images from VIMOS are shown in the photo gallery below. The next in the long series of front-line instruments to be installed on the ESO Very Large Telescope (VLT), VIMOS (and its complementary, infrared-sensitive counterpart NIRMOS, now in the design stage) will allow mapping of the distribution of galaxies, clusters, and quasars during a time interval spanning more than 90% of the age of the universe. It will let us look back in time to a moment only ~1.5 billion years after the Big Bang (corresponding to a redshift of about 5). Like archaeologists, astronomers can then dig deep into those early ages when the first building blocks of galaxies were still in the process of formation. They will be able to determine when most of the star formation occurred in the universe and how it evolved with time. They will analyse how the galaxies cluster in space, and how this distribution varies with time. Such observations will put important constraints on evolution models, in particular on the average density of matter in the Universe. Mapping the distant universe requires to determine the distances of the enormous numbers of remote galaxies seen in deep pictures of the sky, adding depth - the third, indispensible dimension - to the photo. VIMOS offers this capability, and very efficiently. Multi-object spectroscopy is a technique by which many objects are observed simultaneously. VIMOS can observe the spectra of about 1000 galaxies in one exposure, from which redshifts, hence distances, can be measured [2]. The possibility to observe two galaxies at once would be equivalent to having a telescope twice the size of a VLT Unit Telescope. 
VIMOS thus effectively "increases" the size of the VLT hundreds of times. From these spectra, the stellar and gaseous content and internal velocities of galaxies can be infered, forming the base for detailed physical studies. At present the distances of only a few thousand galaxies and quasars have been measured in the distant universe. VIMOS aims at observing 100 times more, over one hundred thousand of those remote objects. This will form a solid base for unprecedented and detailed statistical studies of the population of galaxies and quasars in the very early universe. The international VIRMOS Consortium VIMOS is one of two major astronomical instruments to be delivered by the VIRMOS Consortium of French and Italian institutes under a contract signed in the summer of 1997 between the European Southern Observatory (ESO) and the French Centre National de la Recherche Scientifique (CNRS). The participating institutes are: in France: * Laboratoire d'Astrophysique de Marseille (LAM), Observatoire Marseille-Provence (project responsible) * Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées * Observatoire de Haute-Provence (OHP) in Italy: * Istituto di Radioastronomia (IRA-CNR) (Bologna) * Istituto di Fisica Cosmica e Tecnologie Relative (IFCTR) (Milano) * Osservatorio Astronomico di Capodimonte (OAC) (Naples) * Osservatorio Astronomico di Bologna (OABo) * Osservatorio Astronomico di Brera (OABr) (Milano) VIMOS at the VLT: a unique and powerful combination ESO PR Photo 09c/02 ESO PR Photo 09c/02 [Preview - JPEG: 501 x 400 pix - 312k] [Normal - JPEG: 1002 x 800 pix - 840k] Caption : PR Photo 09c/02 shows the new VIMOS instrument on one of the Nasmyth platforms of the 8.2-m VLT MELIPAL telescope at Paranal. VIMOS is installed on the Nasmyth "Focus B" platform of the 8.2-m VLT MELIPAL telescope, cf. PR Photo 09c/02 . It may be compared to four multi-mode instruments of the FORS-type (cf. ESO PR 14/98 ), joined in one stiff structure. The construction of VIMOS has involved the production of large and complex optical elements and their integration in more than 30 remotely controlled, finely moving functions in the instrument. In the configuration employed for the "first light", VIMOS made use of two of its four channels. The two others will be put into operation in the next commissioning period during the coming months. However, VIMOS is already now the most efficient multi-object spectrograph in the world , with an equivalent (accumulated) slit length of up to 70 arcmin on the sky. VIMOS has a field-of-view as large as half of the full moon (14 x 16 arcmin 2 for the four quadrants), the largest sky field to be imaged so far by the VLT. It has excellent sensitivity in the blue region of the spectrum (about 60% more efficient than any other similar instruments in the ultraviolet band), and it is also very sensitive in all other visible spectral regions, all the way to the red limit. But the absolutely unique feature of VIMOS is its capability to take large numbers of spectra simultaneously , leading to exceedingly efficient use of the observing time. Up to about 1000 objects can be observed in a single exposure in multi-slit mode. And no less than 6400 spectra can be recorded with the Integral Field Unit , in which a closely packed fibre optics bundle can simultaneously observe a continuous sky area measuring no less than 56 x 56 arcsec 2. A dedicated machine, the Mask Manufacturing Unit (MMU) , cuts the slits for the entrance apertures of the spectrograph. 
The laser is capable of cutting 200 slits in less than 15 minutes. This facility was put into operation at Paranal by the VIRMOS Consortium already in August 2000 and has since been extensively used for observations with the FORS2 instrument; more details are available in ESO PR 19/99. Fast start-up of VIMOS at Paranal ESO PR Photo 09d/02 ESO PR Photo 09d/02 [Preview - JPEG: 473 x 400 pix - 280k] [Normal - JPEG: 946 x 1209 pix - 728k] ESO PR Photo 09e/02 ESO PR Photo 09e/02 [Preview - JPEG: 400 x 438 pix - 176k] [Normal - JPEG: 800 x 876 pix - 664k] Caption : PR Photo 09d/02 : The VIRMOS team in the MELIPAL control room, moments after "First Light" on February 26, 2002. From left to right: Oreste Caputi, Marco Scodeggio, Giovanni Sciarretta , Olivier Le Fevre, Sylvie Brau-Nogue, Christian Lucuix, Bianca Garilli, Markus Kissler-Patig (in front), Xavier Reyes, Michel Saisse, Luc Arnold and Guido Mancini . PR Photo 09e/02 : The spiral galaxy NGC 5364 was the first object to be observed by VIMOS. This false-colour near-infrared, raw "First Light" photo shows the extensive spiral arms. Technical information about this photo is available below. VIMOS was shipped from Observatoire de Haute-Provence (France) at the end of 2001, and reassembled at Paranal during a first period in January 2002. From mid-February, the instrument was made ready for installation on the VLT MELIPAL telescope; this happened on February 24, 2002. VIMOS saw "First Light" just two days later, on February 26, 2000, cf. PR Photo 09e/02 . During the same night, a number of excellent images were obtained of various objects, demonstrating the fine capabilities of the instrument in the "direct imaging"-mode. The first spectra were successfully taken during the night of March 2 - 3, 2002 . The slit masks that were used on this occasion were prepared with dedicated software that also optimizes the object selection, cf. PR Photo 09k/02 , and were then cut with the laser machine. From the first try on, the masks have been well aligned on the sky objects. The first observations with large numbers of spectra were obtained shortly thereafter. First accomplishments Images of nearby galaxies, clusters of galaxies, and distant galaxy fields were among the first to be obtained, using the VIMOS imaging mode and demonstrating the excellent efficiency of the instrument, various examples are shown below. The first observations of multi-spectra were performed in a selected sky field in which many faint galaxies are present; it is known as the "VIRMOS-VLT Deep Survey Field at 1000+02". Thanks to the excellent sensitivity of VIMOS, the spectra of galaxies as faint as (red) magnitude R = 23 (i.e. over 6 million times fainter than what can be perceived with the unaided eye) are visible on exposures lasting only 15 minutes. Some of the first observations with the Integral Field Unit were made of the core of the famous Antennae Galaxies (NGC 4038/39) . They will form the basis for a detailed map of the strong emission produced by the current, dramatic collision of the two galaxies. First Images and Spectra from VIMOS - a Gallery The following photos are from a collection of the first images and spectra obtained with VIMOS . See also PR Photos 09a/02 , 09b/02 and 09e/02 , reproduced above. Technical information about all of them is available below. 
Caption: PR Photo 09f/02: The Crab Nebula (Messier 1), as observed by VIMOS. This well-known object is the remnant of a stellar explosion in the year 1054.
Caption: PR Photo 09g/02: VIMOS photo of NGC 2613, a spiral galaxy that resembles our own Milky Way.
Caption: PR Photo 09h/02: Messier 100 is one of the largest and brightest spiral galaxies in the sky.
Caption: PR Photo 09i/02: The cluster of galaxies ACO 3341 is located at a distance of about 300 million light-years (redshift z = 0.037), i.e., comparatively nearby in cosmological terms. It contains a large number of galaxies of different size and brightness that are bound together by gravity.
Caption: PR Photo 09j/02: The distant cluster of galaxies MS 1008.1-1224 is some 3 billion light-years distant (redshift z = 0.301). The galaxies in this cluster - that we observe as they were 3 billion years ago - are different from galaxies in our neighborhood; their stellar populations, on the average, are younger.
Caption: PR Photo 09k/02: Design of a Mask for Multi-Object Spectroscopy (MOS) observations with VIMOS. The mask serves to block, as far as possible, unwanted background light from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere). During the set-up process for multi-object observations, the VIMOS software optimizes the position of the individual slits in the mask (one for each object for which a spectrum will be obtained) before these are cut. The photo shows an example of this fitting process, with the slit contours superposed on a short pre-exposure of the sky field to be observed.
Caption: PR Photo 09l/02: First Multi-Object Spectroscopy (MOS) observations with VIMOS; enlargement of a small part of the field shown in PR Photo 09b/02. The light from each galaxy passes through the dedicated slit in the mask (see PR Photo 09k/02) and produces a spectrum on the detector. Each vertical rectangle contains the spectrum of one galaxy that is located several billion light-years away. The horizontal lines are the strong emission from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere), while the vertical traces are the spectral signatures of the galaxies. The full field contains the spectra of over 220 galaxies that were observed simultaneously, illustrating the great efficiency of this technique. Later, about 1000 spectra will be obtained in one exposure.
Caption: PR Photo 09m/02: Obtained with the Integral Field Spectroscopy mode of VIMOS. In one single exposure, more than 3000 spectra were taken of the central area of the Antennae Galaxies (PR Photo 09a/02).
Caption: PR Photo 09n/02: An enlargement of a small area in PR Photo 09m/02. This observation allows mapping of the distribution of elements like hydrogen (H) and sulphur (S II), for which the signatures are clearly identified in these spectra. The wavelength increases towards the top (arrow).
Notes [1]: This is a joint Press Release of ESO, Centre National de la Recherche Scientifique (CNRS) in France, and Consiglio Nazionale delle Ricerche (CNR) and Istituto Nazionale di Astrofisica (INAF) in Italy. [2]: In astronomy, the redshift denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy gives a direct estimate of the apparent recession velocity as caused by the universal expansion. Since the expansion rate increases with distance, the velocity is itself a function (the Hubble relation) of the distance to the object.
Technical information about the photos
PR Photo 09a/02: Composite VRI image of NGC 4038/39, obtained on 26 February 2002, in a bright sky (full moon). Individual exposures of 60 sec each; image quality 0.6 arcsec FWHM; the field measures 3.5 x 3.5 arcmin^2. North is up and East is left.
PR Photo 09b/02: MOS spectra obtained with two quadrants totalling 221 slits + 6 reference objects (stars placed in square holes to ensure a correct alignment). Exposure time 15 min; LR(red) grism. This is the raw (unprocessed) image of the spectra.
PR Photo 09e/02: A 60 sec i exposure of NGC 5364 on February 26, 2002; image quality 0.6 arcsec FWHM; full moon; field 3.5 x 3.5 arcmin^2; North is up and East is left.
PR Photo 09f/02: Composite VRI image of Messier 1, obtained on March 4, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin^2; North is up and East is left.
PR Photo 09g/02: Composite VRI image of NGC 2613, obtained on February 28, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin^2; North is up and East is left.
PR Photo 09h/02: Composite VRI image of Messier 100, obtained on March 3, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin^2; North is up and East is left.
PR Photo 09i/02: R-band image of galaxy cluster ACO 3341, obtained on March 4, 2002. Exposure 300 sec; image quality 0.5 arcsec FWHM; field 7 x 7 arcmin^2; North is up and East is left.
PR Photo 09j/02: Composite VRI image of the distant cluster of galaxies MS 1008.1-1224. The individual exposures lasted 300 sec; image quality 0.8 arcsec FWHM; field 5 x 3 arcmin^2; North is to the right and East is up.
PR Photo 09k/02: Mask design made with the VMMPS tool, overlaying a pre-image. The selected objects are seen at the centre of the yellow squares, where a 1 arcsec slit is cut along the spatial X-axis. The rectangles in white represent the dispersion in wavelength of the spectra along the Y-axis. Masks are cut with the Mask Manufacturing Unit (MMU) built by the VIRMOS Consortium.
PR Photo 09l/02: Enlargement of a small area of PR Photo 09b/02.
PR Photo 09m/02: Spectra of the central area of NGC 4038/39, obtained with the Integral Field Unit on February 26, 2002. The exposure lasted 5 min and was made with the low resolution red grating.
PR Photo 09n/02: Zoom-in on a small area of PR Photo 09m/02. The strong emission lines of hydrogen (H-alpha) and ionized sulphur (S II) are seen.
Fu, C.Y.; Petrich, L.I.
1997-12-30
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
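The pipeline in this abstract (decimate, compress, transmit, decompress, interpolate, sharpen) can be illustrated with off-the-shelf tools. The following Python sketch uses Pillow and a hypothetical input file name; it is a minimal stand-in for the patent's specific decimation, interpolation, and edge-sharpening techniques, not a reproduction of them.

```python
# Minimal sketch of the decimate-compress-transmit-reconstruct pipeline,
# assuming a grayscale input and standard Pillow operations.
from io import BytesIO
from PIL import Image, ImageFilter

def transmit_reduced(img: Image.Image, factor: int = 2, quality: int = 75) -> bytes:
    """Decimate the image, then JPEG-compress the reduced version."""
    w, h = img.size
    reduced = img.resize((w // factor, h // factor), Image.LANCZOS)  # 2D decimation
    buf = BytesIO()
    reduced.save(buf, format="JPEG", quality=quality)                # predefined compression
    return buf.getvalue()                                            # bytes sent over the link

def reconstruct(payload: bytes, original_size: tuple[int, int]) -> Image.Image:
    """Decompress, interpolate back to full size, then sharpen edges."""
    reduced = Image.open(BytesIO(payload))                    # inverse of the compression
    upsampled = reduced.resize(original_size, Image.BICUBIC)  # interpolation to original array size
    return upsampled.filter(ImageFilter.SHARPEN)              # contour/edge enhancement

if __name__ == "__main__":
    src = Image.open("example.png").convert("L")   # hypothetical input file
    payload = transmit_reduced(src)
    reconstruct(payload, src.size).save("reconstructed.jpg")
```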
Tersmette, Derek Gideon; Engberts, Dirk Peter
2017-01-01
The Committee for Medical Ethics (CME) of Leiden University Medical Center (LUMC) was established as the first medical ethics reviewing committee (MREC) in the Netherlands. In the period 2000-2010 the CME received 2,162 protocols for review. Some of these protocols were never approved. Until now, there has existed neither an overview of these failed protocols nor an overview of the reasons for their failure. This report draws on data from the digital database, the physical archives, and the minutes of the meetings of the CME. Additional information has been obtained from the Central Committee on Research involving Human Subjects (CCRH) and survey-based research. Protocols were itemized based on characteristic features and their reviewing procedures were analyzed. In total, 1,952 out of 2,162 research protocols submitted during 2000-2010 (90.3%) were approved by the CME; 210 of 2,162 protocols (9.7%) were not approved. Of these 210 protocols, 177 failed due to reasons not related to CME reviewing. In 15 cases CME reviewing led to protocol failure, while another 10 protocols were rejected outright. Eight of the 210 submitted protocols without approval had been conducted prior to submission. In the aforementioned period, little protocol failure occurred. For the most part, protocol failure was caused by problems that are not CME related. This type of failure has several identifiable factors, none of which have anything to do with the ethical reviewing procedure by the CME. A mere 1.2% of protocols failed due to ethical review. Unacceptable burden and risks to the subject and an inadequate methodology are the most common reasons for this CME-related protocol failure.
Precision of Four Acoustic Bone Measurement Devices
NASA Technical Reports Server (NTRS)
Miller, Christopher; Rianon, Nahid; Feiveson, Alan; Shackelford, Linda; LeBlanc, Adrian
2000-01-01
Though many studies have quantified the precision of various acoustic bone measurement devices, it is difficult to directly compare the results among the studies, because they used disparate subject pools, did not specify the estimation methodology, or did not use consistent definitions for various precision characteristics. In this study, we used a repeated measures design protocol to directly determine the precision characteristics of four acoustic bone measurement devices: the Mechanical Response Tissue Analyzer (MRTA), the UBA-575+, the SoundScan 2000 (S2000), and the Sahara Ultrasound Bone Analyzer. Ten men and ten women were scanned on all four devices by two different operators at five discrete time points: Week 1, Week 2, Week 3, Month 3 and Month 6. The percent coefficient of variation (%CV) and standardized coefficient of variation were computed for the following precision characteristics: interoperator effect, operator-subject interaction, short-term error variance, and long-term drift. The MRTA had high interoperator errors for its ulnar and tibial stiffness measures and a large long-term drift in its tibial stiffness measurement. The UBA-575+ exhibited large short-term error variances and long-term drift for all three of its measurements. The S2000's tibial speed of sound measurement showed a high short-term error variance and a significant operator-subject interaction but very good values (less than 1%) for the other precision characteristics. The Sahara seemed to have the best overall performance, but was hampered by a large %CV for short-term error variance in its broadband ultrasound attenuation measure.
NASA Astrophysics Data System (ADS)
Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.
2002-05-01
A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored in either (a) a pre-rendered format, corresponding to a photographic print, or (b) an un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and the associated header data (metadata) required by a user of the results of an electronically recorded diagnostic procedure depend on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end-user applications, such as simple report text viewing and display of a selected image, are not so demanding, and generic image formats such as JPEG are sometimes used. However, these formats lack some basic identification information. In this paper we make specific proposals for minimal extensions to generic image metadata of value in various domains, which enable safe use in two simple healthcare end-user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink, and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).
A JPEG backward-compatible HDR image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2012-10-01
High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate wide spread of HDR usage, the backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms were developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of perceived quality of the tone-mapped LDR images on environmental parameters and image content. Based on the results of subjective tests, it proposes to extend JPEG file format, as the most popular image format, in a backward compatible manner to also deal with HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates efficiency of a simple implementation of this framework when compared to the state of the art HDR image compression.
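As a rough illustration of the backward-compatible idea, the sketch below tone-maps a linear HDR array with a simple global operator, stores the legacy-viewable layer as an ordinary JPEG, and keeps a multiplicative residual that an HDR-aware decoder can reapply. The actual proposal embeds the enhancement data inside the JPEG file itself (application markers) and evaluates studied tone-mapping operators; the fixed operator and sidecar residual here are simplifying assumptions.

```python
# Toy sketch of a backward-compatible HDR layer on top of JPEG, assuming a
# nonnegative float32 radiance array. A legacy viewer decodes only the JPEG.
from io import BytesIO
import numpy as np
from PIL import Image

def tone_map(hdr: np.ndarray) -> np.ndarray:
    """Simple global (Reinhard-style) operator producing an 8-bit viewable image."""
    ldr = hdr / (1.0 + hdr)
    return np.clip(ldr * 255.0, 0, 255).astype(np.uint8)

def encode(hdr: np.ndarray, quality: int = 90):
    """Return (legacy JPEG bytes, multiplicative residual for HDR reconstruction)."""
    buf = BytesIO()
    Image.fromarray(tone_map(hdr)).save(buf, format="JPEG", quality=quality)
    decoded = np.asarray(Image.open(BytesIO(buf.getvalue())), dtype=np.float32) / 255.0
    base = decoded / np.maximum(1e-3, 1.0 - decoded)   # approximate inverse of the tone map
    ratio = hdr / np.maximum(1e-6, base)               # enhancement data a legacy decoder ignores
    return buf.getvalue(), ratio.astype(np.float16)

def decode_hdr(jpeg_bytes: bytes, ratio: np.ndarray) -> np.ndarray:
    """HDR-aware decoding: rebuild the base layer and reapply the residual."""
    decoded = np.asarray(Image.open(BytesIO(jpeg_bytes)), dtype=np.float32) / 255.0
    base = decoded / np.maximum(1e-3, 1.0 - decoded)
    return base * ratio.astype(np.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hdr = rng.gamma(shape=2.0, scale=2.0, size=(64, 64)).astype(np.float32)  # synthetic radiance map
    jpeg_bytes, residual = encode(hdr)
    err = np.mean(np.abs(decode_hdr(jpeg_bytes, residual) - hdr) / (hdr + 1e-6))
    print("mean relative reconstruction error:", float(err))
```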
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihaljevic, Miodrag J.
2007-05-15
It is shown that the security, against known-plaintext attacks, of the Yuen 2000 (Y00) quantum-encryption protocol can be considered via the wire-tap channel model assuming that the heterodyne measurement yields the sample for security evaluation. Employing the results reported on the wire-tap channel, a generic framework is proposed for developing secure Y00 instantiations. The proposed framework employs a dedicated encoding which together with inherent quantum noise at the attacker's side provides Y00 security.
Recce imagery compression options
NASA Astrophysics Data System (ADS)
Healy, Donald J.
1995-09-01
The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
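The idea of losslessly packing DPCM deltas can be sketched in a few lines. The example below computes first-order horizontal deltas for a synthetic 8-bit frame and compresses them with zlib (an LZ77/Lempel-Ziv-family coder standing in for the Huffman, LZW, or Rice coders compared in the paper).

```python
# Rough sketch: DPCM deltas of an 8-bit frame are small on smooth imagery,
# so a lossless coder packs them more tightly than the raw pixels.
import zlib
import numpy as np

def dpcm_deltas(frame: np.ndarray) -> np.ndarray:
    """Horizontal first-order prediction: delta = pixel minus previous pixel in the row."""
    # Prepending 0 makes the first delta of each row the row's seed pixel value.
    return np.diff(frame.astype(np.int16), axis=1, prepend=0)

def lossless_size(data: np.ndarray) -> int:
    """Size in bytes after lossless (Lempel-Ziv-family) packing."""
    return len(zlib.compress(data.tobytes(), level=9))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Smooth synthetic imagery: a ramp plus mild noise, so deltas cluster near zero.
    frame = (np.linspace(0, 255, 512)[None, :]
             + rng.normal(0, 2, (512, 512))).clip(0, 255).astype(np.uint8)
    print("raw bytes:           ", frame.nbytes)
    print("zlib on pixels:      ", lossless_size(frame))
    print("zlib on DPCM deltas: ", lossless_size(dpcm_deltas(frame)))
```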
Limited distortion in LSB steganography
NASA Astrophysics Data System (ADS)
Kim, Younhee; Duric, Zoran; Richards, Dana
2006-02-01
It is well known that all information hiding methods that modify the least significant bits introduce distortions into the cover objects. Those distortions have been utilized by steganalysis algorithms to detect that the objects had been modified. It has been proposed that only coefficients whose modification does not introduce large distortions should be used for embedding. In this paper we propose an efficient algorithm for information hiding in the LSBs of JPEG coefficients. Our algorithm uses parity coding to choose the coefficients whose modifications introduce minimal additional distortion. We derive the expected value of the additional distortion as a function of the message length and the probability distribution of the JPEG quantization errors of cover images. Our experiments show close agreement between the theoretical prediction and the actual additional distortion.
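A toy version of parity-coded LSB embedding is sketched below. It assumes the quantized AC coefficients are already available as an integer array (extracting them from a real JPEG requires a dedicated library, not shown), and it substitutes a simple largest-magnitude rule for the paper's quantization-error-based choice of which coefficient to modify.

```python
# Toy parity-coded embedding: one message bit per group of nonzero coefficients.
# When the group parity already matches the bit, nothing is changed at all.
import numpy as np

def embed(coeffs: np.ndarray, bits: list[int], group: int = 4) -> np.ndarray:
    out = coeffs.copy()
    usable = np.flatnonzero(out != 0)           # zero coefficients are never touched
    for i, bit in enumerate(bits):
        idx = usable[i * group:(i + 1) * group]
        if len(idx) < group:
            raise ValueError("not enough coefficients for the message")
        if int(np.bitwise_and(out[idx], 1).sum()) % 2 != bit:
            # Parity is wrong: change the LSB of one coefficient in the group.
            # The paper picks the coefficient adding least distortion based on the
            # JPEG quantization-error statistics; as a stand-in we pick the
            # largest-magnitude coefficient and step it away from zero, so no
            # nonzero coefficient ever becomes zero (keeps extraction in sync).
            j = idx[np.argmax(np.abs(out[idx]))]
            out[j] += 1 if out[j] > 0 else -1
    return out

def extract(coeffs: np.ndarray, n_bits: int, group: int = 4) -> list[int]:
    usable = coeffs[coeffs != 0]
    return [int(np.bitwise_and(usable[i * group:(i + 1) * group], 1).sum()) % 2
            for i in range(n_bits)]

if __name__ == "__main__":
    cover = np.array([3, 0, -2, 5, 1, 0, 7, -4, 2, 6, -1, 8], dtype=np.int32)
    stego = embed(cover, [1, 0])
    print(extract(stego, 2))   # -> [1, 0]
```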
Atmospheric Science Data Center
2013-04-16
article title: Twilight in Antarctica ... SpectroRadiometer (MISR) instrument on board Terra. The Ross Ice Shelf and Transantarctic Mountains are illuminated by low Sun. MISR was ...
DOT National Transportation Integrated Search
2001-07-01
The National Highway Traffic Safety Administration's National Occupant Protection Use Survey (NOPUS) expanded its data collection protocols during October and November 2000 to obtain national estimates of driver cell phone use. The results of NOPUS f...
The effects of lossy compression on diagnostically relevant seizure information in EEG signals.
Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E
2013-01-01
This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. An automated seizure detection system (real-time EEG analysis for event detection) was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.
Evaluation of Algorithms for Compressing Hyperspectral Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit NASA is showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), who has an extensive heritage in HSI spectral compression and Mapping Science (MSI) for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently in the process of evaluating these compression algorithms using statistical analysis and NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.
Bit-Grooming: Shave Your Bits with Razor-sharp Precision
NASA Astrophysics Data System (ADS)
Zender, C. S.; Silver, J.
2017-12-01
Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision: those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to the competing methods Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
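A minimal sketch of the grooming operation itself: trailing mantissa bits of float32 values are alternately zeroed ("shaved") and set to one ("set"), which keeps the quantization statistically unbiased while creating runs that a downstream lossless coder can exploit. The real implementation derives the number of retained bits from the requested decimal precision and leaves special values untouched; the fixed bit count here is an assumption.

```python
# Sketch of bit grooming on float32 data: alternately shave and set the
# least significant mantissa bits that carry only false precision.
import numpy as np

def bit_groom(data: np.ndarray, keep_mantissa_bits: int = 12) -> np.ndarray:
    """Groom the 23-bit float32 mantissa down to keep_mantissa_bits significant bits."""
    drop = 23 - keep_mantissa_bits
    bits = data.astype(np.float32).view(np.uint32)
    shave_mask = np.uint32((0xFFFFFFFF >> drop) << drop)  # zero the trailing bits
    set_mask = np.uint32((1 << drop) - 1)                 # or force them to one
    groomed = bits.copy()
    groomed[0::2] &= shave_mask    # even-indexed values: shave (round toward zero)
    groomed[1::2] |= set_mask      # odd-indexed values: set (round away from zero)
    return groomed.view(np.float32)

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 8, dtype=np.float32)
    print(x)
    print(bit_groom(x, keep_mantissa_bits=8))  # groomed values compress far better losslessly
```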
Real-time access of large volume imagery through low-bandwidth links
NASA Astrophysics Data System (ADS)
Phillips, James; Grohs, Karl; Brower, Bernard; Kelly, Lawrence; Carlisle, Lewis; Pellechia, Matthew
2010-04-01
Providing current, time-sensitive imagery and geospatial information to deployed tactical military forces or first responders continues to be a challenge. This challenge is compounded through rapid increases in sensor collection volumes, both with larger arrays and higher temporal capture rates. Focusing on the needs of these military forces and first responders, ITT developed a system called AGILE (Advanced Geospatial Imagery Library Enterprise) Access as an innovative approach based on standard off-the-shelf techniques to solving this problem. The AGILE Access system is based on commercial software called Image Access Solutions (IAS) and incorporates standard JPEG 2000 processing. Our solution system is implemented in an accredited, deployable form, incorporating a suite of components, including an image database, a web-based search and discovery tool, and several software tools that act in concert to process, store, and disseminate imagery from airborne systems and commercial satellites. Currently, this solution is operational within the U.S. Government tactical infrastructure and supports disadvantaged imagery users in the field. This paper presents the features and benefits of this system to disadvantaged users as demonstrated in real-world operational environments.
NASA Astrophysics Data System (ADS)
2000-06-01
The Republic of Portugal will become the ninth member state of the European Southern Observatory (ESO) [1]. Today, during a ceremony at the ESO Headquarters in Garching (Germany), a corresponding Agreement was signed by the Portuguese Minister of Science and Technology, José Mariano Gago and the ESO Director General, Catherine Cesarsky , in the presence of other high officials from Portugal and the ESO member states (see Video Clip 05/00 below). Following subsequent ratification by the Portuguese Parliament of the ESO Convention and the associated protocols [2], it is foreseen that Portugal will formally join this organisation on January 1, 2001. Uniting European Astronomy ESO PR Photo 16/00 ESO PR Photo 16/00 [Preview - JPEG: 400 x 405 pix - 160k] [Normal - JPEG: 800 x 809 pix - 408k] Caption : Signing of the Portugal-ESO Agreement on June 27, 2000, at the ESO Headquarters in Garching (Germany). At the table, the ESO Director General, Catherine Cesarsky , and the Portuguese Minister of Science and Technology, José Mariano Gago . In his speech, the Portuguese Minister of Science and Technology, José Mariano Gago , stated that "the accession of Portugal to ESO is the result of a joint effort by ESO and Portugal during the last ten years. It was made possible by the rapid Portuguese scientific development and by the growth and internationalisation of its scientific community." He continued: "Portugal is fully committed to European scientific and technological development. We will devote our best efforts to the success of ESO". Catherine Cesarsky , ESO Director General since 1999, warmly welcomed the Portuguese intention to join ESO. "With the accession of their country to ESO, Portuguese astronomers will have great opportunities for working on research programmes at the frontiers of modern astrophysics." "This is indeed a good time to join ESO", she added. "The four 8.2-m VLT Unit Telescopes with their many first-class instruments are nearly ready, and the VLT Interferometer will soon follow. With a decision about the intercontinental millimetre-band ALMA project expected next year and the first concept studies for gigantic optical/infrared telescopes like OWL now well under way at ESO, there is certainly no lack of perspectives, also for coming generations of European astronomers!" Portuguese astronomy: a decade of progress The beginnings of the collaboration between Portugal and ESO, now culminating in the imminent accession of that country to the European research organisation, were almost exactly ten years ago. On July 10, 1990, the Republic of Portugal and ESO signed a Co-operation Agreement , aimed at full Portuguese membership of the ESO organisation within the next decade. During the interim period, Portuguese astronomers were granted access to ESO facilities while the Portuguese government would provide support towards the development of astronomy and the associated infrastructure in this country. A joint Portuguese/ESO Advisory Body was set up to monitor the development of Portuguese astronomy and its interaction with ESO. Over the years, an increasing number of measures to strengthen the Portuguese research infrastructure within astrophysics and related fields were proposed and funded. More and more, mostly young Portuguese astronomers began to make use of ESO's facilities at the La Silla observatory and recently, of the Very Large Telescope (VLT) at Paranal. 
Now, ten years later, the Portuguese astronomical community is the youngest in Europe with more than 90% of its PhD's awarded during the last eight years. As expected, the provisional access to ESO telescopes - especially the Very Large Telescope (VLT) with its suite of state-of-the-art instruments for observations at wavelengths ranging from the UV to the mid-infrared - has proven to be a great incentive to the Portuguese scientists. As a clear demonstration of these positive developments, a very successful Workshop entitled "Portugal - ESO - VLT" was held in Lisbon on April 17-18, 2000. It was primarily directed towards young Portuguese scientists and served to inform them about the ESO Very Large Telescope (VLT) and the steadily evolving, exciting research possibilities with this world-class facility. Notes [1]: Current ESO member countries are Belgium, Denmark, France, Germany, Italy, the Netherlands, Sweden and Switzerland. [2]: The ESO Convention was established in 1962 and specifies the goals of ESO and the means to achieve these, e.g., "The Governments of the States parties to this convention... desirous of jointly creating an observatory equipped with powerful instruments in the Southern hemisphere and accordingly promoting and organizing co-operation in astronomical research..." (from the Preamble to the ESO Convention). Video Clip from the Signing Ceremony
Siquier, B; Sánchez-Alvarez, J; García-Mendez, E; Sabriá, M; Santos, J; Pallarés, R; Twynholm, M; Dal-Ré, R
2006-03-01
This randomized, double-blind, non-inferiority trial evaluated the efficacy and safety of pharmacokinetically enhanced amoxicillin/clavulanate 2000/125 mg twice daily versus amoxicillin/clavulanate 875/125 mg three times daily, both given orally for 7 or 10 days, in the treatment of adults with community-acquired pneumonia in Spain, a country with a high prevalence of penicillin-resistant Streptococcus pneumoniae. Following 2:1 randomization, 566 patients (intent-to-treat population) received either amoxicillin/clavulanate 2000/125 mg (n = 374) or amoxicillin/clavulanate 875/125 mg (n = 192). Among the patients who did not deviate from the protocol (clinical per-protocol population), clinical success at day 21-28 post-therapy (test of cure; primary efficacy endpoint) was 92.4% (266/288) for amoxicillin/clavulanate 2000/125 mg and 91.2% (135/148) for amoxicillin/clavulanate 875/125 mg (treatment difference, 1.1; 95% confidence interval, -4.4, 6.6). Bacteriological success at test of cure in the bacteriology per-protocol population was 90.8% (79/87) with amoxicillin/clavulanate 2000/125 mg and 86.0% (43/50) with amoxicillin/clavulanate 875/125 mg (treatment difference 4.8; 95% confidence interval, -6.6, 16.2). At test of cure, amoxicillin/clavulanate 2000/125 mg was clinically and bacteriologically effective against 7/7 penicillin-resistant Streptococcus pneumoniae (MIC > or = 2 mg/L) isolates (including three amoxicillin non-susceptible strains) and amoxicillin/clavulanate 875/125 mg against 5/5 isolates (including one amoxicillin non-susceptible strain). Both treatment regimens were well tolerated. Amoxicillin/clavulanate 2000/125 mg was at least as effective clinically and as safe as amoxicillin/clavulanate 875/125 mg in the treatment of community-acquired pneumonia in adults in a country with a high prevalence of penicillin-resistant S. pneumoniae and has a more convenient twice daily posology.
NASA Astrophysics Data System (ADS)
Osada, Masakazu; Tsukui, Hideki
2002-09-01
Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging because these modalities produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 in resolution, with 24-bit color, and because sufficient image quality is required for clinical review. We have developed a PACS that is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and we investigate a suitable compression method and compression ratio for clinical image review. Results show that clinicians require the capability for frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in one frame. To satisfy this requirement, we chose Motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression ratio, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between the original and 1:20 lossy compressed JPEG images, although this level was still judged acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data volume and cost while maintaining quality for clinical review.
Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.
Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen
2018-07-01
Millions of user-generated images are uploaded to social media sites like Facebook daily, which translate to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation mainly during download of requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then during download, the system exploits known signal priors-sparsity prior and graph-signal smoothness prior-for reverse mapping to recover original fine quantization bin indices, with either deterministic guarantee (lossless mode) or statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
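The storage-versus-download trade-off rests on scalar requantization: a coefficient stored in a coarser bin may correspond to several of the original fine bins, and the signal priors are what resolve that ambiguity at download time. The toy sketch below (with made-up step sizes) only enumerates the candidate fine bins; it does not implement the sparsity or graph-smoothness priors.

```python
# Toy illustration of the re-quantization ambiguity that reverse mapping must resolve,
# assuming scalar uniform quantization with a fine step at upload and a coarse step for storage.

def requantize(fine_bin: int, q_fine: float, q_coarse: float) -> int:
    """Map a fine quantization bin index to the coarser bin used for storage."""
    value = fine_bin * q_fine              # reconstruction of the uploaded coefficient
    return round(value / q_coarse)

def candidate_fine_bins(coarse_bin: int, q_fine: float, q_coarse: float) -> list[int]:
    """All fine bins whose reconstruction falls inside the coarse bin's interval."""
    lo = (coarse_bin - 0.5) * q_coarse
    hi = (coarse_bin + 0.5) * q_coarse
    first = int(-(-lo // q_fine))          # ceil(lo / q_fine)
    last = int(hi // q_fine)
    return [b for b in range(first, last + 1) if lo <= b * q_fine < hi]

if __name__ == "__main__":
    q_fine, q_coarse = 4.0, 10.0
    stored = requantize(fine_bin=7, q_fine=q_fine, q_coarse=q_coarse)     # 7*4=28 -> coarse bin 3
    print("stored coarse bin:", stored)
    print("fine-bin candidates:", candidate_fine_bins(stored, q_fine, q_coarse))  # -> [7, 8]
```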
Generating Animated Displays of Spacecraft Orbits
NASA Technical Reports Server (NTRS)
Candey, Robert M.; Chimiak, Reine A.; Harris, Bernard T.
2005-01-01
Tool for Interactive Plotting, Sonification, and 3D Orbit Display (TIPSOD) is a computer program for generating interactive, animated, four-dimensional (space and time) displays of spacecraft orbits. TIPSOD utilizes the programming interface of the Satellite Situation Center Web (SSCWeb) services to communicate with the SSC logic and database by use of the open protocols of the Internet. TIPSOD is implemented in Java 3D and effects an extension of the preexisting SSCWeb two-dimensional static graphical displays of orbits. Orbits can be displayed in any or all of the following seven reference systems: true-of-date (an inertial system), J2000 (another inertial system), geographic, geomagnetic, geocentric solar ecliptic, geocentric solar magnetospheric, and solar magnetic. In addition to orbits, TIPSOD computes and displays Sibeck's magnetopause and Fairfield's bow-shock surfaces. TIPSOD can be used by the scientific community as a means of projection or interpretation. It also has potential as an educational tool.
NASA Astrophysics Data System (ADS)
2005-09-01
Large Population of Galaxies Found in the Young Universe with ESO's VLT
The Universe was a more fertile place soon after it was formed than has previously been suspected. A team of French and Italian astronomers [1] made indeed the surprising discovery of a large and unknown population of distant galaxies observed when the Universe was only 10 to 30% its present age. ESO PR Photo 29a/05: New Population of Distant Galaxies. ESO PR Photo 29b/05: Average Spectra of Distant Galaxies. This breakthrough is based on observations made with the Visible Multi-Object Spectrograph (VIMOS) as part of the VIMOS VLT Deep Survey (VVDS). The VVDS started early 2002 on Melipal, one of the 8.2-m telescopes of ESO's Very Large Telescope Array [2]. In a total sample of about 8,000 galaxies selected only on the basis of their observed brightness in red light, almost 1,000 bright and vigorously star forming galaxies were discovered that were formed between 9 and 12 billion years ago (i.e. about 1,500 to 4,500 million years after the Big Bang). "To our surprise," says Olivier Le Fèvre, from the Laboratoire d'Astrophysique de Marseille (France) and co-leader of the VVDS project, "this is two to six times higher than had been found previously. These galaxies had been missed because previous surveys had selected objects in a much more restrictive manner than we did. And they did so to accommodate the much lower efficiency of the previous generation of instruments." While observations and models have consistently indicated that the Universe had not yet formed many stars in the first billion years of cosmic time, the discovery announced today by scientists calls for a significant change in this picture. The astronomers indeed find that stars formed two to three times faster than previously estimated. "These observations will demand a profound reassessment of our theories of the formation and evolution of galaxies in a changing Universe", says Gianpaolo Vettolani, the other co-leader of the VVDS project, working at INAF-IRA in Bologna (Italy). These results are reported in the September 22 issue of the journal Nature (Le Fèvre et al., "A large population of galaxies 9 to 12 billion years back in the life of the Universe").
Governance and Trust in Higher Education
ERIC Educational Resources Information Center
Vidovich, Lesley; Currie, Jan
2011-01-01
The adoption of more corporate models of governance is a contemporary trend in higher education. In the early 2000s, the Australian Government legislated national governance protocols for universities, using the policy lever of financial sanctions. These more corporate-style governance protocols followed similar changes in the UK, consistent with…
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT [1], along with the correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands in the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
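The sparsity observation that motivates the coder can be checked with a short experiment. The sketch below (using PyWavelets on a synthetic, strongly inter-band-correlated cube; all sizes and thresholds are arbitrary assumptions) applies a 3D multilevel DWT and reports what fraction of high-frequency coefficients falls below a small threshold, which is exactly the redundancy a SPIHT-style quadtree coder exploits.

```python
# Sketch: 3D wavelet decomposition of a synthetic multispectral cube and
# measurement of how sparse the high-frequency (detail) subbands are.
import numpy as np
import pywt

rng = np.random.default_rng(0)
bands, rows, cols = 16, 64, 64
base = rng.normal(size=(rows, cols)).cumsum(axis=0).cumsum(axis=1)  # smooth spatial structure
gains = np.linspace(0.5, 1.5, bands)[:, None, None]                 # strong inter-band correlation
cube = gains * base[None, :, :] + rng.normal(0, 0.01, (bands, rows, cols))

coeffs = pywt.wavedecn(cube, wavelet="db2", level=2)                # 3D multilevel DWT
approx, detail_sets = coeffs[0], coeffs[1:]

threshold = 0.01 * np.abs(approx).max()
for lvl, subbands in enumerate(detail_sets, start=1):
    for name, band in subbands.items():                             # e.g. 'aad', 'dda', ...
        frac = np.mean(np.abs(band) < threshold)
        print(f"detail set {lvl}, subband {name}: {frac:.1%} coefficients below threshold")
```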
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experience-based knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of an image is obtained with this model; then, the mapping between sparse codes and subjective quality scores is trained with least squares support vector machine (LS-SVM) regression, yielding a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the distortion types present in the database are: 227 JPEG2000 images, 233 JPEG images, 174 white-noise images, 174 Gaussian-blur images, and 174 fast-fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
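The learning stage reduces to regressing subjective scores from per-image features. The sketch below uses scikit-learn's epsilon-SVR as a stand-in for the LS-SVM named in the abstract, with synthetic feature vectors and DMOS values (both assumptions), and reports the Spearman rank correlation as the monotonicity check mentioned above.

```python
# Sketch of the regression stage: features (e.g. sparse-code statistics) -> DMOS.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(400, 32))                                 # hypothetical per-image features
dmos = 50 + features[:, :4].sum(axis=1) * 5 + rng.normal(0, 2, 400)   # synthetic quality scores

X_tr, X_te, y_tr, y_te = train_test_split(features, dmos, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("SROCC (monotonicity):", spearmanr(pred, y_te)[0])
```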
NASA Technical Reports Server (NTRS)
Critchfield, Anna R.; Zepp, Robert H.
2000-01-01
We propose that the user interact with the spacecraft as if the spacecraft were a file server, so that the user can select and receive data as files in standard formats (e.g., tables or images, such as jpeg) via the Internet. Internet technology will be used end-to-end from the spacecraft to authorized users, such as the flight operation team, and project scientists. The proposed solution includes a ground system and spacecraft architecture, mission operations scenarios, and an implementation roadmap showing migration from current practice to the future, where distributed users request and receive files of spacecraft data from archives or spacecraft with equal ease. This solution will provide ground support personnel and scientists easy, direct, secure access to their authorized data without cumbersome processing, and can be extended to support autonomous communications with the spacecraft.
NASA Astrophysics Data System (ADS)
1999-11-01
First Images from FORS2 at VLT KUEYEN on Paranal The first, major astronomical instrument to be installed at the ESO Very Large Telescope (VLT) was FORS1 ( FO cal R educer and S pectrograph) in September 1998. Immediately after being attached to the Cassegrain focus of the first 8.2-m Unit Telescope, ANTU , it produced a series of spectacular images, cf. ESO PR 14/98. Many important observations have since been made with this outstanding facility. Now FORS2 , its powerful twin, has been installed at the second VLT Unit Telescope, KUEYEN . It is the fourth major instrument at the VLT after FORS1 , ISAAC and UVES.. The FORS2 Commissioning Team that is busy installing and testing this large and complex instrument reports that "First Light" was successfully achieved already on October 29, 1999, only two days after FORS2 was first mounted at the Cassegrain focus. Since then, various observation modes have been carefully tested, including normal and high-resolution imaging, echelle and multi-object spectroscopy, as well as fast photometry with millisecond time resolution. A number of fine images were obtained during this work, some of which are made available with the present Press Release. The FORS instruments ESO PR Photo 40a/99 ESO PR Photo 40a/99 [Preview - JPEG: 400 x 345 pix - 203k] [Normal - JPEG: 800 x 689 pix - 563kb] [Full-Res - JPEG: 1280 x 1103 pix - 666kb] Caption to PR Photo 40a/99: This digital photo shows the twin instruments, FORS2 at KUEYEN (in the foreground) and FORS1 at ANTU, seen in the background through the open ventilation doors in the two telescope enclosures. Although they look alike, the two instruments have specific functions, as described in the text. FORS1 and FORS2 are the products of one of the most thorough and advanced technological studies ever made of a ground-based astronomical instrument. They have been specifically designed to investigate the faintest and most remote objects in the universe. They are "multi-mode instruments" that may be used in several different observation modes. FORS2 is largely identical to FORS1 , but there are a number of important differences. For example, it contains a Mask Exchange Unit (MXU) for laser-cut star-plates [1] that may be inserted at the focus, allowing a large number of spectra of different objects, in practice up to about 70, to be taken simultaneously. Highly sophisticated software assigns slits to individual objects in an optimal way, ensuring a great degree of observing efficiency. Instead of the polarimetry optics found in FORS1 , FORS2 has new grisms that allow the use of higher spectral resolutions. The FORS project was carried out under ESO contract by a consortium of three German astronomical institutes, the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. The participating institutes have invested a total of about 180 man-years of work in this unique programme. The photos below demonstrate some of the impressive possibilities with this new instrument. They are based on observations with the FORS2 standard resolution collimator (field size 6.8 x 6.8 armin = 2048 x 2048 pixels; 1 pixel = 0.20 arcsec). In addition, observations of the Crab pulsar demonstrate a new observing mode, high-speed photometry. 
Protostar HH-34 in Orion ESO PR Photo 40b/99 ESO PR Photo 40b/99 [Preview - JPEG: 400 x 444 pix - 220kb] [Normal - JPEG: 800 x 887 pix - 806kb] [Full-Res - JPEG: 2000 x 2217 pix - 3.6Mb] The Area around HH-34 in Orion ESO PR Photo 40c/99 ESO PR Photo 40c/99 [Preview - JPEG: 400 x 494 pix - 262kb] [Full-Res - JPEG: 802 x 991 pix - 760 kb] The HH-34 Superjet in Orion (centre) PR Photo 40b/99 shows a three-colour composite of the young object Herbig-Haro 34 (HH-34) , now in the protostar stage of evolution. It is based on CCD frames obtained with the FORS2 instrument in imaging mode, on November 2 and 6, 1999. This object has a remarkable, very complicated appearance that includes two opposite jets that ram into the surrounding interstellar matter. This structure is produced by a machine-gun-like blast of "bullets" of dense gas ejected from the star at high velocities (approaching 250 km/sec). This seems to indicate that the star experiences episodic "outbursts" when large chunks of material fall onto it from a surrounding disk. HH-34 is located at a distance of approx. 1,500 light-years, near the famous Orion Nebula , one of the most productive star birth regions. Note also the enigmatic "waterfall" to the upper left, a feature that is still unexplained. PR Photo 40c/99 is an enlargement of a smaller area around the central object. Technical information : Photo 40b/99 is based on a composite of three images taken through three different filters: B (wavelength 429 nm; Full-Width-Half-Maximum (FWHM) 88 nm; exposure time 10 min; here rendered as blue), H-alpha (centered on the hydrogen emission line at wavelength 656 nm; FWHM 6 nm; 30 min; green) and S II (centrered at the emission lines of inonized sulphur at wavelength 673 nm; FWHM 6 nm; 30 min; red) during a period of 0.8 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. N 70 Nebula in the Large Magellanic Cloud ESO PR Photo 40d/99 ESO PR Photo 40d/99 [Preview - JPEG: 400 x 444 pix - 360kb] [Normal - JPEG: 800 x 887 pix - 1.0Mb] [Full-Res - JPEG: 1997 x 2213 pix - 3.4Mb] The N 70 Nebula in the LMC ESO PR Photo 40e/99 ESO PR Photo 40e/99 [Preview - JPEG: 400 x 485 pix - 346kb] [Full-Res - JPEG: 986 x 1196 pix - 1.2Mb] The N70 Nebula in the LMC (detail) PR Photo 40d/99 shows a three-colour composite of the N 70 nebula. It is a "Super Bubble" in the Large Magellanic Cloud (LMC) , a satellite galaxy to the Milky Way system, located in the southern sky at a distance of about 160,000 light-years. This photo is based on CCD frames obtained with the FORS2 instrument in imaging mode in the morning of November 5, 1999. N 70 is a luminous bubble of interstellar gas, measuring about 300 light-years in diameter. It was created by winds from hot, massive stars and supernova explosions and the interior is filled with tenuous, hot expanding gas. An object like N70 provides astronomers with an excellent opportunity to explore the connection between the lifecycles of stars and the evolution of galaxies. Very massive stars profoundly affect their environment. They stir and mix the interstellar clouds of gas and dust, and they leave their mark in the compositions and locations of future generations of stars and star systems. PR Photo 40e/99 is an enlargement of a smaller area of this nebula. 
Technical information : Photos 40d/99 is based on a composite of three images taken through three different filters: B (429 nm; FWHM 88 nm; 3 min; here rendered as blue), V (554 nm; FWHM 111 nm; 3 min; green) and H-alpha (656 nm; FWHM 6 nm; 3 min; red) during a period of 1.0 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. The Crab Nebula in Taurus ESO PR Photo 40f/99 ESO PR Photo 40f/99 [Preview - JPEG: 400 x 446 pix - 262k] [Normal - JPEG: 800 x 892 pix - 839 kb] [Full-Res - JPEG: 2036 x 2269 pix - 3.6Mb] The Crab Nebula in Taurus ESO PR Photo 40g/99 ESO PR Photo 40g/99 [Preview - JPEG: 400 x 444 pix - 215kb] [Full-Res - JPEG: 817 x 907 pix - 485 kb] The Crab Nebula in Taurus (detail) PR Photo 40f/99 shows a three colour composite of the well-known Crab Nebula (also known as "Messier 1" ), as observed with the FORS2 instrument in imaging mode in the morning of November 10, 1999. It is the remnant of a supernova explosion at a distance of about 6,000 light-years, observed almost 1000 years ago, in the year 1054. It contains a neutron star near its center that spins 30 times per second around its axis (see below). PR Photo 40g/99 is an enlargement of a smaller area. More information on the Crab Nebula and its pulsar is available on the web, e.g. at a dedicated website for Messier objects. In this picture, the green light is predominantly produced by hydrogen emission from material ejected by the star that exploded. The blue light is predominantly emitted by very high-energy ("relativistic") electrons that spiral in a large-scale magnetic field (so-called syncrotron emission ). It is believed that these electrons are continuously accelerated and ejected by the rapidly spinning neutron star at the centre of the nebula and which is the remnant core of the exploded star. This pulsar has been identified with the lower/right of the two close stars near the geometric center of the nebula, immediately left of the small arc-like feature, best seen in PR Photo 40g/99 . Technical information : Photo 40f/99 is based on a composite of three images taken through three different optical filters: B (429 nm; FWHM 88 nm; 5 min; here rendered as blue), R (657 nm; FWHM 150 nm; 1 min; green) and S II (673 nm; FWHM 6 nm; 5 min; red) during periods of 0.65 arcsec (R, S II) and 0.80 (B) seeing, respectively. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. The High Time Resolution mode (HIT) of FORS2 ESO PR Photo 40h/99 ESO PR Photo 40h/99 [Preview - JPEG: 400 x 304 pix - 90kb] [Normal - JPEG: 707 x 538 pix - 217kb] Time Sequence of the Pulsar in the Crab Nebula ESO PR Photo 40i/99 ESO PR Photo 40i/99 [Preview - JPEG: 400 x 324 pix - 42kb] [Normal - JPEG: 800 x 647 pix - 87kb] Lightcurve of the Pulsar in the Crab Nebula In combination with the large light collecting power of the VLT Unit Telescopes, the high time resolution (25 nsec = 0.000000025 sec) of the ESO-developed FIERA CCD-detector controller opens a new observing window for celestial objects that undergo light intensity variations on very short time scales. 
A first implementation of this type of observing mode was tested with FORS2 during the first commissioning phase, by means of one of the most fascinating astronomical objects, the rapidly spinning neutron star in the Crab Nebula . It is also known as the Crab pulsar and is an exceedingly dense object that represents an extreme state of matter - it weighs as much as the Sun, but measures only about 30 km across. The result presented here was obtained in the so-called trailing mode , during which one of the rectangular openings of the Multi-Object Spectroscopy (MOS) assembly within FORS2 is placed in front of the lower end of the field. In this way, the entire surface of the CCD is covered, except the opening in which the object under investigation is positioned. By rotating this opening, some neighbouring objects (e.g. stars for alignment) may be observed simultaneously. As soon as the shutter is opened, the charges on the chip are progressively shifted upwards, one pixel at a time, until those first collected in the bottom row behind the opening have reached the top row. Then the entire CCD is read out and the digital data with the full image is stored in the computer. In this way, successive images (or spectra) of the object are recorded in the same frame, displaying the intensity variation with time during the exposure. For this observation, the total exposure lasted 2.5 seconds. During this time interval the image of the pulsar (and those of some neighbouring stars) were shifted 2048 times over the 2048 rows of the CCD. Each individual exposure therefore lasted exactly 1.2 msec (0.0012 sec), corresponding to a nominal time-resolution of 2.4 msec (2 pixels). Faster or slower time resolutions are possible by increasing or decreasing the shift and read-out rate [2]. In ESO PR Photo 40h/99 , the continuous lines in the top and bottom half are produced by normal stars of constant brightness, while the series of dots represents the individual pulses of the Crab pulsar, one every 33 milliseconds (i.e. the neutron star rotates around its axis 30 times per second). It is also obvious that these dots are alternatively brighter and fainter: they mirror the double-peaked profile of the light pulses, as shown in ESO PR Photo 40i/99 . In this diagramme, the time increases along the abscissa axis (1 pixel = 1.2 msec) and the momentary intensity (uncalibrated) is along the ordinate axis. One full revolution of the neutron star corresponds to the distance from one high peak to the next, and the diagramme therefore covers six consecutive revolutions (about 200 milliseconds). Following thorough testing, this new observing mode will allow to investigate the brightness variations of this and many other objects in great detail in order to gain new and fundamental insights in the physical mechanisms that produce the radiation pulses. In addition, it is foreseen to do high time resolution spectroscopy of rapidly varying phenomena. Pushing it to the limits with an 8.2-m telescope like KUEYEN will be a real challenge to the observers that will most certainly lead to great and exciting research projects in various fields of modern astrophysics. Technical information : The frame shown in Photo 40h/99 was obtained during a total exposure time of 2.5 sec without any optical filtre. During this time, the charges on the CCD were shifted over 2048 rows; each row was therefore exposed during 1.2 msec. 
The bright continuous line comes from the star next to the pulsar; the orientation was such that the "observation slit" was placed over two neighbouring stars. Preliminary data reduction: 11 pixels were added across the pulsar image to increase the signal-to-noise ratio and the background light from the Crab Nebula was subtracted for the same reason. Division by a brighter star (also background-subtracted, but not shown in the image) helped to reduce the influence of the Earth's atmosphere. Notes [1] The masks are produced by the Mask Manufacturing Unit (MMU) built by the VIRMOS Consortium for the VIMOS and NIRMOS instruments that will be installed at the VLT MELIPAL and YEPUN telescopes, respectively. [2] The time resolution achieved during the present test was limited by the maximum charge transfer rate of this particular CCD chip; in the future, FORS2 may be equipped with a new chip with a rate that is up to 20 times faster. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
A Draft Test Protocol for Detecting Possible Biohazards in Martian Samples Returned to Earth
NASA Technical Reports Server (NTRS)
Rummel, John D. (Editor); Race, Margaret S.; DeVincenzi, Donald L.; Schad, P. Jackson; Stabekis, Pericles D.; Viso, Michel; Acevedo, Sara E.
2002-01-01
This document presents the first complete draft of a protocol for detecting possible biohazards in Mars samples returned to Earth: it is the final product of the Mars Sample Handling Protocol Workshop Series, convened in 2000-2001 by NASA's Planetary Protection Officer. The goal of the five-workshop Series was to develop a comprehensive protocol by which returned martian sample materials could be assessed for the presence of any biological hazard(s) while safeguarding the purity of the samples from possible terrestrial contamination.
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame rate video, to become a technology that may change photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full-public adoption of HDR content is, however, hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the wide spread of HDR usage, the backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images are necessary. Although many tone-mapping algorithms are developed for generating viewable LDR content from HDR, there is no consensus of which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and image content itself. Based on the results of subjective tests, it proposes to extend JPEG file format, the most popular image format, in a backward compatible manner to deal with HDR images also. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art HDR image compression.
JPEG XS, a new standard for visually lossless low-latency lightweight image compression
NASA Astrophysics Data System (ADS)
Descampe, Antonin; Keinert, Joachim; Richter, Thomas; Fößel, Siegfried; Rouvroy, Gaël.
2017-09-01
JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to Xilinx Artix 7 or 25% of an FPGA similar to Altera Cyclone 5. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.
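The coding scheme named at the end of the abstract (entropy coding of the highest magnitude level of a group of coefficients plus raw inclusion of the truncated coefficient bits) can be illustrated with a much-simplified toy coder. The sketch below is not the JPEG XS syntax; the group size, unary coding of the magnitude level, and sign handling are arbitrary simplifications chosen only to show the principle.

```python
# Toy group coder: per group, code the number of significant bitplanes (unary),
# then emit the coefficient magnitudes truncated to that many raw bits plus a sign bit.
def encode_groups(coeffs, group_size=4):
    bitstream = []
    for i in range(0, len(coeffs), group_size):
        group = coeffs[i:i + group_size]
        gcli = max(abs(c) for c in group).bit_length()   # greatest significant bitplane of the group
        bitstream.append("1" * gcli + "0")               # unary code of the magnitude level
        if gcli == 0:
            continue                                     # an all-zero group costs a single bit
        for c in group:
            bitstream.append(format(abs(c), f"0{gcli}b"))  # raw, truncated coefficient data
            bitstream.append("1" if c < 0 else "0")        # sign bit
    return "".join(bitstream)

if __name__ == "__main__":
    quantized = [0, 1, -2, 0, 13, -9, 4, 7, 0, 0, 0, 0]
    bits = encode_groups(quantized)
    # Compare against a hypothetical 16-bit raw representation per coefficient.
    print(f"{len(quantized) * 16} bits raw vs {len(bits)} bits coded")
```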
Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.
Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya
2013-03-01
We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
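The per-pixel blending step at the heart of this approach can be sketched as below: a small bank of candidate filters is combined with spatially varying weights. The gradient-based softmax weights are a toy stand-in for the paper's trained classifier and EM-estimated model parameters, and the filter choices and names are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

def hsf_style_blend(img):
    """Illustrative per-pixel blend of a small filter bank, in the spirit of
    a hypothesis-selection filter.  The weights here come from a hand-made
    gradient feature, not from the paper's trained probabilistic model."""
    bank = np.stack([
        gaussian_filter(img, sigma=1.5),   # good in smooth regions
        median_filter(img, size=3),        # good near impulsive artifacts
        uniform_filter(img, size=3),       # generic smoother
    ])
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    # Toy "hypothesis" scores: favour stronger smoothing where gradients are small.
    scores = np.stack([-grad, grad, np.zeros_like(grad)])
    w = np.exp(scores - scores.max(axis=0))
    w /= w.sum(axis=0)
    return (w * bank).sum(axis=0)          # per-pixel weighted (MMSE-style) estimate

if __name__ == "__main__":
    noisy = np.random.rand(64, 64)
    print(hsf_style_blend(noisy).shape)
```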
Development of bull trout sampling protocols
R. F. Thurow; J. T. Peterson; J. W. Guzevich
2001-01-01
This report describes results of research conducted in Washington in 2000 through Interagency Agreement #134100H002 between the U.S. Fish and Wildlife Service (USFWS) and the U.S. Forest Service Rocky Mountain Research Station (RMRS). The purpose of this agreement is to develop a bull trout (Salvelinus confluentus) sampling protocol by integrating...
Design of a system based on DSP and FPGA for video recording and replaying
NASA Astrophysics Data System (ADS)
Kang, Yan; Wang, Heng
2013-08-01
This paper presents a video recording and replaying system built around a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system encodes, records, decodes and replays the Video Graphics Array (VGA) signals displayed on a monitor during the navigation of airplanes and ships. In this architecture, the DSP is the main processor and carries out the bulk of the complex digital signal processing, while the FPGA acts as a coprocessor that preprocesses the video signals and implements the logic control of the system. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) provides a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer. This transfer mode avoids a data-transfer bottleneck and simplifies the circuitry between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface to an Integrated Drive Electronics (IDE) hard disk, which offers high-speed data access without relying on a host computer. The main functions of the FPGA logic are described and screenshots of the behavioral simulation are provided. In the DSP software design, Enhanced Direct Memory Access (EDMA) channels transfer data between the FIFO and the SDRAM without CPU intervention, leaving the CPU free for computation. JPEG2000 is used to obtain high fidelity in video recording and replaying, and techniques for achieving high code performance are briefly presented. The data-processing capability of the system is satisfactory and the smoothness of the replayed video is acceptable. Owing to its design flexibility and reliable operation, the DSP- and FPGA-based video recording and replaying system holds considerable promise for post-event analysis, simulated training exercises, and similar applications.
Correlation Between Iron and alpha and pi Glutathione-S-Transferase Levels in Humans
2012-09-01
assays were performed as described in the Biotrin High Sensitivity Alpha GST EIA kit protocol. First, serum samples were diluted 1:10 with wash solution...immunosorbent assays were performed as described in the Biotrin Pi GST EIA kit protocol. First, plasma samples were diluted 1:5 with sample diluent...immunosorbent assays were performed as described in the AssayMax Human Transferrin ELISA kit protocol. First, serum samples were diluted 1:2000 with MIX
VLT Spectra "Resolve" a Stellar Disk at 25,000 Light-Years Distance
NASA Astrophysics Data System (ADS)
2001-04-01
Unique Observations of a Microlensing Event Summary Like our Sun, stars are large gaseous spheres. However, while we are able to perceive the Sun's disk, all other stars are so far away that they normally appear as points of light . Only specialized observing techniques, like interferometry [1], are able to "resolve" the images of nearby stars and to show them as extended balls of fire. But opportunities may sometimes arise that allow amazing observational feats in this field . Indeed, an international team of astronomers [2] has just "resolved" a single, normal star some 25,000 light years away , or about 1.6 billion times more distant than the Sun [3], by taking advantage of a multiple microlensing event . During such a rare event, the light from the remote star is amplified by the gravity of a faint object that passes in front of it, as seen from the Earth . In fact, this gravitational lens acts as a magnifying glass that focusses different parts of the star's image at different times. Using the FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope on Paranal during a microlensing event, the team was able to obtain detailed spectra of the different parts of the remote star. In doing so, they managed to probe its gaseous atmosphere at different depths. This is the first time that it has been possible to obtain detailed, spatially resolved spectra across the full face of a normal star other than the Sun [4]. PR Photo 16a/01 : The light-curve of Microlensing Event EROS-BLG-2000-5 . PR Photo 16b/01 : The sky area of EROS-BLG-2000-5. PR Photo 16c/01 : A VLT spectrum of EROS-BLG-2000-5. PR Photo 16d/01 : The observed change of the H-alpha line strength of EROS-BLG-2000-5. A many-faceted success story The following story is about a most unusual astronomical observation and also shows how modern astrophysics works . It combines the study of stellar atmospheres with the intricate optical effects produced by the gravitational field of a binary star in the Milky Way. The successful outcome was dependent on diligent observers in various regions of the world and ultimately on the critical timing of spectral observations with the ESO Very Large Telescope (VLT) at the Paranal Observatory in Chile. Thanks to the effective collaboration among the scientists and a certain measure of good luck, unique data were obtained that are now providing fundamental new insights into stellar astrophysics. The face of a star Distant stars appear as small points of light, even to the largest telescopes on Earth. They are simply too far away to be "resolved" by normal telescopes, and no information can therefore be obtained about what the stellar surfaces look like. This is a fundamental obstacle to the detailed study of stars other than the Sun. We know, however, that the disk of a star does not present itself as a uniform surface. As is the case of the Sun that exhibits variable structures like sunspots (in particular at the time of the present solar maximum), other stars may also have "star-spots" . Another general feature of solar and stellar disks is that they appear fainter towards the periphery. This phenomenon is known as "limb darkening" and is actually a matter of the viewing angle. When we look towards the middle of the solar disk, we see into rather deep and hot layers of its atmosphere. Contrarily, when we view the very edge of the solar disk, we only see the upper, cooler and dimmer parts. Thus, by looking at different areas of its disk, we are able to probe different depths of the solar atmosphere. 
This in turn permits to determine the structure (temperature, pressure, chemical composition, etc.) of the upper layers of the Sun. For more distant stars, however, their disks appear much too small for this kind of detailed observation. Despite much instrumental progress, therefore, fundamental observational information about stars is still lacking, especially for stars different from the Sun. This is one of the main reasons why the astronomers are thrilled by a new series of spectra from the FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope at Paranal. They "resolve" for the first time the surface of a normal star some 25,000 light-years away. This amazing observational feat has been possible with some help from a natural "magnifying glass". The road leading to this remarkable result is an instructive and interesting one. Gravitational microlensing ESO PR Photo 16a/01 ESO PR Photo 16a/01 [Preview - JPEG: 361 x 400 pix - 34k] [Normal - JPEG: 721 x 800 pix - 83k] [Hi-Res - JPEG: 2705 x 3000 pix - 536k] Caption : Schematic representation of the lightcurve of the EROS-BLG-2000-5 microlensing event. It represents the changing brightness of a background star, as its light is being amplified by a binary gravitational lens that passes the line-of-sight from the Earth to the star. The ordinate indicates the factor by which the intensity increases during the various phases of the lensing event, as compared to the normal brightness of the star. The moment of the second "caustic crossing" is indicated, during which the image of the star is substantially brighter. Spectral observations were made with the VLT at the times indicated by arrows. For details, see the text. The light from a distant star is affected by the gravity of the objects it passes on its way to us. This effect was predicted by Albert Einstein early last century and observationally confirmed in 1919 when a solar eclipse allowed the study of stars close to the line of sight of the Sun. Accurate positional measurements showed that the light from those remote stars was bent by the Sun's gravitational field. However, the light may not only be deflected, it can also be amplified . In that case, the massive object works like a giant "magnifying lens" that concentrates the light from the distant source. Effects of gravitational optics in space were first observed in 1979. When produced by extended, very heavy clusters of galaxies, they may take the form of large, spectacular arcs and well-separated multiple images, cf. ESO PR Photos 46d/98 and 46f/98 . Less massive lenses, however, produce images with extensions that are too small to be distinguished directly. Such "microlensing" effects occur when a compact body (usually a Milky Way star moving in its galactic orbit) passes almost directly between the observer and a luminous background object (usually also a star). One then sees that the brightness of that object rises and falls as the lens passes across the line-of-sight. The observed light intensity is described by a so-called "light curve", cf. PR Photo 16a/01 . Normally, the lensing object is a faint low-mass star, one of the most common objects in the Milky Way. Microlensing events ESO PR Photo 16b/01 ESO PR Photo 16b/01 [Preview - JPEG: 346 x 400 pix - 44k] [Normal - JPEG: 692 x 800 pix - 112k] [Hi-Res - JPEG: 2596 x 3000 pix - 584k] Caption : A photo of the sky area around the microlensing event EROS-BLG-2000-5 (indicated, near the centre) that is described in this Press Release. 
Technical information about this photo is available below. In most cases, these low-mass stars are too faint to be directly observed. This is especially so in crowded sky fields in which there are many much brighter stars - including the luminous giant stars that are monitored for microlensing effects. However, the gravity of a low-mass star is strong enough to produce a lensing effect if the geometrical alignment is sufficiently precise. This happens rarely, but by looking at a large number of background stars, it has been possible to detect a fair number of microlensing events during the past few years. International collaborations like Experience pour la Recherche d'Objets Sombres (EROS) , Optical Gravitational Lensing Experiment (OGLE) and Microlensing Observations in Astrophysics (MOA) scan the skies continuously for such microlensing events which typically last from a few weeks to some months. When a star is found to brighten in a way that looks like what is expected from microlensing, they send electronic alerts to other teams like Probing Lensing Anomalies NETwork (PLANET) and Microlensing Planet Search Project (MPS) who then intensively monitor the possible lensing events. One of the main goals of these research programmes is to search for "dark matter" . Indeed, microlensing effects are excellent tools for learning more about this mysterious component of the Universe, as they provide information about lensing objects that otherwise are too faint to be observed. However, microlensing events may also provide very useful information about the background object (the "source"), the light of which is amplified and magnified . When more light is available, more detailed (e.g., spectroscopic) observations can be made. In particular, on rare occasions, it can also help to "resolve" the surface of a distant star. Using distorted lenses If the lensing object is multiple , e.g., a binary star or a star with a planet, the gravitational lens will give rise to interesting phenomena. Whenever the gravitational fields from the two (or more) objects "co-operate", the lensing effect may become distorted and/or unusually strong. Depending on the exact geometry of the lens, i.e. the momentary, relative positions in the sky of the lensing objects and the background object, it is possible that the background source may at some moment be very sharply magnified . In fact, this effect may be so "sharp", that the light from a certain area of the extremely small, apparent disk of a distant star is enhanced much more than that from other areas of the disk. If so, the stellar light registered by the terrestrial telescope will come mainly from that particular area . From optical terminology, such an event is referred to as a "caustic crossing" . However, the exact circumstances are difficult and complex to calculate. The light curve during a lensing event depends on the relative motions of the involved objects or, in other words, on exactly how the distorting and magnifying glass (the lensing object), as seen from the Earth, moves across the background object. In this context, binary lenses are particularly interesting. Not only can they very efficiently enhance the brightness of the source, but there will also be two "caustic crossings" and two associated light maxima. This implies that once the first crossing/maximum has passed, it may be possible to predict when and how the source will be magnified a second time. 
In that case, the astronomers will have time to prepare for detailed observations at the moment of the second caustic crossing. In particular, this may then include spectroscopic observations that can reveal the structure of the background star. The May 2000 microlensing event ESO PR Photo 16c/01 ESO PR Photo 16c/01 [Preview - JPEG: 400 x 278 pix - 31k] [Normal - JPEG: 800 x 555 pix - 78k] [Hi-Res - JPEG: 3000 x 2081 pix - 528k] Caption : PR Photo 16c/01 shows a spectrum of EROS-BLG-2000-5 , obtained with the FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope at Paranal on June 25, 2000, before the second caustic crossing described in the text. A small part of the spectrum around the H-alpha line at wavelength 656.2 nm is enlarged in the insert. On 5 May 2000, the EROS group announced an apparently normal microlensing event in a direction a few degrees from the Galactic Centre ( PR Photo 16b/00 ). The brightness of the background star was rising and the PLANET team began to monitor it during its regular operations. About one month later, on 8 June 2000, the MPS team noticed that the event, now designated EROS-BLG-2000-5 , was undergoing an unexpected, sudden and significant brightening. PLANET observers immediately turned their full attention to it, monitoring it continuously from five different observing sites located at suitable longitudes around the Earth. The light curve changed dramatically while the source went through a first caustic crossing ( PR Photo 16a/00 ). On 10 June 2000, the PLANET team alerted the community that this particular event was indeed due to a multiple lens , thus indicating that another light maximum would follow at the second caustic crossing. While continuing to monitor the light curve in order to predict the timing of this second event, the PLANET team contacted ESO with an urgent request to carry out a novel set of observations. The astronomers called attention to the unique possibility of performing detailed spectral observations during the second caustic crossing that could provide information about the chemistry of the stellar atmosphere of the magnified star . ESO concurred and within a day, their observing proposal was granted "Director's Discretionary Time" with the FORS1 spectrograph on the 8.2-m VLT ANTU telescope at the appropriate moment. Some spectra were taken of the background star while it was still magnified, cf. PR Photo 16c/00 , but had not yet made the second caustic crossing. The star was now identified as a cool giant star , located some 25,000 light-years away [3] in the general direction of the Galactic Centre (in the "Galactic Bulge"). Then the team waited. Their predictions indicated that the second caustic crossing might last unusually long, several days rather than a more normal 10-20 hours. The observing plan was therefore changed to ensure that spectra could be taken on four consecutive nights ( PR Photo 16a/00 ) during this caustic crossing. The light curve would then first brighten, and then drop dramatically. During the four nights, the lens would successively magnify different areas of the disk of the cool giant star while "the gravitational magnifying glass slowly moved across it" , as seen from the VLT. First it would mostly be the light from the cool limb of the star that would be amplified, then the hotter middle of the disk, and finally the other, also cooler limb. 
The VLT observations ESO PR Photo 16d/01 ESO PR Photo 16d/01 [Preview - JPEG: 400 x 270 pix - 28k] [Normal - JPEG: 800 x 540 pix - 63k] [Hi-Res - JPEG: 3000 x 2025 pix - 416k] Caption : The red points are the nightly averages of the strength of the H-alpha absorption line as measured before (point to the left) and during the second caustic crossing. The fully drawn lines represent the expected change, according to two different simulations of the event. The models agree with the data in their general form, but differ on the last night when the trailing limb was crossing the caustic. The two simulations shown differ in their assumptions about the geometry of the event; further data and modeling are now refining these assumptions so that a more quantitative comparison can be made. On each of the four nights beginning on July 4, 5, 6 and 7, 2000, ESO astronomers at Paranal performed two hours of service observations according to the detailed planning of the microlensing team. Spectra were successfully taken of the giant star with the multi-mode FORS1 instrument at the 8.2-m VLT ANTU telescope at the moment of the second caustic crossing. The magnitude was about I=13 at the brightness peak, dropping about 2 magnitudes towards the end of the period ( PR Photo 16a/01 ). In a first scientific assessment of these unique spectra, the team concentrated on an absorption line in the red spectral region (the "H-alpha" line) that is produced by hydrogen in the stellar atmosphere. They found a clear change in the strength of this line of the source star during the four nights ( PR Photo 16d/01 ). No such variations were seen in the spectra of neighbouring stars that were observed simultaneously, providing a secure check that the observed changes are real. The astronomers then went on to interpret this change. For this they performed various simulations by means of a computer model of the atmosphere of the cool giant star, applying the expected effects of the lensing and then comparing with the observed spectra. The expected changes in the strength of the H-alpha absorption line during the crossing from two simple simulations are plotted as lines over the observed data in PR Photo 16d/01 . The observed changes of the H-alpha line during the caustic crossing agree well with the model calculations . During this event, the microlens magnifies successive areas of the stellar disk particularly strongly. To begin with, the light from the relatively cool, leading limb of the star dominates the registered spectrum - and here the absorption line strength drops slightly, exactly as expected. It then becomes stronger as the hotter areas near the middle of the disk "come into focus" and then again decreases when the cooler trailing limb is strongly magnified. This is the first time that this effect has ever been measured for all phases of a caustic crossing. More to come More quantitative predictions of the modeling will now be carried out, refining the geometry of the caustic crossing and involving many more spectral lines. This will allow a sophisticated tomographic analysis of the atmosphere of this star. For this, the detailed brightness measurements that were collected from over two thousand observations of EROS-BLG-2000-5 by PLANET observers in Tasmania, Western Australia, South Africa, Chile and the United States will be of great help in determining better the exact geometry of the event. 
In due time, the VLT spectra data will then make it possible to test directly the best models of stellar atmospheres now devised by astronomers. Observations like these are very important because they allow detailed investigation of a stellar atmosphere other than that of the Sun. It is remarkable that this is based on the "resolution" of the disk of a star over 25000 light-years away, i.e. about 1.6 billion times more distant than our own Sun [4]. More information Further detailed information is available at the PLANET website and in a research paper ( "H-alpha Equivalent Width Variations across the Face of a Microlensed K Giant in the Galactic Bulge" ) that appeared in the April 1, 2001 issue of the "Astrophysical Journal" (available on the web at ApJL 550, L173 or astro-ph0011380). Notes [1] Note the recent ESO Press Release 06/01 about the VLT Interferometer. Observations of binary stars that undergo eclipses from time to time also allow indirect studies of the surfaces of the two components; such objects, however, influence each other and cannot be characterized as "normal" stars. [2] The team (the PLANET collaboration) consists of Michael Albrow , Kailash C. Sahu (Space Telescope Science Institute, Baltimore, MD, USA) Jin H. An (Dept. of Astronomy, Ohio State University, Columbus, OH, USA), Jean-Philippe Beaulieu (Institut d'Astrophysique de Paris, France), John A. R. Caldwell , John W. Menzies , Pierre Vermaak (South African Astronomical Observatory, Cape Town, South Africa), Martin Dominik , Penny D. Sackett (Kapteyn Astronomical Institute, Groningen, The Netherlands) , John Greenhill , Kym Hill , Stephen Kane , Robert Watson (University of Tasmania, Hobart, Tasmania, Australia), Ralph Martin , Andrew Williams (Perth Observatory, Australia), Karen Pollard (Physics Dept., Gettysburg College, PA, USA) and Peter H. Hauschildt (Dept. of Physics and Astronomy & Center for Simulational Physics, University of Georgia, Athens, GA, USA). [3] The distance to the Sun is 149.6 million kilometres; 25,000 light-years = 240,000,000,000,000,000 kilometres. 1 billion = 1000 million. [4] The diameter of the cool giant star is approx. 15 million km (about ten times that of the Sun). At the indicated distance, 25,000 light-years, this corresponds to a very small angle, about 10 micro-arcsec. This is equal to the angle subtended by a human hair (diameter 50 microns = 0.05 mm) at a distance of 1000 km. Technical information about the photos PR Photo 16b/01 shows a 0.25-sec acquisition exposure of EROS-BLG-2000-5 , obtained with VLT ANTU + FORS1 in order to set up the spectrograph slit for the subsequent spectral exposures. The filter was Bessell-I (wavelength about 900 nm) and the field measures about 2 x 2 arcmin 2. North is up and East is left. The FORS1 spectrum shown in PR Photo 16c/01 is a composite of 300-sec exposures taken with with the 600B (spectral interval 390 - 580 nm), 600R (538 - 753 nm) and 600I (705 - 918 nm) gratings; the insert covers a 10 nm wide region near H-alpha.
A Draft Test Protocol for Detecting Possible Biohazards in Martian Samples Returned to Earth
NASA Technical Reports Server (NTRS)
Rummel, John D.; Race, Margaret S.; DeVincenzi, Donald L.; Schad, P. Jackson; Stabekis, Pericles D.; Viso, Michel; Acevedo, Sara E.
2002-01-01
This document presents the first complete draft of a protocol for detecting possible biohazards in Mars samples returned to Earth; it is the final product of the Mars Sample Handling Protocol Workshop Series, convened in 2000-2001 by NASA's Planetary Protection Officer. The goal of the five-workshop Series was to develop a comprehensive protocol by which returned martian sample materials could be assessed for the presence of any biological hazard(s) while safeguarding the purity of the samples from possible terrestrial contamination. The reference numbers for the proceedings from the five individual Workshops.
Web surveillance system using platform-based design
NASA Astrophysics Data System (ADS)
Lin, Shin-Yo; Tsai, Tsung-Han
2004-04-01
A revolutionary SOPC platform-based design methodology for multimedia communications is developed. A softcore processor is embedded in the FPGA to perform image compression, and an Ethernet daughter board is plugged into the SOPC development platform. On this basis, a web surveillance platform is presented. The web surveillance system consists of three parts: image capture, a web server, and JPEG compression. In this architecture, the user can control the surveillance system remotely. With an IP address configured on the Ethernet daughter board, the user can access the surveillance system through a browser. When the user accesses the system, the CMOS sensor captures the remote image and feeds it to the embedded processor, which immediately performs JPEG compression. The user then receives the compressed data via Ethernet. The complete system is implemented on an APEX20K200E484-2X device.
Novel approach to multispectral image compression on the Internet
NASA Astrophysics Data System (ADS)
Zhu, Yanqiu; Jin, Jesse S.
2000-10-01
Still-image coding techniques such as JPEG have traditionally been applied to individual intra-plane images, and coding fidelity is the usual measure of performance for such intra-plane coding methods. In many imaging applications, however, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes to further compress the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system, and the scheme achieves a high degree of compactness in data representation and compression.
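As a concrete illustration of decorrelating spectral planes before per-plane coding, the sketch below applies a fixed RGB-to-YCbCr transform (ITU-R BT.601) so that most of the signal energy concentrates in one plane. This fixed transform and the synthetic data are stand-ins for the paper's inter-plane transformation and its visual-system weighting.

```python
import numpy as np

# Fixed ITU-R BT.601 RGB -> YCbCr matrix, used here as a stand-in for the
# paper's adaptive inter-plane transformation.
RGB2YCC = np.array([[ 0.299,     0.587,     0.114   ],
                    [-0.168736, -0.331264,  0.5     ],
                    [ 0.5,      -0.418688, -0.081312]])

def decorrelate_planes(rgb):
    """Transform the spectral planes so most energy lands in the first
    (luma-like) plane; the remaining planes then compress better and can be
    quantized more coarsely, exploiting psychovisual redundancy."""
    flat = rgb.reshape(-1, 3)
    return (flat @ RGB2YCC.T).reshape(rgb.shape)

if __name__ == "__main__":
    # Synthetic, strongly correlated colour planes (as natural images tend to be).
    base = np.random.rand(32, 32)
    rgb = np.stack([base + 0.05 * np.random.rand(32, 32) for _ in range(3)], axis=-1)
    ycc = decorrelate_planes(rgb)
    print("variance per plane, RGB:", rgb.reshape(-1, 3).var(axis=0).round(4))
    print("variance per plane, YCC:", ycc.reshape(-1, 3).var(axis=0).round(4))
```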
First Digit Law and Its Application to Digital Forensics
NASA Astrophysics Data System (ADS)
Shi, Yun Q.
Digital data forensics, which gathers evidence of data composition, origin, and history, is crucial in our digital world. Although this new research field is still in its infancy, it has started to attract increasing attention from the multimedia-security research community. This lecture addresses the first digit law and its applications to digital forensics. First, the Benford and generalized Benford laws, referred to collectively as the first digit law, are introduced. Then, the application of the first digit law to detecting the JPEG compression history of a given BMP image and to detecting double JPEG compression is presented. Finally, applying the first digit law to detecting double MPEG video compression is discussed. The first digit law is expected to play an active role in other tasks of digital forensics. The lesson learned is that statistical models play an important role in digital forensics and that, for a specific forensic task, different models may provide different performance.
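A minimal sketch of the kind of first-digit test used in such detectors is given below: the empirical first-digit histogram of a set of magnitudes (in practice, the block-DCT coefficient magnitudes of the image under test) is compared against the classical Benford distribution. The synthetic exponential data and the plain (non-generalized) Benford model are assumptions for illustration; forensic detectors typically fit the generalized Benford law and train a classifier on the deviation.

```python
import numpy as np

def benford_expected():
    """P(d) = log10(1 + 1/d) for d = 1..9 (the classical first digit law)."""
    d = np.arange(1, 10)
    return np.log10(1.0 + 1.0 / d)

def first_digit_histogram(values):
    """Empirical first-digit frequencies of the nonzero magnitudes in `values`."""
    mags = np.abs(np.asarray(values, dtype=float))
    mags = mags[mags > 0]
    # Scale each value into [1, 10) and take the integer part as its first digit.
    first = (mags / 10.0 ** np.floor(np.log10(mags))).astype(int)
    hist = np.bincount(first, minlength=10)[1:10]
    return hist / hist.sum()

if __name__ == "__main__":
    # Stand-in data: exponentially distributed magnitudes, which roughly follow
    # Benford's law.  In a forensic test these would be DCT coefficient magnitudes.
    data = np.random.exponential(scale=50.0, size=100_000)
    emp, exp = first_digit_histogram(data), benford_expected()
    for d in range(9):
        print(f"digit {d + 1}: empirical {emp[d]:.3f}  benford {exp[d]:.3f}")
```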
Connors, Bret A; Evan, Andrew P; Blomgren, Philip M; Handa, Rajash K; Willis, Lynn R; Gao, Sujuan
2009-01-01
To determine if the starting voltage in a step-wise ramping protocol for extracorporeal shock wave lithotripsy (SWL) alters the size of the renal lesion caused by the SWs. To address this question, one kidney from 19 juvenile pigs (aged 7-8 weeks) was treated in an unmodified Dornier HM-3 lithotripter (Dornier Medical Systems, Kennesaw, GA, USA) with either 2000 SWs at 24 kV (standard clinical treatment, 120 SWs/min), 100 SWs at 18 kV followed by 2000 SWs at 24 kV or 100 SWs at 24 kV followed by 2000 SWs at 24 kV. The latter protocols included a 3-4 min interval, between the 100 SWs and the 2000 SWs, used to check the targeting of the focal zone. The kidneys were removed at the end of the experiment so that lesion size could be determined by sectioning the entire kidney and quantifying the amount of haemorrhage in each slice. The average parenchymal lesion for each pig was then determined and a group mean was calculated. Kidneys that received the standard clinical treatment had a mean (sem) lesion size of 3.93 (1.29)% functional renal volume (FRV). The mean lesion size for the 18 kV ramping group was 0.09 (0.01)% FRV, while lesion size for the 24 kV ramping group was 0.51 (0.14)% FRV. The lesion size for both of these groups was significantly smaller than the lesion size in the standard clinical treatment group. The data suggest that initial voltage in a voltage-ramping protocol does not correlate with renal damage. While voltage ramping does reduce injury when compared with SWL with no voltage ramping, starting at low or high voltage produces lesions of the same approximate size. Our findings also suggest that the interval between the initial shocks and the clinical dose of SWs, in our one-step ramping protocol, is important for protecting the kidney against injury.
Iris Recognition: The Consequences of Image Compression
NASA Astrophysics Data System (ADS)
Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig
2010-12-01
Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.
Wu, Xiaolin; Zhang, Xiangjun; Wang, Xiaohan
2009-03-01
Recently, many researchers have started to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform downsampling in image space that is nonetheless made adaptive by spatially varying, directional low-pass prefiltering. The resulting downsampled, prefiltered image remains on a conventional square sample grid and thus can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least-squares restoration process, using a 2-D piecewise autoregressive model and knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality under a tight bit budget.
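The encoder/decoder split described above can be sketched in a few lines. Here an isotropic Gaussian prefilter and bicubic interpolation stand in for the paper's spatially varying directional prefilters and its constrained least-squares, autoregressive-model upconversion, so this is only a structural illustration of the CADU pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def encode_side_downsample(img, factor=2, sigma=1.0):
    """Prefilter (here isotropic Gaussian; the paper uses spatially varying,
    directional low-pass filters) and decimate on a uniform grid.  The result
    is a normal square-sampled image, so any standard codec can compress it."""
    lowpass = gaussian_filter(img, sigma=sigma)
    return lowpass[::factor, ::factor]

def decode_side_upconvert(small, factor=2):
    """Decoder-side upconversion.  Bicubic interpolation stands in for the
    paper's constrained least-squares restoration with a 2-D autoregressive
    model and knowledge of the prefilter."""
    return zoom(small, factor, order=3)

if __name__ == "__main__":
    img = np.random.rand(128, 128)
    small = encode_side_downsample(img)
    rec = decode_side_upconvert(small)
    print(small.shape, rec.shape)
```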
On scalable lossless video coding based on sub-pixel accurate MCTF
NASA Astrophysics Data System (ADS)
Yea, Sehoon; Pearlman, William A.
2006-01-01
We propose two approaches to scalable lossless coding of motion video. Both achieve an SNR-scalable bitstream up to lossless reconstruction based upon subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy in which a lossy reconstruction layer is augmented by a subsequent residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of this approach include on-the-fly determination of the bit-budget distribution between the lossy and residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that, thanks to the lifting implementation, the invertibility of MCTF can be maintained at arbitrary sub-pixel accuracy even in the presence of an extra truncation step for lossless reconstruction. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) in lossless mode, with the added benefit of bitstream embeddedness.
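The first (two-stage) approach can be illustrated with a minimal sketch in which plain uniform quantization stands in for the lossy MCTF/wavelet layer; the residual layer then carries exactly the information needed to reach lossless reconstruction. The step size and function names are assumptions for illustration.

```python
import numpy as np

def two_stage_encode(frame, step=8):
    """Stage 1: a lossy layer (plain uniform quantization here, standing in
    for an MCTF/wavelet lossy coder).  Stage 2: the integer residual needed
    to reach exactly lossless reconstruction."""
    frame = np.asarray(frame, dtype=np.int32)
    lossy = np.round(frame / step).astype(np.int32) * step
    residual = frame - lossy                       # small-amplitude, cheap to code
    return lossy, residual

def two_stage_decode(lossy, residual=None):
    """Without the residual layer the decoder gets the lossy layer; with it,
    reconstruction is bit-exact."""
    return lossy if residual is None else lossy + residual

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(16, 16))
    lossy, res = two_stage_encode(frame)
    assert np.array_equal(two_stage_decode(lossy, res), frame)
    print("lossless reconstruction verified; residual range:", res.min(), res.max())
```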
Digital pathology: DICOM-conform draft, testbed, and first results.
Zwönitzer, Ralf; Kalinski, Thomas; Hofmann, Harald; Roessner, Albert; Bernarding, Johannes
2007-09-01
Hospital information systems are now state of the art, and Digital Pathology, also labelled Virtual Microscopy, has therefore gained increased attention. Triggered by radiology, standardized information models and workflows have been defined worldwide based on DICOM. However, DICOM-conform integration of Digital Pathology into existing clinical information systems poses new problems requiring specific solutions, given the huge amount of data as well as the special structure of the data to be managed, transferred, and stored. We implemented a testbed to realize and evaluate the workflow of digitized slides from acquisition to archiving. The experience led to the draft of a DICOM-conform information model that accounts for the extensions, definitions, and technical requirements necessary to integrate digital pathology in a hospital-wide DICOM environment. Slides were digitized, compressed, and could be viewed remotely. Real-time transfer of the huge amount of data was optimized using streaming techniques. Compared with a recent discussion in the DICOM Working Group for Digital Pathology (WG26), our experience led to a preference for JPEG2000/JPIP-based streaming of the whole slide image. The results showed that digital pathology is feasible, but strong efforts by users and vendors are still necessary to integrate Digital Pathology into existing information systems.
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
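The kind of filtering the LWT executes block by block is the standard lifting wavelet of JPEG2000. The sketch below shows one level of the reversible LeGall 5/3 transform in 1-D with periodic boundary handling (JPEG2000 itself uses symmetric extension, and the LWT's block scheduling and memory management are not modelled here).

```python
import numpy as np

def legall53_forward(x):
    """One level of the reversible LeGall 5/3 wavelet via lifting.
    Returns (lowpass, highpass); integer in, integer out."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: high-pass coefficients (periodic extension for simplicity).
    odd -= (even + np.roll(even, -1)) >> 1
    # Update step: low-pass coefficients.
    even += (odd + np.roll(odd, 1) + 2) >> 2
    return even, odd

def legall53_inverse(low, high):
    """Exact inverse of the lifting steps above."""
    even = low - ((high + np.roll(high, 1) + 2) >> 2)
    odd = high + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

if __name__ == "__main__":
    sig = np.random.randint(0, 256, size=64)        # even-length signal
    lo, hi = legall53_forward(sig)
    assert np.array_equal(legall53_inverse(lo, hi), sig)   # perfect reconstruction
    print("5/3 lifting round-trip OK")
```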
BCI2000: a general-purpose brain-computer interface (BCI) system.
Schalk, Gerwin; McFarland, Dennis J; Hinterberger, Thilo; Birbaumer, Niels; Wolpaw, Jonathan R
2004-06-01
Many laboratories have begun to develop brain-computer interface (BCI) systems that provide communication and control capabilities to people with severe motor disabilities. Further progress and realization of practical applications depend on systematic evaluations and comparisons of different brain signals, recording methods, processing algorithms, output formats, and operating protocols. However, the typical BCI system is designed specifically for one particular BCI method and is, therefore, not suited to the systematic studies that are essential for continued progress. In response to this problem, we have developed a documented general-purpose BCI research and development platform called BCI2000. BCI2000 can incorporate, alone or in combination, any brain signals, signal processing methods, output devices, and operating protocols. This report is intended to describe to investigators, biomedical engineers, and computer scientists the concepts that the BCI2000 system is based upon and to give examples of successful BCI implementations using this system. To date, we have used BCI2000 to create BCI systems for a variety of brain signals, processing methods, and applications. The data show that these systems function well in online operation and that BCI2000 satisfies the stringent real-time requirements of BCI systems. By substantially reducing labor and cost, BCI2000 facilitates the implementation of different BCI systems and other psychophysiological experiments. It is available with full documentation and free of charge for research or educational purposes and is currently being used in a variety of studies by many research groups.
Formally Generating Adaptive Security Protocols
2013-03-01
User Interfaces for Theorem Provers, 2012. [9] Xiaoming Liu, Christoph Kreitz, Robbert van Renesse, Jason J. Hickey, Mark Hayden, Kenneth Birman, and...Constable, Mark Hayden, Jason Hickey, Christoph Kreitz, Robbert van Renesse, Ohad Rodeh, and Werner Vogels. The Horus and Ensemble projects: Accomplishments and limitations. In DARPA Information Survivability Conference and Exposition (DISCEX 2000), pages 149–161, Hilton Head, SC, 2000. IEEE
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports in the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computational demands, while the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated on the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is approximately the time needed to compress the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression; on the same dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
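The four-step matching procedure described in the abstract can be sketched as follows, with zlib compression of raw pixel buffers standing in for JPEG and with an illustrative form of the composite compression ratio (the paper's exact CCR formula is not reproduced here); the synthetic images and function names are assumptions.

```python
import zlib
import numpy as np

def compressed_size(img):
    """Stand-in for JPEG: compressed size of the raw pixel buffer."""
    return len(zlib.compress(np.ascontiguousarray(img, dtype=np.uint8).tobytes()))

def composite_compression_ratio(probe, gallery):
    """CCR-style score from three compressed sizes: probe, gallery, and the
    two images stacked into one 'mixed' image.  A probe resembling the
    gallery image compresses better jointly, which raises the score.
    (Illustrative ratio; the paper's exact CCR formula may differ.)"""
    sp, sg = compressed_size(probe), compressed_size(gallery)
    sm = compressed_size(np.vstack([probe, gallery]))
    return (sp + sg) / sm

def match(probe, gallery_list):
    """Return the index of the gallery image with the largest CCR."""
    scores = [composite_compression_ratio(probe, g) for g in gallery_list]
    return int(np.argmax(scores)), scores

if __name__ == "__main__":
    # Synthetic structured "faces": smooth ramps with different slopes.
    grid = np.add.outer(np.arange(64), np.arange(64))
    gallery = [((grid * k) % 256).astype(np.uint8) for k in (1, 2, 3)]
    probe = gallery[1].copy()
    probe[::16, ::16] ^= 1                      # mild perturbation of subject 1
    idx, scores = match(probe, gallery)
    print("best match:", idx, "scores:", np.round(scores, 3))
```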
Pine Island Glacier, Antarctica, MISR Multi-angle Composite
Atmospheric Science Data Center
2013-12-17
... A large iceberg has finally separated from the calving front ... next due to stereo parallax. This parallax is used in MISR processing to retrieve cloud heights over snow and ice. Additionally, a plume ...
Urban forestry and carbon: what the reporting protocol means to you
E.G. McPherson
2008-01-01
Urban forests have a role to play in reducing levels of carbon dioxide and other greenhouse gases (GHG) in the atmosphere (Abdollahi et al. 2000; Pataki et al. 2006). However, very few tree planting projects have been undertaken because of the uncertainty regarding their performance and permanence. The Urban Forest Project Reporting Protocol was developed to reduce...
ERIC Educational Resources Information Center
Biggers, Mandy Sue
2013-01-01
Using a framework for variations of classroom inquiry (National Research Council [NRC], 2000, p. 29), this study explored 40 inservice elementary teachers' planning, modification, and enactment of kit-based science curriculum materials. As part of the study, a new observation protocol was modified from an existing protocol (Practices of…
Zakaria, Golam Abu; Schuette, Wilhelm
2007-01-01
For the determination of the absorbed dose to water for high-energy photon and electron beams the IAEA code of practice TRS-398 (2000) is applied internationally. In Germany, the German dosimetry protocol DIN 6800-2 (1997) is used. Recently, the DIN standard has been revised and published as Draft National Standard DIN 6800-2 (2006). It has adopted widely the methodology and dosimetric data of the code of practice. This paper compares these three dosimetry protocols systematically and identifies similarities as well as differences. The investigation was done with 6 and 18 MV photon as well as 5 to 21 MeV electron beams. While only cylindrical chambers were used for photon beams, measurements of electron beams were performed using cylindrical as well as plane-parallel chambers. The discrepancies in the determination of absorbed dose to water between the three protocols were 0.4% for photon beams and 1.5% for electron beams. Comparative measurements showed a deviation of less than 0.5% between our measurements following protocol DIN 6800-2 (2006) and TLD inter-comparison procedure in an external audit. PMID:21217912
Landbird Monitoring Protocol for National Parks in the North Coast and Cascades Network
Siegel, Rodney B.; Wilkerson, Robert L.; Jenkins, Kurt J.; Kuntz, Robert C.; Boetsch, John R.; Schaberl, James P.; Happe, Patricia J.
2007-01-01
This protocol narrative outlines the rationale, sampling design and methods for monitoring landbirds in the North Coast and Cascades Network (NCCN) during the breeding season. The NCCN, one of 32 networks of parks in the National Park System, comprises seven national park units in the Pacific Northwest, including three large, mountainous, natural area parks (Mount Rainier [MORA] and Olympic [OLYM] National Parks, North Cascades National Park Service Complex [NOCA]), and four small historic cultural parks (Ebey's Landing National Historical Reserve [EBLA], Lewis and Clark National Historical Park [LEWI], Fort Vancouver National Historical Park [FOVA], and San Juan Island National Historical Park [SAJH]). The protocol reflects decisions made by the NCCN avian monitoring group, which includes NPS representatives from each of the large parks in the Network as well as personnel from the U.S. Geological Survey Forest and Rangeland Ecosystem Science Center (USGS-FRESC) Olympic Field Station, and The Institute for Bird Populations, at meetings held between 2000 (Siegel and Kuntz, 2000) and 2005. The protocol narrative describes the monitoring program in relatively broad terms, and its structure and content adhere to the outline and recommendations developed by Oakley and others (2003) and adopted by NPS. Finer details of the methodology are addressed in a set of standard operating procedures (SOPs) that accompany the protocol narrative. We also provide appendixes containing additional supporting materials that do not clearly belong in either the protocol narrative or the standard operating procedures.
NASA Technical Reports Server (NTRS)
Plesea, Lucian; Wood, James F.
2012-01-01
This software is a simple, yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of the OGC WMS 1.1.1 as a fastCGI client and using Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are done on a back server. The server has explicit support for a colocated tiled WMS, including rapid response of black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back-end support allows great flexibility on the data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use, and depending on the storage format used, it has better performance than other available implementations. The WMS server 2.0 is a high-performance WMS implementation due to the fastCGI architecture. The use of GDAL data back end allows for great flexibility. The configuration is relatively simple, based on a single XML file. It provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
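For reference, a GetMap request in the OGC WMS 1.1.1 vocabulary that such a server answers looks like the sketch below; the host and layer names are placeholders, not taken from the software described above.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and layer name; the parameter set below is the
# standard OGC WMS 1.1.1 GetMap vocabulary the server implements.
base_url = "http://example.org/wms"
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "global_mosaic",          # placeholder layer name
    "STYLES": "",
    "SRS": "EPSG:4326",
    "BBOX": "-180,-90,180,90",          # minx,miny,maxx,maxy
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/jpeg",             # the server can also return image/png
    "TRANSPARENT": "FALSE",
}
print(base_url + "?" + urlencode(params))
```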
2004-09-01
the organization: “understanding” is a nebulous term to measure without applying some form of academic rigor, which in this case would be complex...to the RSDBU ISO 9001:2000 certification initiative was based on the work of University of Rhode Island Professor Quentin C. Turtle . His book...commensurate inspection protocol during design and production phases.84 The major difference in the Turtle approach compared to ISO 9000 is that Turtle
Metastatic medulloblastoma in adults: outcome of patients treated according to the HIT2000 protocol.
von Bueren, André O; Friedrich, Carsten; von Hoff, Katja; Kwiecien, Robert; Müller, Klaus; Pietsch, Torsten; Warmuth-Metz, Monika; Hau, Peter; Benesch, Martin; Kuehl, Joachim; Kortmann, Rolf D; Rutkowski, Stefan
2015-11-01
Due to the rarity of metastatic medulloblastoma in adults, knowledge about the efficacy and toxicity of intensified chemotherapy and radiotherapy is limited. Adults with disseminated medulloblastoma registered in the HIT2000 trial as observational patients and treated according to one of two different treatment regimens were analysed. The sandwich strategy MET-HIT2000AB4 consists of postoperative chemotherapy, hyperfractionated craniospinal radiotherapy, and maintenance chemotherapy, while the HIT'91 maintenance strategy consists of postoperative craniospinal radiotherapy and maintenance chemotherapy. Twenty-three patients (median age: 30.7 years), diagnosed from November 2001 to July 2009, and treated in 18 institutions in Germany and Austria, were eligible. The median follow-up of surviving patients was 3.99 years. The 4-year event-free survival (EFS) and overall survival (OS) ± standard error (SE) were 52%±12% and 91%±6%, respectively. Survival was similar in both treatment groups (HIT'91 maintenance strategy, n=9; MET-HIT2000AB4 sandwich strategy, n=14). Patients with large cell/anaplastic medulloblastoma relapsed and died (n=2; 4-year EFS/OS: 0%) and OS differed compared to patients with classic (n=11; 4-year EFS/OS: 71%/91%) and desmoplastic medulloblastoma (n=10; 4-year EFS/OS: 48%/100%), respectively (p=0.161 for EFS and p=0.033 for OS). Treatment-induced toxicities consisted mainly of neurotoxicity (50% of patients, ≥ grade II), followed by haematotoxicity and nephrotoxicity/ototoxicity. The professional outcome appeared to be negatively affected in the majority of evaluable patients (9/10). Treatment of adults with metastatic medulloblastoma according to the intensified paediatric HIT2000 protocol was feasible with acceptable toxicities. EFS rates achieved by both chemotherapeutic protocols were favourable and appear to be inferior to those obtained in older children/adolescents with metastatic disease. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Protocol for the treatment of severe acute pancreatitis with necrosis].
Barreda, Luis; Targarona, Javier; Rodriguez, César
2005-01-01
The Severe Acute Pancreatitis Unit of Edgardo Rebagliati Martins National Hospital was officially created in the year 2000. To date, we have cared for more than 195 patients with pancreatic necrosis, all of whom have been treated under the management protocol presented here. This has helped us to standardize treatment and to compare results with working groups around the world. The protocol derives from our own experience and from that of colleagues abroad with wide knowledge of this kind of pathology, with whom we maintain close ties.
Unconditional security of a three state quantum key distribution protocol.
Boileau, J-C; Tamaki, K; Batuwantudawe, J; Laflamme, R; Renes, J M
2005-02-04
Quantum key distribution (QKD) protocols are cryptographic techniques with security based only on the laws of quantum mechanics. Two prominent QKD schemes are the Bennett-Brassard 1984 and Bennett 1992 protocols that use four and two quantum states, respectively. In 2000, Phoenix et al. proposed a new family of three-state protocols that offers advantages over the previous schemes. Until now, an error rate threshold for security of the symmetric trine spherical code QKD protocol has been shown only for the trivial intercept-resend eavesdropping strategy. In this Letter, we prove the unconditional security of the trine spherical code QKD protocol, demonstrating its security up to a bit error rate of 9.81%. We also discuss how this proof applies to a version of the trine spherical code QKD protocol where the error rate is evaluated from the number of inconclusive events.
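For orientation, the symmetric trine states referred to above are commonly written as three single-qubit states whose Bloch vectors are 120 degrees apart; one standard parameterization (an assumption here, not quoted from the Letter) is:

```latex
% Symmetric trine states, k = 0, 1, 2 (one common parameterization):
\[
  |\psi_k\rangle = \cos\!\Big(\tfrac{2\pi k}{3}\Big)\,|0\rangle
                 + \sin\!\Big(\tfrac{2\pi k}{3}\Big)\,|1\rangle ,
  \qquad
  \big|\langle \psi_j | \psi_k \rangle\big| = \tfrac{1}{2}
  \quad (j \neq k).
\]
```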
News from Online: More Spectroscopy
NASA Astrophysics Data System (ADS)
Sweeney Judd, Carolyn
1999-09-01
Absorption (one of three tools) (http://mc2.cchem.berkeley.edu/Chem1A/solar/applets/absorption/index.html).
Evaporative cooling in a Bose-Einstein condensation ( http://www.colorado.edu/physics/2000/applets/bec.html). Let's start with the spectrum--the electromagnetic spectrum, of course. Go to the EMSpectrum Explorer at http://mc2.cchem.berkeley.edu/chemcnx/light_energy/EMSpectrum /emspectrum.html. Not only do you get information about wavelength, frequency, and energy, but you also get a handy converter that will calculate frequency, wavelength, and energy when one value is entered. And there is more. For example, clicking on red light of 680 nanometers reveals that mitochondria, the power plants of cells, are about the same size as this wavelength, which is also used for photosynthesis. Interesting food for thought! From the EMSpectrum Explorer, go to the Light and Energy page at http://mc2.cchem.berkeley.edu/chemcnx/light_energy/index.html for three Colors of Light Tools. The Color from Emission tool ( http://mc2.cchem.berkeley.edu/chemcnx/light_energy/applets/emission/index.html) illustrates additive color by mixing differing amounts of Red, Blue, and Green light. Then look at the Color from Absorption tool at http://mc2.cchem.berkeley.edu/chemcnx/light_energy/applets/absorption/index.html. The image from the applet shows the white beam and three filters. Take out the blue, green, and red components by altering the scroll bars or text boxes. The third tool, Removing Color with a Single Filter from Colored Light at http://mc2.cchem.berkeley.edu/chemcnx/light_energy/applets/single/index.html, uses a single filter to take out various colors. Excellent for explaining the theory behind the operation of a basic spectrometer. The Light and Energy tools module, which received support from the National Science Foundation, has been developed under the direction of the ChemLinks Coalition--headed by Beloit College; and The ModularChem Consortium, MC2, headed by the University of California at Berkeley. The Project Director is Marco Molinaro from the University of California at Berkeley; the Project Manager is Susan Walden; Susan Ketchner and Leighanne McConnaughey are also members of the team for this excellent teaching site. For your information, all of the applets will soon be moving, along with the MC2 site, but the old addresses will still work. The next place to explore is Physics 2000 at http://www.colorado.edu/physics/2000/introduction.html. The introductory graphic is a harbinger of good things to come: move the negatively charged particle and see the water molecule spin in response to the position of the charged particle. One goal of the Physics 2000 Educational Initiative is to make physics more accessible to students and people of all ages. Sounds like a good goal for all sciences! One of the first sections is called Einstein's Legacy. Here you can find spectral lines explained in terms of team colors for rival football squads ( http://www.colorado.edu/physics/2000/quantumzone/index.html). Choose from 20 elements to see characteristic emission spectra. The cartoon teachers and students help explain emission spectra. Great applets compare the Bohr atom and the Schrödinger model as well as emission and absorption ( http://www.colorado.edu/physics/2000/quantumzone/schroedinger.html). Einstein's Legacy has many topics: X-rays and CAT Scans, Electromagnetic Waves and Particles, the Quantum Atom, Microwave Ovens, Lasers, and TV & Laptop Screens. Several topics also have sections for the advanced student. 
One of those advanced sections is part of the second major section of Physics 2000: The Atomic Lab. Two topics are Interference Experiments and Bose-Einstein Condensate. An applet illustrating Laser Cooling is at http://www.colorado.edu/physics/2000/bec/lascool1.html. Next go on to Evaporative Cooling at http://www.colorado.edu/physics/2000/bec/evap_cool.html. The cartoon professors begin the explanation with a picture of steam rising from a cup of hot coffee. Next is an applet with atoms in a parabolic magnetic trap at http://www.colorado.edu/physics/2000/applets/bec.html. The height of the magnetic trap can be changed in order to allow for escape of the most energetic atoms, resulting in cooling so that the Bose-Einstein Condensate is formed. Physics 2000 demands robust computing power. Check the system requirements on the introductory screen before venturing too far into this site. Martin V. Goldman, from the University of Colorado at Boulder, is the Director of Physics 2000, which received support from the Colorado Commission on Higher Education and the National Science Foundation. David Rea is the Technical Director, and many others help make this excellent site possible. Mark your calendars: October 31 through December 3, 1999! Bookmark this site--http://www.ched-ccce.org/confchem/1999/d/index.html--and sign up. The Winter 1999 CONFCHEM Online Conference will focus on Developments in Spectroscopy and Innovative Strategies for Teaching Spectroscopy in the Undergraduate Curriculum. Scott Van Bramer of Widener University is the conference chair. Experts will present six papers, each to be followed by online discussions. CONFCHEM Online Conferences are sponsored by the American Chemical Society Division of Chemical Education's Committee on Computers in Chemical Education (CCCE). Several Online Conferences are held each year--all are well worth your time.
World Wide Web Addresses:
EMSpectrum Explorer -- http://mc2.cchem.berkeley.edu/chemcnx/light_energy/EMSpectrum/emspectrum.html
Light and Energy -- http://mc2.cchem.berkeley.edu/chemcnx/light_energy/index.html
Emission Spectrum Java Applet -- http://mc2.cchem.berkeley.edu/chemcnx/light_energy/applets/emission/index.html
Absorption Java Applet -- http://mc2.cchem.berkeley.edu/chemcnx/light_energy/applets/absorption/index.html
Removing Color with a Single Filter from Colored Light -- http://mc2.cchem.berkeley.edu/chemcnx/light_energy/applets/single/index.html
Physics 2000 -- http://www.colorado.edu/physics/2000/introduction.html
Einstein's Legacy: Spectral lines -- http://www.colorado.edu/physics/2000/quantumzone/index.html
Einstein's Legacy: Schrödinger's Atom -- http://www.colorado.edu/physics/2000/quantumzone/schroedinger.html
The Atomic Lab: Laser Cooling -- http://www.colorado.edu/physics/2000/bec/lascool1.html
The Atomic Lab: Evaporative Cooling in a Bose-Einstein Condensation -- http://www.colorado.edu/physics/2000/bec/evap_cool.html
The Winter 1999 CONFCHEM Online Conference: Developments in Spectroscopy and Innovative Strategies for Teaching Spectroscopy in the Undergraduate Curriculum -- http://www.ched-ccce.org/confchem/1999/d/index.html
Access date for all sites: July 1999
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors-Laplacian prior for DCT coefficients, sparsity prior, and graph-signal smoothness prior for image patches-to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with the previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal outperforms the state-of-the-art soft decoding algorithms in both objective and subjective evaluations noticeably.
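To make the graph-signal smoothness idea concrete, the following is a minimal Python sketch of a random-walk graph Laplacian built from pixel similarities on a 4-connected patch graph, with the quadratic form x^T L x as the smoothness measure. The Gaussian kernel width `sigma` and the 4-connectivity are illustrative assumptions; the paper's LERaG construction (left eigenvectors, prior weighting) is not reproduced here.

```python
import numpy as np

def random_walk_laplacian(patch, sigma=10.0):
    """Random-walk graph Laplacian L = I - D^{-1} W for a small image patch.

    Each pixel is a node; 4-connected neighbours are joined by edges weighted
    with a Gaussian kernel on intensity difference.  Illustrative sketch only.
    """
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):        # right and down neighbours
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    j = yy * w + xx
                    diff = float(patch[y, x]) - float(patch[yy, xx])
                    W[i, j] = W[j, i] = np.exp(-diff ** 2 / (2.0 * sigma ** 2))
    D = np.diag(W.sum(axis=1))
    return np.eye(n) - np.linalg.inv(D) @ W

def smoothness(patch):
    """Graph-signal smoothness x^T L x: small for piecewise-smooth patches."""
    x = patch.astype(float).ravel()
    return x @ random_walk_laplacian(patch) @ x
```

A soft decoder would add such a smoothness term, with a prior weight, to the data-fidelity constraints imposed by the quantization bin boundaries.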
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Webb, Edward L.; Evangelista, Arlene
2000-01-01
Studies that utilize astronaut-acquired orbital photographs for visual or digital classification require high-quality data to ensure accuracy. The majority of images available must be digitized from film and electronically transferred to scientific users. This study examined the effect of scanning spatial resolution (1200, 2400 pixels per inch [21.2 and 10.6 microns/pixel]), scanning density range option (Auto, Full) and compression ratio (non-lossy [TIFF], and lossy JPEG 10:1, 46:1, 83:1) on digital classification results of an orbital photograph from the NASA - Johnson Space Center archive. Qualitative results suggested that 1200 ppi was acceptable for visual interpretive uses for major land cover types. Moreover, Auto scanning density range was superior to Full density range. Quantitative assessment of the processing steps indicated that, while 2400 ppi scanning spatial resolution resulted in more classified polygons as well as a substantially greater proportion of polygons < 0.2 ha, overall agreement between 1200 ppi and 2400 ppi was quite high. JPEG compression up to approximately 46:1 also did not appear to have a major impact on quantitative classification characteristics. We conclude that both 1200 and 2400 ppi scanning resolutions are acceptable options for this level of land cover classification, as well as a compression ratio at or below approximately 46:1. Auto range density should always be used during scanning because it acquires more of the information from the film. The particular combination of scanning spatial resolution and compression level will require a case-by-case decision and will depend upon memory capabilities, analytical objectives and the spatial properties of the objects in the image.
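For readers who want to reproduce comparable compression levels on their own scans, here is a hedged Pillow sketch that searches the JPEG quality setting for an approximate target ratio such as 46:1. The quality sweep and the file name are assumptions for illustration, not the procedure used in the study.

```python
from io import BytesIO
from PIL import Image

def jpeg_at_ratio(img, target_ratio=46.0):
    """Return (JPEG bytes, quality) approximating raw_size / target_ratio.

    Sweeps Pillow's quality setting from high to low and stops at the first
    quality that meets the target ratio (best effort at the lowest quality
    otherwise).  Simple sketch only.
    """
    img = img.convert("RGB")
    raw_size = img.width * img.height * len(img.getbands())   # uncompressed bytes
    for quality in range(95, 4, -5):
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        if raw_size / buf.tell() >= target_ratio:
            break
    return buf.getvalue(), quality

# usage (hypothetical file name):
# data, q = jpeg_at_ratio(Image.open("scanned_orbital_photo.tif"), target_ratio=46.0)
```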
López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón
2009-10-01
The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
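As a rough illustration of the correction idea, the sketch below computes a standard circularity ("roundness") measure, 4πA/P², and applies a per-compression-level linear correction. The slope and intercept values shown are placeholders, not the coefficients fitted in the study.

```python
import math

def roundness(area, perimeter):
    """Common circularity measure: 4*pi*A / P^2 (equals 1.0 for a perfect circle)."""
    return 4.0 * math.pi * area / (perimeter ** 2)

# Hypothetical slope/intercept per JPEG compression level, as would be fitted by
# linear regression of TIFF roundness on compressed-image roundness
# (placeholder values, not the paper's).
CORRECTION = {"1:3": (1.02, -0.01), "1:23": (1.08, -0.03), "1:46": (1.15, -0.06)}

def corrected_roundness(measured, level):
    slope, intercept = CORRECTION[level]
    return slope * measured + intercept
```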
Energy efficiency of task allocation for embedded JPEG systems.
Fan, Yang-Hsin; Wu, Jan-Ou; Wang, San-Fu
2014-01-01
Embedded systems are everywhere, each repeatedly performing a few particular functions. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, embedded-system development methodology has been applied to the design of cloud embedded systems, making the applications of embedded systems more diverse. However, the more an embedded system works, the more energy it consumes. This study applies hyperrectangle technology (HT) to embedded systems to obtain energy savings. HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa, and can quickly construct an embedded system from a set of hardware circuits and software components. Moreover, it can rapidly explore the energy consumption of various embedded systems. The effects are demonstrated by assessing JPEG benchmarks. Experimental results show that HT achieves average energy savings of 29.84%, 2.07%, and 68.80% relative to GA, GHO, and Lin, respectively.
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
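A minimal sketch of the sensor-level idea, assuming NumPy, SciPy, and scikit-learn are available: a random ±1 measurement matrix stands in for the simplified pixel array, and the signal is recovered from far fewer measurements via orthogonal matching pursuit in a DCT basis. The signal length, measurement count, and sparsity are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.fft import dct, idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m = 256, 64                        # signal length and number of CS measurements (illustrative)

# Synthetic 1-D "pixel row" that is sparse in the DCT domain.
s = np.zeros(n)
s[[3, 17, 40]] = [5.0, -2.0, 1.5]
x = idct(s, norm="ortho")             # spatial-domain signal

Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # sensor-level measurement matrix
y = Phi @ x                           # what the simplified pixel array would output

D = dct(np.eye(n), axis=0, norm="ortho")                  # orthonormal DCT analysis matrix
A = Phi @ D.T                                             # effective sensing matrix for DCT coeffs
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False).fit(A, y)
x_hat = D.T @ omp.coef_                                    # reconstructed signal
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```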
Image steganalysis using Artificial Bee Colony algorithm
NASA Astrophysics Data System (ADS)
Sajedi, Hedieh
2017-09-01
Steganography is the science of secure communication in which the very presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of such secret communication. Processing a huge amount of information usually demands extensive execution time and computational resources. It is therefore necessary to employ a preprocessing phase that can moderate the execution time and the computational resources required. In this paper, we propose a new feature-based blind steganalysis method for distinguishing stego images from cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm. The ABC algorithm is inspired by the social behaviour of honeybees in their search for good food sources. In the proposed method, a wrapper-based approach is used, so that both classifier performance and the dimension of the selected feature vector guide the selection. The experiments are performed using two large data-sets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.
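The sketch below shows the wrapper-based fitness idea with a deliberately simplified ABC-style neighbourhood search (one feature flip per "food source"). The classifier, penalty weight, and population parameters are assumptions, and the full employed/onlooker/scout phases of ABC are omitted.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(X, y, mask, size_penalty=0.01):
    """Wrapper fitness: cross-validated SVM accuracy on the selected features,
    lightly penalised by the fraction of features kept."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
    return acc - size_penalty * mask.mean()

def simple_bee_search(X, y, n_sources=10, iters=20, seed=0):
    """Highly simplified ABC-style search: a population of feature subsets,
    each perturbed by flipping one feature, keeping improvements."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    sources = rng.random((n_sources, d)) < 0.5          # boolean feature masks
    scores = np.array([fitness(X, y, m) for m in sources])
    for _ in range(iters):
        for i in range(n_sources):
            cand = sources[i].copy()
            cand[rng.integers(d)] ^= True               # neighbourhood move
            sc = fitness(X, y, cand)
            if sc > scores[i]:
                sources[i], scores[i] = cand, sc
    best = scores.argmax()
    return sources[best], scores[best]

# usage sketch: mask, score = simple_bee_search(steganalysis_features, labels)
```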
Compressing images for the Internet
NASA Astrophysics Data System (ADS)
Beretta, Giordano B.
1998-01-01
The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.
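The practical lever discussed here, trading chroma subsampling and quality against text fidelity, can be exercised directly from Pillow, as in this hedged sketch; the file names are hypothetical and the exact settings in the paper's methodology may differ.

```python
from PIL import Image

img = Image.open("web_page_graphic.png").convert("RGB")   # hypothetical input

# Coloured text and sharp graphics suffer most from 4:2:0 chroma subsampling,
# so keep full chroma resolution (4:4:4) and a moderately high quality setting.
img.save("graphic_text_friendly.jpg", format="JPEG",
         quality=85, subsampling=0, optimize=True)

# A photographic region tolerates subsampling; default settings give a smaller file.
img.save("graphic_default.jpg", format="JPEG", quality=75)
```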
Setti, E; Musumeci, R
2001-06-01
The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Experts Group (JPEG) and Graphics Interchange Format (GIF). Currently, neither browser can display radiologic images in native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with some web browsers, even the older versions. The software is free and available from the author.
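The core display operation such a viewer performs is a window/level mapping of DICOM pixel values to 8-bit grey levels. Below is a hedged Python sketch of that mapping using pydicom (the authors' implementation is a Java applet); the helper for multi-valued window tags and the example file name are assumptions.

```python
import numpy as np
import pydicom

def _scalar(value):
    """Return the first element of a possibly multi-valued DICOM attribute."""
    try:
        return float(value[0])
    except TypeError:
        return float(value)

def window_to_8bit(ds, center=None, width=None):
    """Map DICOM pixel data to 8-bit grey levels with a window/level transform."""
    img = ds.pixel_array.astype(float)
    img = img * float(getattr(ds, "RescaleSlope", 1.0)) \
              + float(getattr(ds, "RescaleIntercept", 0.0))
    center = center if center is not None else _scalar(ds.WindowCenter)
    width = width if width is not None else _scalar(ds.WindowWidth)
    lo, hi = center - width / 2.0, center + width / 2.0
    out = np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (out * 255).astype(np.uint8)

# usage (hypothetical file name):
# arr = window_to_8bit(pydicom.dcmread("chest_ct_slice.dcm"))
```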
Comparing Examples: WebAssign versus Textbook
NASA Astrophysics Data System (ADS)
Richards, Evan; Polak, Jeff; Hardin, Ashley; Risley, John, , Dr.
2005-11-01
Research shows students can learn from worked examples.^1 This pilot study compared two groups of students' performance (10 each) in solving physics problems. One group had access to interactive examples^2 released in WebAssign^3, while the other group had access to the counterpart textbook examples. Verbal data from students in problem solving sessions was collected using a think aloud protocol^4 and the data was analyzed using Chi's procedures.^5 An explanation of the methodology and results will be presented. Future phases of this pilot study based upon these results will also be discussed. ^1Atkinson, R.K., Derry, S.J., Renkl A., Wortham, D. (2000). ``Learning from Examples: Instructional Principles from the Worked Examples Research'', Review of Educational Research, vol. 70, n. 2, pp. 181-214. ^2Serway, R.A. & Faughn, J.S. (2006). College Physics (7^th ed.). Belmont, CA: Thomson Brooks/Cole. ^3 see www.webassign.net ^4 Ericsson, K.A. & Simon, H.A. (1984). Protocol Analysis: Verbal Reports as Data. Cambridge, Massachusetts: The MIT Press. ^5 Chi, Michelene T.H. (1997). ``Quantifying Qualitative Analyses of Verbal Data: A Practical Guide,'' The Journal of the Learning Sciences, vol. 6, n. 3, pp. 271-315.
Franca, R; Rebora, P; Bertorello, N; Fagioli, F; Conter, V; Biondi, A; Colombini, A; Micalizzi, C; Zecca, M; Parasole, R; Petruzziello, F; Basso, G; Putti, M C; Locatelli, F; d'Adamo, P; Valsecchi, M G; Decorti, G; Rabusin, M
2017-01-01
Drug-related toxicities represent an important clinical concern in chemotherapy, genetic variants could help tailoring treatment to patient. A pharmacogenetic multicentric study was performed on 508 pediatric acute lymphoblastic leukemia patients treated with AIEOP-BFM 2000 protocol: 28 variants were genotyped by VeraCode and Taqman technologies, deletions of GST-M1 and GST-T1 by multiplex PCR. Toxicities were derived from a central database: 251 patients (49.4%) experienced at least one gastrointestinal (GI) or hepatic (HEP) or neurological (NEU) grade III/IV episode during the remission induction phase: GI occurred in 63 patients (12.4%); HEP in 204 (40.2%) and NEU in 44 (8.7%). Logistic regression model adjusted for sex, risk and treatment phase revealed that ITPA rs1127354 homozygous mutated patients showed an increased risk of severe GI and NEU. ABCC1 rs246240 and ADORA2A rs2236624 homozygous mutated genotypes were associated to NEU and HEP, respectively. These three variants could be putative predictive markers for chemotherapy-related toxicities in AIEOP-BFM protocols.
Abdel-Aal, Wafaa; Ghaffar, Esmat Abdel; El Shabrawy, Osama
2013-10-01
Globally, ethical issues in research are becoming of major importance, being well established in developed countries with little information about research ethics committees (RECs) in Africa to assess whether these committees are actually improving the protection of human research participants. To describe the establishment, structure, function, operations and outcome of the Medical Research Ethics Committee (MREC) of the National Research Center (NRC) of Egypt from 2003 to 2011. The committee established its regulatory rules for human and animal research ethics based on the Declaration of Helsinki 2000-2008 and WHO regulations 2000-2011. There were 974 protocols revised in the 7 years (2005-2011). The outcome of the committee discussions was to clear 262 of the protocols without conditions. A full 556 were cleared conditionally upon completion of modifications. Another 118 were deferred pending action and further consideration at a subsequent meeting. And 16 researchers did not reply, while 22 protocols were rejected. Since 2005, the MREC in NRC Egypt has built up considerable experience of evaluating the ethical issues arising within the field of medical research.
Montironi, R; Thompson, D; Scarpelli, M; Bartels, H G; Hamilton, P W; Da Silva, V D; Sakr, W A; Weyn, B; Van Daele, A; Bartels, P H
2002-01-01
Objective: To describe practical experiences in the sharing of very large digital databases of histopathological imagery via the Internet, by investigators working in Europe, North America, and South America. Materials: Experiences derived from medium power (sampling density 2.4 pixels/μm) and high power (6 pixels/μm) imagery of prostatic tissues, skin shave biopsies, breast lesions, endometrial sections, and colonic lesions. Most of the data included in this paper were from prostate. In particular, 1168 histological images of normal prostate, high grade prostatic intraepithelial neoplasia (PIN), and prostate cancer (PCa) were recorded, archived in an image format developed at the Optical Sciences Center (OSC), University of Arizona, and transmitted to Ancona, Italy, as JPEG (joint photographic experts group) files. Images were downloaded for review using the Internet application FTP (file transfer protocol). The images were then sent from Ancona to other laboratories for additional histopathological review and quantitative analyses. They were viewed using Adobe Photoshop, Paint Shop Pro, and Imaging for Windows. For karyometric analysis full resolution imagery was used, whereas histometric analyses were carried out on JPEG imagery also. Results: The three applications of the telecommunication system were remote histopathological assessment, remote data acquisition, and selection of material. Typical data volumes for each project ranged from 120 megabytes to one gigabyte, and transmission times were usually less than one hour. There were only negligible transmission errors, and no problem in efficient communication, although real time communication was an exception, because of the time zone differences. As far as the remote histopathological assessment of the prostate was concerned, agreement between the pathologist's electronic diagnosis and the diagnostic label applied to the images by the recording scientist was present in 96.6% of instances. When these images were forwarded to two pathologists, the level of concordance with the reviewing pathologist who originally downloaded the files from Tucson was as high as 97.2% and 98.0%. Initial results of studies made by researchers belonging to our group but located in other laboratories showed the feasibility of performing quantitative analyses on the same images. Conclusions: These experiences show that diagnostic teleconsultation and quantitative image analyses via the Internet are not only feasible, but practical, and allow a close collaboration between researchers widely separated by geographical distance and analytical resources. PMID:12037030
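The transfer step described, pulling JPEG files from a remote archive over FTP, looks roughly like the following Python sketch; the host name, credentials, and paths are hypothetical.

```python
from ftplib import FTP

# Hypothetical server, credentials, and paths for illustration only.
with FTP("images.example.edu") as ftp:
    ftp.login(user="reviewer", passwd="secret")
    ftp.cwd("/prostate/pin_cases")
    with open("case_0001.jpg", "wb") as fh:
        ftp.retrbinary("RETR case_0001.jpg", fh.write)   # download one JPEG image
```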
Another Look at an Enigmatic New World
NASA Astrophysics Data System (ADS)
2005-02-01
VLT NACO Performs Outstanding Observations of Titan's Atmosphere and Surface On January 14, 2005, the ESA Huygens probe arrived at Saturn's largest satellite, Titan. After a faultless descent through the dense atmosphere, it touched down on the icy surface of this strange world from where it continued to transmit precious data back to the Earth. Several of the world's large ground-based telescopes were also active during this exciting event, observing Titan before and near the Huygens encounter, within the framework of a dedicated campaign coordinated by the members of the Huygens Project Scientist Team. Indeed, large astronomical telescopes with state-of-the art adaptive optics systems allow scientists to image Titan's disc in quite some detail. Moreover, ground-based observations are not restricted to the limited period of the fly-by of Cassini and landing of Huygens. They hence complement ideally the data gathered by this NASA/ESA mission, further optimising the overall scientific return. A group of astronomers [1] observed Titan with ESO's Very Large Telescope (VLT) at the Paranal Observatory (Chile) during the nights from 14 to 16 January, by means of the adaptive optics NAOS/CONICA instrument mounted on the 8.2-m Yepun telescope [2]. The observations were carried out in several modes, resulting in a series of fine images and detailed spectra of this mysterious moon. They complement earlier VLT observations of Titan, cf. ESO Press Photos 08/04 and ESO Press Release 09/04. The highest contrast images ESO PR Photo 04a/05 ESO PR Photo 04a/05 Titan's surface (NACO/VLT) [Preview - JPEG: 400 x 712 pix - 64k] [Normal - JPEG: 800 x 1424 pix - 524k] ESO PR Photo 04b/05 ESO PR Photo 04b/05 Map of Titan's Surface (NACO/VLT) [Preview - JPEG: 400 x 651 pix - 41k] [Normal - JPEG: 800 x 1301 pix - 432k] Caption: ESO PR Photo 04a/05 shows Titan's trailing hemisphere [3] with the Huygens landing site marked as an "X". The left image was taken with NACO and a narrow-band filter centred at 2 microns. On the right is the NACO/SDI image of the same location showing Titan's surface through the 1.6 micron methane window. A spherical projection with coordinates on Titan is overplotted. ESO PR Photo 04b/05 is a map of Titan taken with NACO at 1.28 micron (a methane window allowing it to probe down to the surface). On the leading side of Titan, the bright equatorial feature ("Xanadu") is dominating. On the trailing side, the landing site of the Huygens probe is indicated. ESO PR Photo 04c/05 ESO PR Photo 04c/05 Titan, the Enigmatic Moon, and Huygens Landing Site (NACO-SDI/VLT and Cassini/ISS) [Preview - JPEG: 400 x 589 pix - 40k] [Normal - JPEG: 800 x 1178 pix - 290k] Caption: ESO PR Photo 04c/05 is a comparison between the NACO/SDI image and an image taken by Cassini/ISS while approaching Titan. The Cassini image shows the Huygens landing site map wrapped around Titan, rotated to the same position as the January NACO SDI observations. The yellow "X" marks the landing site of the ESA Huygens probe. The Cassini/ISS image is courtesy of NASA, JPL, Space Science Institute (see http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=36222). The coloured lines delineate the regions that were imaged by Cassini at differing resolutions. The lower-resolution imaging sequences are outlined in blue. Other areas have been specifically targeted for moderate and high resolution mosaicking of surface features. 
These include the site where the European Space Agency's Huygens probe has touched down in mid-January (marked with the yellow X), part of the bright region named Xanadu (easternmost extent of the area covered), and a boundary between dark and bright regions. ESO PR Photo 04d/05 ESO PR Photo 04d/05 Evolution of the Atmosphere of Titan (NACO/VLT) [Preview - JPEG: 400 x 902 pix - 40k] [Normal - JPEG: 800 x 1804 pix - 320k] Caption: ESO PR Photo 04d/05 is an image of Titan's atmosphere at 2.12 microns as observed with NACO on the VLT at three different epochs from 2002 till now. Titan's atmosphere exhibits seasonal and meteorological changes which can clearly be seen here : the North-South asymmetry - indicative of changes in the chemical composition in one pole or the other, depending on the season - is now clearly in favour of the North pole. Indeed, the situation has reversed with respect to a few years ago when the South pole was brighter. Also visible in these images is a bright feature in the South pole, found to be presently dimming after having appeared very bright from 2000 to 2003. The differences in size are due to the variation in the distance to Earth of Saturn and its planetary system. The new images show Titan's atmosphere and surface at various near-infrared spectral bands. The surface of Titan's trailing side is visible in images taken through narrow-band filters at wavelengths 1.28, 1.6 and 2.0 microns. They correspond to the so-called "methane windows" which allow to peer all the way through the lower Titan atmosphere to the surface. On the other hand, Titan's atmosphere is visible through filters centred in the wings of these methane bands, e.g. at 2.12 and 2.17 microns. Eric Gendron of the Paris Observatory in France and leader of the team, is extremely pleased: "We believe that some of these images are the highest-contrast images of Titan ever taken with any ground-based or earth-orbiting telescope." The excellent images of Titan's surface show the location of the Huygens landing site in much detail. In particular, those centred at wavelength 1.6 micron and obtained with the Simultaneous Differential Imager (SDI) on NACO [4] provide the highest contrast and best views. This is firstly because the filters match the 1.6 micron methane window most accurately. Secondly, it is possible to get an even clearer view of the surface by subtracting accurately the simultaneously recorded images of the atmospheric haze, taken at wavelength 1.625 micron. The images show the great complexity of Titan's trailing side, which was earlier thought to be very dark. However, it is now obvious that bright and dark regions cover the field of these images. The best resolution achieved on the surface features is about 0.039 arcsec, corresponding to 200 km on Titan. ESO PR Photo 04c/04 illustrates the striking agreement between the NACO/SDI image taken with the VLT from the ground and the ISS/Cassini map. The images of Titan's atmosphere at 2.12 microns show a still-bright south pole with an additional atmospheric bright feature, which may be clouds or some other meteorological phenomena. The astronomers have followed it since 2002 with NACO and notice that it seems to be fading with time. At 2.17 microns, this feature is not visible and the north-south asymmetry - also known as "Titan's smile" - is clearly in favour in the north. The two filters probe different altitude levels and the images thus provide information about the extent and evolution of the north-south asymmetry. 
Probing the composition of the surface ESO PR Photo 04e/05 ESO PR Photo 04e/05 Spectrum of Two Regions on Titan (NACO/VLT) [Preview - JPEG: 400 x 623 pix - 44k] [Normal - JPEG: 800 x 1246 pix - 283k] Caption: ESO PR Photo 04e/05 represents two of the many spectra obtained on January 16, 2005 with NACO and covering the 2.02 to 2.53 micron range. The blue spectrum corresponds to the brightest region on Titan's surface within the slit, while the red spectrum corresponds to the dark area around the Huygens landing site. In the methane band, the two spectra are equal, indicating a similar atmospheric content; in the methane window centred at 2.0 microns, the spectra show differences in brightness, but are in phase. This suggests that there is no real variation in the composition beyond different atmospheric mixings. ESO PR Photo 04f/05 ESO PR Photo 04f/05 Imaging Titan with a Tunable Filter (NACO Fabry-Perot/VLT) [Preview - JPEG: 400 x 718 pix - 44k] [Normal - JPEG: 800 x 1435 pix - 326k] Caption: ESO PR Photo 04f/05 presents a series of images of Titan taken around the 2.0 micron methane window probing different layers of the atmosphere and the surface. The images are currently under thorough processing and analysis so as to reveal any subtle variations in wavelength that could be indicative of the spectral response of the various surface components, thus allowing the astronomers to identify them. Because the astronomers have also obtained spectroscopic data at different wavelengths, they will be able to recover useful information on the surface composition. The Cassini/VIMS instrument explores Titan's surface in the infrared range and, being so close to this moon, it obtains spectra with a much better spatial resolution than what is possible with Earth-based telescopes. However, with NACO at the VLT, the astronomers have the advantage of observing Titan with considerably higher spectral resolution, and thus to gain more detailed spectral information about the composition, etc. The observations therefore complement each other. Once the composition of the surface at the location of the Huygens landing is known from the detailed analysis of the in-situ measurements, it should become possible to learn the nature of the surface features elsewhere on Titan by combining the Huygens results with more extended cartography from Cassini as well as from VLT observations to come. More information Results on Titan obtained with data from NACO/VLT are in press in the journal Icarus ("Maps of Titan's surface from 1 to 2.5 micron" by A. Coustenis et al.). Previous images of Titan obtained with NACO and with NACO/SDI are accessible as ESO PR Photos 08/04 and ESO PR Photos 11/04. See also these Press Releases for additional scientific references.
Salmen, Marcus; Ewy, Gordon A; Sasson, Comilla
2012-01-01
To determine whether the use of cardiocerebral resuscitation (CCR) or AHA/ERC 2005 Resuscitation Guidelines improved patient outcomes from out-of-hospital cardiac arrest (OHCA) compared to older guidelines. Systematic review and meta-analysis. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases. We also hand-searched study references and consulted experts. Design: randomised controlled trials and observational studies. OHCA patients, age >17 years. 'Control' protocol versus 'Study' protocol. 'Control' protocol defined as AHA/ERC 2000 Guidelines for cardiopulmonary resuscitation (CPR). 'Study' protocol defined as AHA/ERC 2005 Guidelines for CPR, or a CCR protocol. Survival to hospital discharge. High-quality or medium-quality studies, as measured by the Newcastle Ottawa Scale using predefined categories. Twelve observational studies met inclusion criteria. All the three studies using CCR demonstrated significantly improved survival compared to use of AHA 2000 Guidelines, as did five of the nine studies using AHA/ERC 2005 Guidelines. Pooled data demonstrate that use of a CCR protocol has an unadjusted OR of 2.26 (95% CI 1.64 to 3.12) for survival to hospital discharge among all cardiac arrest patients. Among witnessed ventricular fibrillation/ventricular tachycardia (VF/VT) patients, CCR increased survival by an OR of 2.98 (95% CI 1.92 to 4.62). Studies using AHA/ERC 2005 Guidelines showed an overall trend towards increased survival, but significant heterogeneity existed among these studies. We demonstrate an association with improved survival from OHCA when CCR protocols or AHA/ERC 2005 Guidelines are compared to use of older guidelines. In the subgroup of patients with witnessed VF/VT, there was a threefold increase in OHCA survival when CCR was used. CCR appears to be a promising resuscitation protocol for Emergency Medical Services providers in increasing survival from OHCA. Future research will need to be conducted to directly compare AHA/ERC 2010 Guidelines with the CCR approach.
How C2 Goes Wrong (Briefing Chart)
2014-06-01
[Slide excerpts only; recoverable content: Cases (3): Disaster/Emergency Response (Cont.) -- Hillsborough disaster; Columbine High School; concluding "The Punchline" slide.]
Genomics & Genetics | National Agricultural Library
ERIC Educational Resources Information Center
Entman, Robert M.; Katz, Michael L.
The Aspen Institute's Communications and Society Program convened leaders and experts in the telecommunications and related fields to address telecommunications regulation in an IP (Internet Protocols) environment at the 15th annual Aspen Institute Telecommunications Policy Conference (Aspen, Colorado, August 12-16, 2000). The report from this…
Cario, Gunnar; Zimmermann, Martin; Romey, Renja; Gesk, Stefan; Vater, Inga; Harbott, Jochen; Schrauder, André; Moericke, Anja; Izraeli, Shai; Akasaka, Takashi; Dyer, Martin J S; Siebert, Reiner; Schrappe, Martin; Stanulla, Martin
2010-07-01
High-level expression of the cytokine receptor-like factor 2 gene, CRLF2, in precursor B-cell acute lymphoblastic leukemia (pB-ALL) was shown to be caused by a translocation involving the IGH@ locus or a deletion juxtaposing CRLF2 with the P2RY8 promoter. To assess its possible prognostic value, CRLF2 expression was analyzed in 555 childhood pB-ALL patients treated according to the Acute Lymphoblastic Leukemia Berlin-Frankfurt-Münster 2000 (ALL-BFM 2000) protocol. Besides CRLF2 rearrangements, high-level CRLF2 expression was seen in cases with supernumerary copies of the CRLF2 locus. On the basis of the detection of CRLF2 rearrangements, a CRLF2 high-expression group (n = 49) was defined. This group had a 6-year relapse incidence of 31% plus or minus 8% compared with 11% plus or minus 1% in the CRLF2 low-expression group (P = .006). This difference was mainly attributable to an extremely high incidence of relapse (71% +/- 19%) in non-high-risk patients with P2RY8-CRLF2 rearrangement. The assessment of CRLF2 aberrations may therefore serve as new stratification tool in Berlin-Frankfurt-Münster-based protocols by identifying additional high-risk patients who may benefit from an intensified and/or targeted treatment.
OXYGEN-RICH SUPERNOVA REMNANT IN THE LARGE MAGELLANIC CLOUD
NASA Technical Reports Server (NTRS)
2002-01-01
This is a NASA Hubble Space Telescope image of the tattered debris of a star that exploded 3,000 years ago as a supernova. This supernova remnant, called N132D, lies 169,000 light-years away in the satellite galaxy, the Large Magellanic Cloud. A Hubble Wide Field Planetary Camera 2 image of the inner regions of the supernova remnant shows the complex collisions that take place as fast moving ejecta slam into cool, dense interstellar clouds. This level of detail in the expanding filaments could only be seen previously in much closer supernova remnants. Now, Hubble's capabilities extend the detailed study of supernovae out to the distance of a neighboring galaxy. Material thrown out from the interior of the exploded star at velocities of more than four million miles per hour (2,000 kilometers per second) plows into neighboring clouds to create luminescent shock fronts. The blue-green filaments in the image correspond to oxygen-rich gas ejected from the core of the star. The oxygen-rich filaments glow as they pass through a network of shock fronts reflected off dense interstellar clouds that surrounded the exploded star. These dense clouds, which appear as reddish filaments, also glow as the shock wave from the supernova crushes and heats the clouds. Supernova remnants provide a rare opportunity to observe directly the interiors of stars far more massive than our Sun. The precursor star to this remnant, which was located slightly below and left of center in the image, is estimated to have been 25 times the mass of our Sun. These stars 'cook' heavier elements through nuclear fusion, including oxygen, nitrogen, carbon, iron etc., and the titanic supernova explosions scatter this material back into space where it is used to create new generations of stars. This is the mechanism by which the gas and dust that formed our solar system became enriched with the elements that sustain life on this planet. Hubble spectroscopic observations will be used to determine the exact chemical composition of this nuclear- processed material, and thereby test theories of stellar evolution. The image shows a region of the remnant 50 light-years across. The supernova explosion should have been visible from Earth's southern hemisphere around 1,000 B.C., but there are no known historical records that chronicle what would have appeared as a 'new star' in the heavens. This 'true color' picture was made by superposing images taken on 9-10 August 1994 in three of the strongest optical emission lines: singly ionized sulfur (red), doubly ionized oxygen (green), and singly ionized oxygen (blue). Photo credit: Jon A. Morse (STScI) and NASA Investigating team: William P. Blair (PI; JHU), Michael A. Dopita (MSSSO), Robert P. Kirshner (Harvard), Knox S. Long (STScI), Jon A. Morse (STScI), John C. Raymond (SAO), Ralph S. Sutherland (UC-Boulder), and P. Frank Winkler (Middlebury). Image files in GIF and JPEG format may be accessed via anonymous ftp from oposite.stsci.edu in /pubinfo: GIF: /pubinfo/GIF/N132D.GIF JPEG: /pubinfo/JPEG/N132D.jpg The same images are available via World Wide Web from links in URL http://www.stsci.edu/public.html.
Postdoctoral Fellowship for Dr. Lindholm, Underwater Physiology and Medicine
2008-05-01
St Croix 2000) and/or an increased reliance on fast twitch muscle fibers that are dependent on glycogen and produce La. The changes in muscle and...resistance protocol (isometric) increases respiratory muscle strength but not endurance, while a protocol designed to increase respiratory muscle endurance... muscles (RMT). RMT minimized respiratory muscle fatigue and normalized the ventilatory response to increasing CO2 (CO2 sensitivity) and blood CO2 in CO2
Optimal erasure protection for scalably compressed video streams with limited retransmission.
Taubman, David; Thie, Johnson
2005-08-01
This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.
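To convey the flavour of the optimization (without the paper's PET machinery or retransmission hypotheses), here is a toy sketch that brute-forces a protection level per scalable layer to maximize expected received value under a byte budget. The layer sizes, values, and survival probabilities are illustrative assumptions, not a channel model from the paper.

```python
from itertools import product

# Each scalable layer has a value (distortion reduction) and a size in bytes.
layers = [{"value": 40.0, "size": 300},   # base layer
          {"value": 25.0, "size": 300},
          {"value": 10.0, "size": 300}]

# Protection options: (parity overhead factor, probability the layer survives).
options = [(1.0, 0.80), (1.3, 0.95), (1.6, 0.99)]

def best_assignment(budget):
    """Exhaustively pick one protection option per layer under a byte budget."""
    best = (None, -1.0)
    for choice in product(range(len(options)), repeat=len(layers)):
        cost = sum(layers[i]["size"] * options[c][0] for i, c in enumerate(choice))
        if cost > budget:
            continue
        # A layer is useful only if it and all earlier layers arrive (scalable stream).
        exp_value, p_prefix = 0.0, 1.0
        for i, c in enumerate(choice):
            p_prefix *= options[c][1]
            exp_value += layers[i]["value"] * p_prefix
        if exp_value > best[1]:
            best = (choice, exp_value)
    return best

print(best_assignment(budget=1300))
```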
SHD digital cinema distribution over a long distance network of Internet2
NASA Astrophysics Data System (ADS)
Yamaguchi, Takahiro; Shirai, Daisuke; Fujii, Tatsuya; Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
2003-06-01
We have developed a prototype SHD (Super High Definition) digital cinema distribution system that can store, transmit and display eight-million-pixel motion pictures that have the image quality of a 35-mm film movie. The system contains a video server, a real-time decoder, and a D-ILA projector. Using a gigabit Ethernet link and TCP/IP, the server transmits JPEG2000 compressed motion picture data streams to the decoder at transmission speeds as high as 300 Mbps. The received data streams are decompressed by the decoder, and then projected onto a screen via the projector. With this system, digital cinema contents can be distributed over a wide-area optical gigabit IP network. However, when digital cinema contents are delivered over long distances by using a gigabit IP network and TCP, the round-trip time increases and network throughput either stops rising or diminishes. In a long-distance SHD digital cinema transmission experiment performed on the Internet2 network in October 2002, we adopted enlargement of the TCP window, multiple TCP connections, and shaping function to control the data transmission quantity. As a result, we succeeded in transmitting the SHD digital cinema content data at about 300 Mbps between Chicago and Los Angeles, a distance of more than 3000 km.
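One of the measures mentioned, enlarging the TCP window, amounts to requesting larger socket buffers before connecting (the other measures, multiple parallel TCP connections and traffic shaping, are not shown). A hedged Python sketch with a hypothetical endpoint:

```python
import socket

HOST, PORT = "receiver.example.org", 5000      # hypothetical endpoint
WINDOW_BYTES = 8 * 1024 * 1024                 # large buffers for a long fat network

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the kernel for large send/receive buffers before connecting; the effective
# window also depends on OS limits (e.g. net.core.wmem_max on Linux).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, WINDOW_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WINDOW_BYTES)
sock.connect((HOST, PORT))
sock.sendall(b"\x00" * 65536)                  # stand-in for a JPEG2000 code-stream chunk
sock.close()
```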
21 CFR 892.2030 - Medical image digitizer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image digitizer. 892.2030 Section 892.2030 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std.). [63 FR 23387, Apr. 29...
21 CFR 892.2040 - Medical image hardcopy device.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image hardcopy device. 892.2040 Section 892.2040 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture...
NASA Astrophysics Data System (ADS)
Ferrini, V.; Fornari, D. J.; Shank, T.; Tivey, M.; Kelley, D. S.; Glickson, D.; Carbotte, S. M.; Howland, J.; Whitcomb, L. L.; Yoerger, D.
2004-12-01
Recent field programs at the East Pacific Rise and Juan de Fuca Ridge have resulted in the refinement of data processing protocols that enable the rapid creation of high-resolution (meter-scale) bathymetric maps from pencil-beam altimetric sonar data that are routinely collected during DSV Alvin dives. With the development of the appropriate processing tools, the Imagenex sonar, a permanent sensor on Alvin, can be used by a broad range of scientists permitting the analysis of various data sets within the context of high-quality bathymetric maps. The data processing protocol integrates depth data recorded with Alvin's Paroscientific pressure sensor with bathymetric soundings collected with an Imagenex 675 kHz articulating (scanning) sonar system, and high-resolution navigational data acquired with DVLNAV, which includes bottom lock Doppler sonar and long baseline (LBL) navigation. Together these data allow us, for the first time, to visualize portions of Ridge 2000 Integrated Study Sites (ISS) at 1-m vertical and horizontal resolution. These maps resolve morphological details of structures within the summit trough at scales that are relevant to biological communities (e.g. hydrothermal vents, lava pillars, trough walls), thus providing the important geologic context necessary to better understand spatial patterns associated with integrated biological-hydrothermal-geological processes. The Imagenex sonar is also a permanent sensor on the Jason2 ROV, which is also equipped with an SM2000 (200 kHz) near-bottom multibeam sonar. In the future, it is envisioned that near-bottom multibeam sonars will be standard sensors on all National Deep Submergence Facility (NDSF) vehicles. Streamlining data processing protocols makes these datasets more accessible to NDSF users and ensures broad compatibility between data formats among NDSF vehicle systems and allied vehicles (e.g. ABE). Establishing data processing protocols and software suites, routinely calibrating sensors (e.g. Paroscientific depth sensors), and ensuring good navigational benchmarks between various cruises to the Ridge 2000 ISS improves the capability and quality of rapidly produced high-resolution bathymetric maps enabling users to optimize their diving programs. This is especially important within the context of augmenting high-resolution bathymetric data collection in ISS areas (several cruises to the same area over multiple years) and investigating possible changes in seafloor topography, hydrothermal vent features and/or biological communities that are related to tectonic or volcanic events.
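A minimal sketch of the geometric core of such a processing chain: combine navigated vehicle position and pressure-derived depth with sonar range/angle returns, then bin the soundings onto a 1-m grid. Variable names and the simplified per-ping geometry are assumptions; attitude, sound-speed refraction, and navigation corrections, which the real protocol must handle, are omitted.

```python
import numpy as np

def grid_soundings(veh_x, veh_y, veh_depth, slant_range, beam_angle, cell=1.0):
    """Grid scanning-sonar soundings into a bathymetric map.

    All inputs are 1-D arrays, one element per sonar return: vehicle
    easting/northing (m), vehicle depth (m, positive down, e.g. from a
    pressure sensor), slant range (m), and across-track beam angle (rad).
    Returns mean seafloor depth per cell plus the bin edges (toy sketch only).
    """
    x = np.asarray(veh_x) + np.asarray(slant_range) * np.sin(beam_angle)
    y = np.asarray(veh_y)
    z = np.asarray(veh_depth) + np.asarray(slant_range) * np.cos(beam_angle)

    nx = max(int(np.ceil((x.max() - x.min()) / cell)), 1)
    ny = max(int(np.ceil((y.max() - y.min()) / cell)), 1)
    sums, xe, ye = np.histogram2d(x, y, bins=[nx, ny], weights=z)
    counts, _, _ = np.histogram2d(x, y, bins=[xe, ye])
    with np.errstate(invalid="ignore"):
        return sums / counts, xe, ye           # NaN where a cell received no soundings
```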
Compositional mining of multiple object API protocols through state abstraction.
Dai, Ziying; Mao, Xiaoguang; Lei, Yan; Qi, Yuhua; Wang, Rui; Gu, Bin
2013-01-01
API protocols specify correct sequences of method invocations. Despite their usefulness, API protocols are often unavailable in practice because writing them is cumbersome and error prone. Multiple object API protocols are more expressive than single object API protocols. However, the huge number of objects of typical object-oriented programs poses a major challenge to the automatic mining of multiple object API protocols: besides maintaining scalability, it is important to capture various object interactions. Current approaches utilize various heuristics to focus on small sets of methods. In this paper, we present a general, scalable, multiple object API protocols mining approach that can capture all object interactions. Our approach uses abstract field values to label object states during the mining process. We first mine single object typestates as finite state automata whose transitions are annotated with states of interacting objects before and after the execution of the corresponding method and then construct multiple object API protocols by composing these annotated single object typestates. We implement our approach for Java and evaluate it through a series of experiments.
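A toy sketch of the single-object part of the idea: mining a typestate automaton from traces whose events carry abstract field-value states. The trace format and the file-object example are assumptions; the annotation with interacting objects' states and the composition into multiple object protocols are not shown.

```python
from collections import defaultdict

def mine_typestate(traces):
    """Mine a per-class typestate automaton from instrumented call traces.

    Each trace is a list of (state_before, method, state_after) tuples, where
    the states are abstractions of the object's field values.  Returns the
    transitions as {state: {method: set(next_states)}} (illustrative sketch).
    """
    fsa = defaultdict(dict)
    for trace in traces:
        for before, method, after in trace:
            fsa[before].setdefault(method, set()).add(after)
    return dict(fsa)

# Toy traces for a file-like class: closed -> open -> ... -> closed
traces = [
    [("closed", "open", "open"), ("open", "read", "open"), ("open", "close", "closed")],
    [("closed", "open", "open"), ("open", "close", "closed")],
]
print(mine_typestate(traces))
```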
Old Galaxies in the Young Universe
NASA Astrophysics Data System (ADS)
2004-07-01
Very Large Telescope Unravels New Population of Very Old Massive Galaxies [1] Summary Current theories of the formation of galaxies are based on the hierarchical merging of smaller entities into larger and larger structures, starting from about the size of a stellar globular cluster and ending with clusters of galaxies. According to this scenario, it is assumed that no massive galaxies existed in the young universe. However, this view may now have to be revised. Using the multi-mode FORS2 instrument on the Very Large Telescope at Paranal, a team of Italian astronomers [2] have identified four remote galaxies, several times more massive than the Milky Way galaxy, or as massive as the heaviest galaxies in the present-day universe. Those galaxies must have formed when the Universe was only about 2,000 million years old, that is some 12,000 million years ago. The newly discovered objects may be members of a population of old massive galaxies undetected until now. The existence of such systems shows that the build-up of massive elliptical galaxies was much faster in the early Universe than expected from current theory. PR Photo 21a/04: Small Part of the K20 Field Showing the z=1.9 Elliptical Galaxy (ACS/HST). PR Photo 21b/04: Averaged Spectrum of Old Galaxies (FORS2/VLT). Hierarchical merging Galaxies are like islands in the Universe, made of stars as well as dust and gas clouds. They come in different sizes and shapes. Astronomers generally distinguish between spiral galaxies - like our own Milky Way, NGC 1232 or the famous Andromeda galaxy - and elliptical galaxies, the latter mostly containing old stars and having very little dust or gas. Some galaxies are intermediate between spirals and ellipticals and are referred to as lenticular or spheroidal galaxies. Galaxies are not only distinct in shape, they also vary in size: some may be as "light" as a stellar globular cluster in our Milky Way (i.e. they contain about the equivalent of a few million Suns) while others may be more massive than a million million Suns. Presently, more than half of the stars in the Universe are located in massive spheroidal galaxies. One of the main open questions of modern astrophysics and cosmology is how and when galaxies formed and evolved starting from the primordial gas that filled the early Universe. In the most popular current theory, galaxies in the local Universe are the result of a relatively slow process where small and less massive galaxies merge to gradually build up bigger and more massive galaxies. In this scenario, dubbed "hierarchical merging", the young Universe was populated by small galaxies with little mass, whereas the present Universe contains large, old and massive galaxies - the very last to form in the final stage of a slow assembling process. If this scenario were true, then one should not be able to find massive elliptical galaxies in the young universe. Or, in other words, due to the finite speed of light, there should be no such massive galaxies very far from us. And indeed, until now no old elliptical galaxy was known beyond a radio-galaxy at redshift 1.55 [3] that was discovered almost ten years ago. The K20 survey ESO PR Photo 21a/04 ESO PR Photo 21a/04 Part of the K20 Field, centred on the z=1.9 galaxy (ACS/HST) [Preview - JPEG: 400 x 424 pix - 45k] [Normal - JPEG: 800 x 847 pix - 712k] [Hires - JPEG: 1334 x 1413 pix - 1.3M] Caption: ESO PR Photo 21a/04 shows a small region in the K20 field surveyed by the astronomers. 
This region is centred on the newly discovered z=1.9 redshift galaxy. The image is based on frames acquired by the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope in the framework of the GOODS Public HST Treasury Program (P.I. M. Giavalisco, STScI, Baltimore, USA). They show the real colours of the galaxies. The four old massive spheroidal galaxies discovered in this survey appear very red compared to the other faint galaxies. (Image courtesy of Piero Rosati and Bob Fosbury, ESO Garching). In order to better understand the formation process of galaxies and to verify if the hierarchical merging scenario is valid, a team of Italian and ESO astronomers [2] used ESO's Very Large Telescope as a "time machine" to do a search for very remote elliptical galaxies. However, this is not trivial. Distant elliptical galaxies, with their content of old and red stars, must be very faint objects indeed at optical wavelengths as the bulk of their light is redshifted into the infrared part of the spectrum. Remote elliptical galaxies are thus among the most difficult observational targets even for the largest telescopes; this is also why the 1.55 redshift record has persisted for so long. But this challenge did not stop the researchers. They obtained deep optical spectroscopy with the multi-mode FORS2 instrument on the VLT for a sample of 546 faint objects found in a sky area of 52 arcmin2 (or about one tenth of the area of the Full Moon) known as the K20 field, and which partly overlaps with the GOODS-South area. Their perseverance paid off and they were rewarded by the discovery of four old, massive galaxies with redshifts between 1.6 and 1.9. These galaxies are seen when the Universe was only about 25% of its present age of 13,700 million years. For one of the galaxies, the K20 team benefited also from the database of publicly available spectra in the GOODS-South area taken by the ESO/GOODS team. A new population of galaxies ESO PR Photo 21b/04 ESO PR Photo 21b/04 Averaged Spectrum of Old Galaxies (FORS2/VLT). [Preview - JPEG: 400 x 496 pix - 58k] [Normal - JPEG: 800 x 992 pix - 366k] [Hires - JPEG: 1700 x 2108 pix - 928k] Caption: ESO PR Photo 21b/04 shows the averaged spectrum (blue) of the four newly discovered old massive galaxies compared to a set of template spectra. The bottom compares it with the spectrum of a star having a surface temperature of 7200 degrees (green) and 6800 degrees (red), respectively. The upper graph makes a comparison with synthetic spectra of simulated simple stellar populations with ages of 500, 1100 and 3000 million years. This figure demonstrates that the newly found galaxies mostly contain old low-mass stars and must have formed between 1,000 and 2,000 million years earlier than the epoch at which they are now seen. The newly discovered galaxies are thus seen when the Universe was about 3,500 million years old, i.e. 10,000 million years ago. But from the spectra taken, it appears that these galaxies contain stars with ages between 1,000 and 2,000 million years. This implies that the galaxies must have formed accordingly earlier, and that they must have essentially completed their assembly at a moment when the Universe was only 1,500 to 2,500 million years old. The galaxies appear to have masses in excess of one hundred thousand million solar masses and they are therefore of sizes similar to the most massive galaxies in the present-day Universe. 
Complementary images taken within the GOODS ("The Great Observatories Origins Deep Survey") survey by the Hubble Space Telescope show that these galaxies have structures and shapes more or less identical to those of the present-day massive elliptical galaxies. The new observations have therefore revealed a new population of very old and massive galaxies. The existence of such massive and old spheroidal galaxies in the early Universe shows that the assembly of the present-day massive elliptical galaxies started much earlier and was much faster than predicted by the hierarchical merging theory. Says Andrea Cimatti (INAF, Firenze, Italy), leader of the team: "Our new study now raises fundamental questions about our understanding and knowledge of the processes that regulated the genesis and the evolutionary history of the Universe and its structures."
Mars Sample Handling Protocol Workshop Series
NASA Technical Reports Server (NTRS)
Rummel, John D. (Editor); Race, Margaret S. (Editor); Acevedo, Sara (Technical Monitor)
2000-01-01
This document is the report resulting from the first workshop of the series on development of the criteria for a Mars sample handling protocol. Workshop 1 was held in Bethesda, Maryland on March 20-22, 2000. This report serves to document the proceedings of Workshop 1; it summarizes relevant background information, provides an overview of the deliberations to date, and helps frame issues that will need further attention or resolution in upcoming workshops. Specific recommendations are not part of this report.
Space Internet-Embedded Web Technologies Demonstration
NASA Technical Reports Server (NTRS)
Foltz, David A.
2001-01-01
The NASA Glenn Research Center recently demonstrated the ability to securely command and control space-based assets by using the Internet and standard Internet Protocols (IP). This is a significant accomplishment because future NASA missions will benefit by using Internet standards-based protocols. The benefits include reduced mission costs and increased mission efficiency. The Internet-Based Space Command and Control System Architecture demonstrated at the NASA Inspection 2000 event proved that this communications architecture is viable for future NASA missions.
Jarrod L. Pollock; Ragan M. Callaway; Giles C. Thelen; William E. Holben
2009-01-01
Considering variation, or conditionality, in the ways that plants compete for resources, facilitate or indirectly interact with each other has been crucial for understanding the relative importance of these interactions in the organization of plant communities (Tilman 1985; Wilson & Keddy 1986; Kitzberger, Steinaker & Veblen 2000; Levine 2000; Brooker...
Epidemiological Trends of Dengue Disease in Thailand (2000–2011): A Systematic Literature Review
Limkittikul, Kriengsak; Brett, Jeremy; L'Azou, Maïna
2014-01-01
A literature survey and analysis was conducted to describe the epidemiology of dengue disease in Thailand reported between 2000 and 2011. The literature search identified 610 relevant sources, 40 of which fulfilled the inclusion criteria defined in the review protocol. Peaks in the number of cases occurred during the review period in 2001, 2002, 2008 and 2010. A shift in age group predominance towards older ages continued through the review period. Disease incidence and deaths remained highest in children aged ≤15 years and case fatality rates were highest in young children. Heterogeneous geographical patterns were observed with higher incidence rates reported in the Southern region and serotype distribution varied in time and place. Gaps identified in epidemiological knowledge regarding dengue disease in Thailand provide several avenues for future research, in particular studies of seroprevalence. Protocol registration PROSPERO CRD42012002170 PMID:25375766
Griffith, J.A.; Stehman, S.V.; Sohl, Terry L.; Loveland, Thomas R.
2003-01-01
Temporal trends in landscape pattern metrics describing texture, patch shape and patch size were evaluated in the US Middle Atlantic Coastal Plain Ecoregion. The landscape pattern metrics were calculated for a sample of land use/cover data obtained for four points in time from 1973-1992. The multiple sampling dates permit evaluation of trend, whereas availability of only two sampling dates allows only evaluation of change. Observed statistically significant trends in the landscape pattern metrics demonstrated that the sampling-based monitoring protocol was able to detect a trend toward a more fine-grained landscape in this ecoregion. This sampling and analysis protocol is being extended spatially to the remaining 83 ecoregions in the US and temporally to the year 2000 to provide a national and regional synthesis of the temporal and spatial dynamics of landscape pattern covering the period 1973-2000.
Bhullar, Indermeet S; Tepas, Joseph J; Siragusa, Daniel; Loper, Todd; Kerwin, Andrew; Frykberg, Eric R
2017-04-01
Nonoperative management (NOM) of hemodynamically stable high-grade (IV-V) blunt splenic trauma remains controversial given the high failure rates (19%) that persist despite angioembolization (AE) protocols. The NOM protocol was modified in 2011 to include mandatory AE of all grade (IV-V) injuries without contrast blush (CB) along with selective AE of grade (I-V) with CB. The purpose of this study was to determine if this new AE (NAE) protocol significantly lowered the failure rates for grade (IV-V) injuries allowing for safe observation without surgery and if the exclusion of grade III injuries allowed for the prevention of unnecessary angiograms without affecting the overall failure rates. The records of patients with blunt splenic trauma from January 2000 to October 2014 at a Level I trauma center were retrospectively reviewed. Patients were divided into two groups and failure of NOM (FNOM) rates compared: NAE protocol (2011-2014) with mandatory AE for all grade (IV-V) injuries without CB and selective AE for grade (I-V) with CB versus old AE (OAE) protocol (2000-2010) with selective AE for grade (I-V) with CB. Seven hundred twelve patients underwent NOM with 522 (73%) in the OAE group and 190 (27%) in the NAE group. Evolving from the OAE to the NAE strategy resulted in a significantly lower FNOM rate for the overall group (grade I-V) (OAE vs. NAE, 4% to 1%, p = 0.04) and the grade (IV-V) group (OAE vs. NAE, 19% vs. 3%, p = 0.01). Angiograms were avoided in 113 grade (I-III) injuries with no CB; these patients had NOM with observation alone and none failed. A protocol using mandatory AE of all high-grade (IV-V) injuries without CB and selective AE of grade (I-V) with CB may provide for optimum salvage with safe NOM of the high-grade injuries (IV-V) and limited unnecessary angiograms. Therapeutic study, level IV.
INRiM Time and Frequency Laboratory: A New Data Management System (DMS)
2010-11-01
protocols defined at international level. The INRIM TFL, at present, is equipped with a set of measuring and control devices (SAD – Sistema di...).
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
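To make the block-transform pipeline described in this abstract concrete, the following minimal Python sketch transforms one 8x8 block with an orthonormal DCT, quantizes the coefficients with a uniform scalar quantizer, and reconstructs the block. The quantization table and the toy 8x8 block are illustrative assumptions, not values from the paper or from the JPEG standard.

import numpy as np

def dct_matrix(n=8):
    """Orthonormal n x n DCT-II matrix (the transform used by baseline JPEG)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

# Illustrative quantization table (an assumption, not the JPEG Annex K table):
# step sizes grow toward high frequencies, discarding detail the eye misses.
Q = 16.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))

def compress_block(block, Q, C):
    """Transform, quantize and reconstruct one 8x8 block; 'levels' is what an
    entropy coder would actually encode."""
    coeffs = C @ (block - 128.0) @ C.T         # forward 2-D DCT (level shifted)
    levels = np.round(coeffs / Q)              # uniform scalar quantization
    recon = C.T @ (levels * Q) @ C + 128.0     # dequantize + inverse 2-D DCT
    return levels, recon

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)   # toy 8x8 tile
    levels, recon = compress_block(block, Q, dct_matrix())
    print("nonzero quantized coefficients:", int(np.count_nonzero(levels)))
    print("max reconstruction error      :", float(np.abs(block - recon).max()))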
NASA Technical Reports Server (NTRS)
Stanboli, Alice
2013-01-01
Phxtelemproc is a C/C++ based telemetry processing program that processes SFDU telemetry packets from the Telemetry Data System (TDS). It generates Experiment Data Records (EDRs) for several instruments including surface stereo imager (SSI); robotic arm camera (RAC); robotic arm (RA); microscopy, electrochemistry, and conductivity analyzer (MECA); and the optical microscope (OM). It processes both uncompressed and compressed telemetry, and incorporates unique subroutines for the following compression algorithms: JPEG Arithmetic, JPEG Huffman, Rice, LUT3, RA, and SX4. This program was in the critical path for the daily command cycle of the Phoenix mission. The products generated by this program were part of the RA commanding process, as well as the SSI, RAC, OM, and MECA image and science analysis process. Its output products were used to advance science of the near polar regions of Mars, and were used to prove that water is found in abundance there. Phxtelemproc is part of the MIPL (Multi-mission Image Processing Laboratory) system. This software produced Level 1 products used to analyze images returned by in situ spacecraft. It ultimately assisted in operations, planning, commanding, science, and outreach.
Mars Sample Handling Protocol Workshop Series: Workshop 2a (Sterilization)
NASA Technical Reports Server (NTRS)
Rummel, John D. (Editor); Brunch, Carl W. (Editor); Setlow, Richard B. (Editor); DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
The Space Studies Board of the National Research Council provided a series of recommendations to NASA on planetary protection requirements for future Mars sample return missions. One of the Board's key findings suggested that, although current evidence about the martian surface suggests that life as we know it would not tolerate the planet's harsh environment, there remain 'plausible scenarios for extant microbial life on Mars.' Based on this conclusion, all samples returned from Mars should be considered potentially hazardous until it has been demonstrated that they are not. In response to the National Research Council's findings and recommendations, NASA has undertaken a series of workshops to address issues regarding NASA's proposed sample return missions. Work was previously undertaken at the Mars Sample Handling Protocol Workshop 1 (March 2000) to formulate recommendations on effective methods for life detection and/or biohazard testing on returned samples. The NASA Planetary Protection Officer convened the Mars Sample Sterilization Workshop, the third in the Mars Sample Handling Protocol Workshop Series, on November 28-30, 2000 at the Holiday Inn Rosslyn Westpark, Arlington, Virginia. Because of the short timeframe between this Workshop and the second Workshop in the Series, which was convened in October 2000 in Bethesda, Maryland, the two were developed in parallel; the Sterilization Workshop and its report have therefore been designated '2a'. The focus of Workshop 2a was to make recommendations for effective sterilization procedures for all phases of Mars sample return missions, and to answer the question of whether samples can be sterilized in such a way that their geological characteristics are not significantly altered.
VLT Images the Horsehead Nebula
NASA Astrophysics Data System (ADS)
2002-01-01
Summary A new, high-resolution colour image of one of the most photographed celestial objects, the famous "Horsehead Nebula" (IC 434) in Orion, has been produced from data stored in the VLT Science Archive. The original CCD frames were obtained in February 2000 with the FORS2 multi-mode instrument at the 8.2-m VLT KUEYEN telescope on Paranal (Chile). The comparatively large field-of-view of the FORS2 camera is optimally suited to show this extended object and its immediate surroundings in impressive detail. PR Photo 02a/02 : View of the full field around the Horsehead Nebula. PR Photo 02b/02 : Enlargement of a smaller area around the Horse's "mouth" A spectacular object ESO PR Photo 02a/02 ESO PR Photo 02a/02 [Preview - JPEG: 400 x 485 pix - 63k] [Normal - JPEG: 800 x 970 pix - 896k] [Full-Res - JPEG: 1951 x 2366 pix - 4.7M] ESO PR Photo 02b/02 ESO PR Photo 02b/02 [Preview - JPEG: 400 x 501 pix - 91k] [Normal - JPEG: 800 x 1002 pix - 888k] [Full-Res - JPEG: 1139 x 1427 pix - 1.9M] Caption : PR Photo 02a/02 is a reproduction of a composite colour image of the Horsehead Nebula and its immediate surroundings. It is based on three exposures in the visual part of the spectrum with the FORS2 multi-mode instrument at the 8.2-m KUEYEN telescope at Paranal. PR Photo 02b/02 is an enlargement of a smaller area. Technical information about these photos is available below. PR Photo 02a/02 shows the famous "Horsehead Nebula" , which is situated in the Orion molecular cloud complex. Its official name is Barnard 33 and it is a dust protrusion in the southern region of the dense dust cloud Lynds 1630 , on the edge of the HII region IC 434 . The distance to the region is about 1400 light-years (430 pc). This beautiful colour image was produced from three images obtained with the multi-mode FORS2 instrument at the second VLT Unit Telescope ( KUEYEN ), some months after it had "First Light", cf. PR 17/99. The image files were extracted from the VLT Science Archive Facility and the photo constitutes a fine example of the subsequent use of such valuable data. Details about how the photo was made and some weblinks to other pictures are available below. The comparatively large field-of-view of the FORS2 camera (nearly 7 x 7 arcmin 2 ) and the detector resolution (0.2 arcsec/pixel) make this instrument optimally suited for imaging of this extended object and its immediate surroundings. There is obviously a wealth of detail, and scientific information can be derived from the colours shown in this photo. Three predominant colours are seen in the image: red from the hydrogen (H-alpha) emission from the HII region; brown for the foreground obscuring dust; and blue-green for scattered starlight. The blue-green regions of the Horsehead Nebula correspond to regions not shadowed from the light from the stars in the H II region to the top of the picture and scatter stellar radiation towards the observer; these are thus `mountains' of dust . The Horse's `mane' is an area in which there is less dust along the line-of-sight and the background (H-alpha) emission from ionized hydrogen atoms can be seen through the foreground dust. A chaotic area At the high resolution of this image the Horsehead appears very chaotic with many wisps and filaments and diffuse dust . At the top of the figure there is a bright rim separating the dust from the HII region. This is an `ionization front' where the ionizing photons from the HII region are moving into the cloud, destroying the dust and the molecules and heating and ionizing the gas. 
Dust and molecules can exist in cold regions of interstellar space which are shielded from starlight by very large layers of gas and dust. Astronomers refer to elongated structures, such as the Horsehead, as `elephant trunks' (never mind the zoological confusion!) which are common on the boundaries of HII regions. They can also be seen elsewhere in Orion - another well-known example is the pillars of M16 (the "Eagle Nebula") made famous by the fine HST image - a new infrared view by VLT and ISAAC of this area was published last month, cf. PR 25/01. Such structures are only temporary as they are being constantly eroded by the expanding region of ionized gas and are destroyed on timescales of typically a few thousand years. The Horsehead as we see it today will therefore not last forever and minute changes will become observable as the time passes. The surroundings To the east of the Horsehead (at the bottom of this image) there is ample evidence for star formation in the Lynds 1630 dark cloud . Here, the reflection nebula NGC 2023 surrounds the hot B-type star HD 37903 and some Herbig Haro objects are found which represent high-speed gas outflows from very young stars with masses of around a solar mass. The HII region to the west (top of picture) is ionized by the strong radiation from the bright star Sigma Orionis , located just below the southernmost star in Orion's Belt. The chain of dust and molecular clouds are part of the Orion A and B regions (also known as Orion's `sword' ). Other images of the Horsehead Nebula The Horsehead Nebula is a favourite object for amateur astrophotographers and large numbers of images are available on the WWW. Due to its significant extension and the limited field-of-view of some professional telescopes, fewer photographs are available from today's front-line facilities, except from specialized wide-field instruments like Schmidt telescopes, etc. The links below point to a number of prominent photos obtained elsewhere and some contain further useful links to other sites with more information about this splendid sky area. "Astronomy Picture of the Day" : http://antwrp.gsfc.nasa.gov/apod/ap971025.html Hubble Heritage image : http://hubble.stsci.edu/news_.and._views/pr.cgi?2001%2B12 INT Wide-Field image : http://www.ing.iac.es/PR/science/horsehead.htm NOT image : http://www.not.iac.es/new/general/photos/astronomical/ NOAO Wide-Field image : http://www.noao.edu/outreach/press/pr01/ir0101.html Bill Arnett's site : http://www.seds.org/billa/twn/b33x.html Technical information about the photos PR Photo 02a/02 was produced from three images, obtained on February 1, 2000, with the FORS2 multi-mode instrument at the 8.2-m KUEYEN Unit Telescope and extracted from the VLT Science Archive Facility. The frames were obtained in the B-band (600 sec exposure; wavelength 429 nm; FWHM 88 nm; here rendered as blue), V-band (300 sec; 554 nm; 112 nm; green) and R-band (120 sec; 655 nm; 165 nm; red) The original pixel size is 0.2 arcsec. The photo shows the full field recorded in all three colours, approximately 6.5 x 6.7 arcmin 2. The seeing was about 0.75 arcsec. PR Photo 02b/02 is an enlargement of a smaller area, measuring 3.8 x 4.1 arcmin 2. North is to the left and east is down (the usual orientation for showing this object). The frames were recorded with a TK2048 SITe CCD and the ESO-FIERA Controller, built by the Optical Detector Team (ODT). The images were prepared by Cyril Cavadore (ESO-ODT) , by means of Prism software. 
ESO PR Photos 02a-b/02 may be reproduced, provided credit is given to the European Southern Observatory (ESO).
ESO and NSF Sign Agreement on ALMA
NASA Astrophysics Data System (ADS)
2003-02-01
Green Light for World's Most Powerful Radio Observatory On February 25, 2003, the European Southern Observatory (ESO) and the US National Science Foundation (NSF) are signing a historic agreement to construct and operate the world's largest and most powerful radio telescope, operating at millimeter and sub-millimeter wavelength. The Director General of ESO, Dr. Catherine Cesarsky, and the Director of the NSF, Dr. Rita Colwell, act for their respective organizations. Known as the Atacama Large Millimeter Array (ALMA), the future facility will encompass sixty-four interconnected 12-meter antennae at a unique, high-altitude site at Chajnantor in the Atacama region of northern Chile. ALMA is a joint project between Europe and North America. In Europe, ESO is leading on behalf of its ten member countries and Spain. In North America, the NSF also acts for the National Research Council of Canada and executes the project through the National Radio Astronomy Observatory (NRAO) operated by Associated Universities, Inc. (AUI). The conclusion of the ESO-NSF Agreement now gives the final green light for the ALMA project. The total cost of approximately 650 million Euro (or US Dollars) is shared equally between the two partners. Dr. Cesarsky is excited: "This agreement signifies the start of a great project of contemporary astronomy and astrophysics. Representing Europe, and in collaboration with many laboratories and institutes on this continent, we together look forward towards wonderful research projects. With ALMA we may learn how the earliest galaxies in the Universe really looked like, to mention but one of the many eagerly awaited opportunities with this marvellous facility". "With this agreement, we usher in a new age of research in astronomy" says Dr. Colwell. "By working together in this truly global partnership, the international astronomy community will be able to ensure the research capabilities needed to meet the long-term demands of our scientific enterprise, and that we will be able to study and understand our universe in ways that have previously been beyond our vision". The recent Presidential decree from Chile for AUI and the agreement signed in late 2002 between ESO and the Government of the Republic of Chile (cf. ESO PR 18/02) recognize the interest that the ALMA Project has for Chile, as it will deepen and strengthen the cooperation in scientific and technological matters between the parties. A joint ALMA Board has been established which oversees the realisation of the ALMA project via the management structure. This Board meets for the first time on February 24-25, 2003, at NSF in Washington and will witness this historic event. ALMA: Imaging the Light from Cosmic Dawn ESO PR Photo 06a/03 ESO PR Photo 06a/03 [Preview - JPEG: 588 x 400 pix - 52k [Normal - JPEG: 1176 x 800 pix - 192k] [Hi-Res - JPEG: 3300 x 2244 pix - 2.0M] ESO PR Photo 06b/03 ESO PR Photo 06b/03 [Preview - JPEG: 502 x 400 pix - 82k [Normal - JPEG: 1003 x 800 pix - 392k] [Hi-Res - JPEG: 2222 x 1773 pix - 3.0M] ESO PR Photo 06c/03 ESO PR Photo 06c/03 [Preview - JPEG: 474 x 400 pix - 84k [Normal - JPEG: 947 x 800 pix - 344k] [Hi-Res - JPEG: 2272 x 1920 pix - 2.0M] ESO PR Photo 06d/03 ESO PR Photo 06d/03 [Preview - JPEG: 414 x 400 pix - 69k [Normal - JPEG: 828 x 800 pix - 336k] [HiRes - JPEG: 2935 x 2835 pix - 7.4k] Captions: PR Photo 06a/03 shows an artist's view of the Atacama Large Millimeter Array (ALMA), with 64 12-m antennae. 
PR Photo 06b/03 is another such view, with the array arranged in a compact configuration at the high-altitude Chajnantor site. The ALMA VertexRSI prototype antennae is shown in PR Photo 06c/03 on the Antenna Test Facility (ATF) site at the NRAO Very Large Array (VLA) site near Socorro (New Mexico, USA). The future ALMA site at Llano de Chajnantor at 5000 metre altitude, some 40 km East of the village of San Pedro de Atacama (Chile) is seen in PR Photo 06d/03 - this view was obtained at 11 hrs in the morning on a crisp and clear autumn day (more views of this site are available at the Chajnantor Photo Gallery). The Atacama Large Millimeter Array (ALMA) will be one of astronomy's most powerful telescopes - providing unprecedented imaging capabilities and sensitivity in the corresponding wavelength range, many orders of magnitude greater than anything of its kind today. ALMA will be an array of 64 antennae that will work together as one telescope to study millimeter and sub-millimeter wavelength radiation from space. This radiation crosses the critical boundary between infrared and microwave radiation and holds the key to understanding such processes as planet and star formation, the formation of early galaxies and galaxy clusters, and the formation of organic and other molecules in space. "ALMA will be one of astronomy's premier tools for studying the universe" says Nobel Laureate Riccardo Giacconi, President of AUI (and former ESO Director General (1993-1999)). "The entire astronomical community is anxious to have the unprecedented power and resolution that ALMA will provide". The President of the ESO Council, Professor Piet van der Kruit, agrees: "ALMA heralds a break-through in sub-millimeter and millimeter astronomy, allowing some of the most penetrating studies the Universe ever made. It is safe to predict that there will be exciting scientific surprises when ALMA enters into operation". What is millimeter and sub-millimeter wavelength astronomy? Astronomers learn about objects in space by studying the energy emitted by those objects. Our Sun and the other stars throughout the Universe emit visible light. But these objects also emit other kinds of light waves, such as X-rays, infrared radiation, and radio waves. Some objects emit very little or no visible light, yet are strong sources at other wavelengths in the electromagnetic spectrum. Much of the energy in the Universe is present in the sub-millimeter and millimeter portion of the spectrum. This energy comes from the cold dust mixed with gas in interstellar space. It also comes from distant galaxies that formed many billions of years ago at the edges of the known universe. With ALMA, astronomers will have a uniquely powerful facility with access to this remarkable portion of the spectrum and hence, new and wonderful opportunities to learn more about those objects. Current observatories simply do not have anywhere near the necessary sensitivity and resolution to unlock the secrets that abundant sub-millimeter and millimeter wavelength radiation can reveal. It will take the unparalleled power of ALMA to fully study the cosmic emission at this wavelength and better understand the nature of the universe. Scientists from all over the world will use ALMA. They will compete for observing time by submitting proposals, which will be judged by a group of their peers on the basis of scientific merit. 
ALMA's unique capabilities ALMA's ability to detect remarkably faint sub-millimeter and millimeter wavelength emission and to create high-resolution images of the source of that emission gives it capabilities not found in any other astronomical instruments. ALMA will therefore be able to study phenomena previously out of reach to astronomers and astrophysicists, such as: * Very young galaxies forming stars at the earliest times in cosmic history; * New planets forming around young stars in our galaxy, the Milky Way; * The birth of new stars in spinning clouds of gas and dust; and * Interstellar clouds of gas and dust that are the nurseries of complex molecules and even organic chemicals that form the building blocks of life. How will ALMA work? All of ALMA's 64 antennae will work in concert, taking quick "snapshots" or long-term exposures of astronomical objects. Cosmic radiation from these objects will be reflected from the surface of each antenna and focussed onto highly sensitive receivers cooled to just a few degrees above absolute zero in order to suppress undesired "noise" from the surroundings. There the signals will be amplified many times, digitized, and then sent along underground fiber-optic cables to a large signal processor in the central control building. This specialized computer, called a correlator - running at 16,000 million-million operations per second - will combine all of the data from the 64 antennae to make images of remarkable quality. The extraordinary ALMA site Since atmospheric water vapor absorbs millimeter and (especially) sub-millimeter waves, ALMA must be constructed at a very high altitude in a very dry region of the earth. Extensive tests showed that the sky above the Atacama Desert of Chile has the excellent clarity and stability essential for ALMA. That is why ALMA will be built there, on Llano de Chajnantor at an altitude of 5,000 metres in the Chilean Andes. A series of views of this site, also in high-resolution suitable for reproduction, is available at the Chajnantor Photo Gallery. Timeline for ALMA June 1998: Phase 1 (Research and Development) June 1999: European/American Memorandum of Understanding February 2003: Signature of the bilateral Agreement 2004: Tests of the Prototype System 2007: Initial scientific operation of a partially completed array 2011: End of construction of the array
Entangled State Quantum Cryptography: Eavesdropping on the Ekert Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naik, D. S.; Peterson, C. G.; White, A. G.
2000-05-15
Using polarization-entangled photons from spontaneous parametric down-conversion, we have implemented Ekert's quantum cryptography protocol. The near-perfect correlations of the photons allow the sharing of a secret key between two parties. The presence of an eavesdropper is continually checked by measuring Bell's inequalities. We investigated several possible eavesdropper strategies, including pseudo-quantum-nondemolition measurements. In all cases, the eavesdropper's presence was readily apparent. We discuss a procedure to increase her detectability. (c) 2000 The American Physical Society.
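As a rough illustration of the logic behind the protocol (not of the down-conversion experiment itself), the Python sketch below samples singlet-state polarization correlations, sifts key bits from measurements made in matching bases, and estimates the CHSH combination of Bell correlations from the remaining data; an undisturbed source should give |S| close to 2*sqrt(2), while an intercepted one pushes |S| toward 2 or below. The analyzer angles and the sampling shortcut are standard textbook assumptions, not the settings of the experiment above.

import numpy as np

rng = np.random.default_rng(1)

# Analyzer angles in radians (assumed E91-style settings): the two shared
# angles give key bits, the other combinations feed the CHSH test.
ALICE = [0.0, np.pi / 8, np.pi / 4]
BOB = [np.pi / 8, np.pi / 4, 3 * np.pi / 8]
KEY_PAIRS = {(1, 0), (2, 1)}   # index pairs where Alice and Bob share an angle

def measure_pair(a, b):
    """Sample one polarization-singlet pair measured at analyzer angles a, b:
    Alice's outcome is +/-1 at random; Bob's equals hers with probability
    sin^2(a - b), reproducing the quantum correlation E(a, b) = -cos 2(a - b)."""
    A = rng.choice([-1, 1])
    B = A if rng.random() < np.sin(a - b) ** 2 else -A
    return A, B

n_pairs = 200_000
key_a, key_b = [], []
counts = {(i, j): [0, 0] for i in range(3) for j in range(3)}  # [same, total]

for _ in range(n_pairs):
    i, j = int(rng.integers(3)), int(rng.integers(3))
    A, B = measure_pair(ALICE[i], BOB[j])
    if (i, j) in KEY_PAIRS:        # matching bases: perfectly anti-correlated
        key_a.append(A)
        key_b.append(-B)           # Bob flips his bit to agree with Alice
    else:                          # mismatched bases: Bell-test statistics
        counts[(i, j)][0] += int(A == B)
        counts[(i, j)][1] += 1

def E(i, j):
    same, total = counts[(i, j)]
    return (2 * same - total) / total   # estimate of <A*B>

# CHSH combination: an undisturbed singlet gives |S| ~ 2*sqrt(2) ~ 2.83,
# while an intercept-resend eavesdropper drags |S| down toward 2 or below.
S = E(0, 0) - E(0, 2) + E(2, 0) + E(2, 2)
agreement = np.mean(np.array(key_a) == np.array(key_b))
print(f"sifted key bits: {len(key_a)}  agreement: {agreement:.3f}  |S|: {abs(S):.3f}")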
NASA Technical Reports Server (NTRS)
Lamarque, J.-F.; Shindell, D. T.; Naik, V.; Plummer, D.; Josse, B.; Righi, M.; Rumbold, S. T.; Schulz, M.; Skeie, R. B.; Strode, S.;
2013-01-01
The Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP) consists of a series of time slice experiments targeting the long-term changes in atmospheric composition between 1850 and 2100, with the goal of documenting composition changes and the associated radiative forcing. In this overview paper, we introduce the ACCMIP activity, the various simulations performed (with a requested set of 14) and the associated model output. The 16 ACCMIP models have a wide range of horizontal and vertical resolutions, vertical extent, chemistry schemes and interaction with radiation and clouds. While anthropogenic and biomass burning emissions were specified for all time slices in the ACCMIP protocol, it is found that the natural emissions are responsible for a significant range across models, mostly in the case of ozone precursors. The analysis of selected present-day climate diagnostics (precipitation, temperature, specific humidity and zonal wind) reveals biases consistent with state-of-the-art climate models. The model-to-model comparison of changes in temperature, specific humidity and zonal wind between 1850 and 2000 and between 2000 and 2100 indicates mostly consistent results. However, models that are clear outliers are different enough from the other models to significantly affect their simulation of atmospheric chemistry.
Entanglement via Faraday effect - an old tool at a new job for Quantum Networks
NASA Astrophysics Data System (ADS)
Polzik, Eugene
2002-05-01
A new approach to the problem of the quantum interface between light and atoms has been developed [1,2]. The method utilizes free space dispersive interaction of pulses of light with spin polarized atomic ensembles. Entanglement between the polarization state of light and the collective spin state of atoms is established by measurement, more precisely by detection of light in certain polarization basis. In the first demonstration of this approach [3] we have generated a long-lived entangled state of two separate macroscopic atomic samples by a polarization measurement on light transmitted through the samples. We then have shown that this approach also works for mapping of a quantum state of light onto long-lived atomic spin state [4] paving the road towards realization of the quantum memory for light. Progress with other communication protocols such as atomic state teleportation and multiparty networks will be presented. 1. A. Kuzmich and E. S. Polzik, Phys. Rev. Lett. (2000) 85, 5639. 2. Lu-Ming Duan, J.I. Cirac, P. Zoller and E. S. Polzik, Phys. Rev. Lett. (2000) 85, (25), 5643. 3. B. Julsgaard, A. Kozhekin, and E. S. Polzik, Nature, 413, 400 (2001). 4. J. L. Sorensen, B. Julsgaard, C. Schori and E. S. Polzik, submitted for publication.
NASA Astrophysics Data System (ADS)
Kerner, H. R.; Bell, J. F., III; Ben Amor, H.
2017-12-01
The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
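A minimal sketch of the kind of automatic label this abstract describes is given below: it computes the joint entropy of co-located pixel values in an uncompressed and a compressed image and thresholds it into a binary training label. The histogram binning, the threshold value, and the direction of the comparison are assumptions for illustration only, not the authors' exact procedure.

import numpy as np

def joint_entropy(img_a, img_b, bins=256):
    """Joint entropy (in bits) of co-located pixel values in two equal-sized
    8-bit grayscale images; compression artifacts change how probability mass
    is spread over (a, b) value pairs, so the score tracks distortion."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def auto_label(original, compressed, threshold):
    """Binary training label derived from the score. The threshold value and
    the direction of the comparison are assumptions in this sketch; the
    paper's exact mapping from joint entropy to labels may differ."""
    return int(joint_entropy(original, compressed) <= threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(64, 64)).astype(float)
    # Crude stand-in for JPEG blockiness: replace each 8x8 block by its mean.
    block_means = original.reshape(8, 8, 8, 8).mean(axis=(1, 3), keepdims=True)
    compressed = np.broadcast_to(block_means, (8, 8, 8, 8)).reshape(64, 64)
    print("H(original, original)  :", round(joint_entropy(original, original), 3))
    print("H(original, compressed):", round(joint_entropy(original, compressed), 3))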
Novel and Recent Synthesis and Applications of β-Lactams
NASA Astrophysics Data System (ADS)
Troisi, Luigino; Granito, Catia; Pindinelli, Emanuela
In this chapter, a comprehensive overview of the most significant and interesting contributions published from 2000 until now, concerning the preparation of novel β-lactam structures is presented. Among the different synthetic strategies available, either novel or already known but efficient and versatile methodologies are covered. The simple modifications of one or more substituents linked to the nitrogen N-1, the C-3, and the C-4 carbon atoms of the β-lactam nucleus were considered as an alternative synthetic protocol of more complex and polyfunctionalized molecules. Indeed, it is well known and extensively reviewed that the biological activity of this strained four-membered heterocycle is strictly dependent on the nature of the substituent groups that affect the reactivity towards the molecular active sites, increasing or lowering the possibility of interaction with the substrates. Finally, a synthetic survey of the most significant biological and pharmacological applications of the 2-azetidinones is reported.
Salmen, Marcus; Ewy, Gordon A; Sasson, Comilla
2012-01-01
Objective To determine whether the use of cardiocerebral resuscitation (CCR) or AHA/ERC 2005 Resuscitation Guidelines improved patient outcomes from out-of-hospital cardiac arrest (OHCA) compared to older guidelines. Design Systematic review and meta-analysis. Data sources MEDLINE, EMBASE, Web of Science and the Cochrane Library databases. We also hand-searched study references and consulted experts. Study selection Design: randomised controlled trials and observational studies. Population OHCA patients, age >17 years. Comparators ‘Control’ protocol versus ‘Study’ protocol. ‘Control’ protocol defined as AHA/ERC 2000 Guidelines for cardiopulmonary resuscitation (CPR). ‘Study’ protocol defined as AHA/ERC 2005 Guidelines for CPR, or a CCR protocol. Outcome Survival to hospital discharge. Quality High-quality or medium-quality studies, as measured by the Newcastle Ottawa Scale using predefined categories. Results Twelve observational studies met inclusion criteria. All the three studies using CCR demonstrated significantly improved survival compared to use of AHA 2000 Guidelines, as did five of the nine studies using AHA/ERC 2005 Guidelines. Pooled data demonstrate that use of a CCR protocol has an unadjusted OR of 2.26 (95% CI 1.64 to 3.12) for survival to hospital discharge among all cardiac arrest patients. Among witnessed ventricular fibrillation/ventricular tachycardia (VF/VT) patients, CCR increased survival by an OR of 2.98 (95% CI 1.92 to 4.62). Studies using AHA/ERC 2005 Guidelines showed an overall trend towards increased survival, but significant heterogeneity existed among these studies. Conclusions We demonstrate an association with improved survival from OHCA when CCR protocols or AHA/ERC 2005 Guidelines are compared to use of older guidelines. In the subgroup of patients with witnessed VF/VT, there was a threefold increase in OHCA survival when CCR was used. CCR appears to be a promising resuscitation protocol for Emergency Medical Services providers in increasing survival from OHCA. Future research will need to be conducted to directly compare AHA/ERC 2010 Guidelines with the CCR approach. PMID:23036985
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1996-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group) compression, 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. (5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune-optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
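Result (2) above reports that the optimized matrices are well modeled by an "inverse Gaussian" with amplitude and width parameters. The sketch below shows one plausible reading of such a parameterization, with quantization step sizes growing with radial DCT frequency, purely as an illustration; the functional form, the frequency metric, and the parameter values are assumptions, not DCTune's actual model.

import numpy as np

def dctune_like_matrix(amplitude, width, n=8):
    """Illustrative n x n quantization matrix whose step sizes grow with radial
    DCT frequency -- one plausible reading of an 'inverted Gaussian' profile
    controlled by an amplitude and a width (both assumptions, not DCTune's)."""
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    r = np.sqrt(u ** 2 + v ** 2)                 # radial DCT frequency index
    return amplitude * np.exp((r / width) ** 2)  # small step at DC, large at HF

# Coarser compression generally means larger steps overall; the (amplitude,
# width) pairs below are invented purely to show how the profile responds.
for bitrate, (amp, wid) in {0.25: (12.0, 6.0), 1.0: (4.0, 9.0)}.items():
    Q = dctune_like_matrix(amp, wid)
    print(f"{bitrate} bit/pixel: Q[0,0] = {Q[0, 0]:.1f}, Q[7,7] = {Q[7, 7]:.1f}")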
Universal Barenco quantum gates via a tunable noncollinear interaction
NASA Astrophysics Data System (ADS)
Shi, Xiao-Feng
2018-03-01
The Barenco gate (B) is a type of two-qubit quantum gate based on which alone universal quantum computation can be achieved. Each B is characterized by three angles (α, θ, and ϕ), though it works in a two-qubit Hilbert space. Here we design B via a noncollinear interaction V|r1r2⟩⟨r1r3| + H.c., where |ri⟩ is a state that can be excited from a qubit state and V is adjustable. We present two protocols for B. The first (second) protocol consists of two (six) pulses and one (two) wait period(s), where the former causes rotations between qubit states and excited states, and the latter induces gate transformation via the noncollinear interaction. In the first protocol, the variable ϕ can be tuned by varying the phases of external controls, and the other two variables, α and θ, tunable via adjustment of the wait duration, have a linear dependence on each other. Meanwhile, the first protocol can give rise to CNOT and controlled-Y gates. In the second protocol, α, θ, and ϕ can be varied by changing the interaction amplitudes and wait durations, and the latter two are dependent on α nonlinearly. Both protocols can also lead to another universal gate when {α, ϕ} = {1/4, 1/2}π with appropriate parameters. Implementation of these universal gates is analyzed based on the van der Waals interaction of neutral Rydberg atoms.
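For readers unfamiliar with the gate, the sketch below builds the 4x4 Barenco unitary in its standard parameterization A(α, θ, ϕ) from Barenco (1995) -- identity when the control qubit is |0⟩, a parameterized 2x2 rotation on the target when it is |1⟩ -- and checks the CNOT and controlled-Y special cases mentioned in the abstract. The matrix convention is the textbook one and may differ from the paper above by overall phases.

import numpy as np

def barenco_gate(alpha, theta, phi):
    """4x4 unitary of the two-qubit Barenco gate A(alpha, theta, phi): identity
    on |00> and |01>, and the 2x2 block below on the target when the control
    qubit is |1> (standard parameterization from Barenco, 1995)."""
    c, s = np.cos(theta), np.sin(theta)
    block = np.array([
        [np.exp(1j * alpha) * c, -1j * np.exp(1j * (alpha - phi)) * s],
        [-1j * np.exp(1j * (alpha + phi)) * s, np.exp(1j * alpha) * c],
    ])
    U = np.eye(4, dtype=complex)
    U[2:, 2:] = block
    return U

if __name__ == "__main__":
    # Unitarity check for arbitrary angles.
    U = barenco_gate(0.3, 0.7, 1.1)
    assert np.allclose(U.conj().T @ U, np.eye(4))
    # Special cases mentioned in the abstract: CNOT and controlled-Y.
    cnot = np.eye(4, dtype=complex); cnot[2:, 2:] = [[0, 1], [1, 0]]
    cy = np.eye(4, dtype=complex); cy[2:, 2:] = [[0, -1j], [1j, 0]]
    assert np.allclose(barenco_gate(np.pi / 2, np.pi / 2, 0.0), cnot)
    assert np.allclose(barenco_gate(np.pi / 2, np.pi / 2, np.pi / 2), cy)
    print("unitary; B(pi/2, pi/2, 0) = CNOT, B(pi/2, pi/2, pi/2) = controlled-Y")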
A Portrait of One Hundred Thousand and One Galaxies
NASA Astrophysics Data System (ADS)
2002-08-01
Rich and Inspiring Experience with NGC 300 Images from the ESO Science Data Archive Summary A series of wide-field images centred on the nearby spiral galaxy NGC 300 , obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory , have been combined into a magnificent colour photo. These images have been used by different groups of astronomers for various kinds of scientific investigations, ranging from individual stars and nebulae in NGC 300, to distant galaxies and other objects in the background. This material provides an interesting demonstration of the multiple use of astronomical data, now facilitated by the establishment of extensively documented data archives, like the ESO Science Data Archive that now is growing rapidly and already contains over 15 Terabyte. Based on the concept of Astronomical Virtual Observatories (AVOs) , the use of archival data sets is on the rise and provides a large number of scientists with excellent opportunities for front-line investigations without having to wait for precious observing time. In addition to presenting a magnificent astronomical photo, the present account also illustrates this important new tool of the modern science of astronomy and astrophysics. PR Photo 18a/02 : WFI colour image of spiral galaxy NGC 300 (full field) . PR Photo 18b/02 : Cepheid stars in NGC 300 PR Photo 18c/02 : H-alpha image of NGC 300 PR Photo 18d/02 : Distant cluster of galaxies CL0053-37 in the NGC 300 field PR Photo 18e/02 : Dark matter distribution in CL0053-37 PR Photo 18f/02 : Distant, reddened cluster of galaxies in the NGC 300 field PR Photo 18g/02 : Distant galaxies, seen through the outskirts of NGC 300 PR Photo 18h/02 : "The View Beyond" ESO PR Photo 18a/02 ESO PR Photo 18a/02 [Preview - JPEG: 400 x 412 pix - 112k] [Normal - JPEG: 1200 x 1237 pix - 1.7M] [Hi-Res - JPEG: 4000 x 4123 pix - 20.3M] Caption : PR Photo 18a/02 is a reproduction of a colour-composite image of the nearby spiral galaxy NGC 300 and the surrounding sky field, obtained in 1999 and 2000 with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. See the text for details about the many different uses of this photo. Smaller areas in this large field are shown in Photos 18b-h/02 , cf. below. The High-Res version of this image has been compressed by a factor 4 (2 x 2 pixel rebinning) to reduce it to a reasonably transportable size. Technical information about this and the other photos is available at the end of this communication. Located some 7 million light-years away, the spiral galaxy NGC 300 [1] is a beautiful representative of its class, a Milky-Way-like member of the prominent Sculptor group of galaxies in the southern constellation of that name. NGC 300 is a big object in the sky - being so close, it extends over an angle of almost 25 arcmin, only slightly less than the size of the full moon. It is also relative bright, even a small pair of binoculars will unveil this magnificent spiral galaxy as a hazy glowing patch on a dark sky background. The comparatively small distance of NGC 300 and its face-on orientation provide astronomers with a wonderful opportunity to study in great detail its structure as well as its various stellar populations and interstellar medium. It was exactly for this purpose that some images of NGC 300 were obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the La Silla Observatory. 
This advanced 67-million pixel digital camera has already produced many impressive pictures, some of which are displayed in the WFI Photo Gallery [2]. With its large field of view, 34 x 34 arcmin 2 , the WFI is optimally suited to show the full extent of the spiral galaxy NGC 300 and its immediate surroundings in the sky, cf. PR Photo 18a/02 . NGC 300 and "Virtual Astronomy" In addition to being a beautiful sight in its own right, the present WFI-image of NGC 300 is also a most instructive showcase of how astronomers with very different research projects nowadays can make effective use of the same observations for their programmes . The idea to exploit one and the same data set is not new, but thanks to rapid technological developments it has recently developed into a very powerful tool for the astronomers in their continued quest to understand the Universe. This kind of work has now become very efficient with the advent of a fully searchable data archive from which observational data can then - after the expiry of a nominal one-year proprietary period for the observers - be made available to other astronomers. The ESO Science Data Archive was established some years ago and now encompasses more than 15 Terabyte [3]. Normally, the identification of specific data sets in such a large archive would be a very difficult and time-consuming task. However, effective projects and software "tools" like ASTROVIRTEL and Querator now allow the users quickly to "filter" large amounts of data and extract those of their specific interest. Indeed, "Archival Astronomy" has already led to many important discoveries, cf. the ASTROVIRTEL list of publications. There is no doubt that "Virtual Astronomical Observatories" will play an increasingly important role in the future, cf. ESO PR 26/01. The present wide-field images of NGC 300 provide an impressive demonstration of the enormous potential of this innovative approach. Some of the ways they were used are explained below. Cepheids in NGC 300 and the cosmic distance scale ESO PR Photo 18b/02 ESO PR Photo 18b/02 [Preview - JPEG: 468 x 400 pix - 112k] [Full-Res - JPEG: 1258 x 1083 pix - 1.6M] Caption : PR Photo 18b/02 shows some of the Cepheid type stars in the spiral galaxy NGC 300 (at the centre of the markers), as they were identified by Wolfgang Gieren and collaborators during the research programme for which the WFI images of NGC 300 were first obtained. In this area of NGC 300, there is also a huge cloud of ionized hydrogen (a "HII shell"). It measures about 2000 light-years in diameter, thus dwarfing even the enormous Tarantula Nebula in the LMC, also photographed with the WFI (cf. ESO PR Photos 14a-g/02 ). The largest versions ("normal" or "full-res") of this and the following photos are shown with their original pixel size, demonstrating the incredible amount of detail visible on one WFI image. Technical information about this photo is available below. In 1999, Wolfgang Gieren (Universidad de Concepcion, Chile) and his colleagues started a search for Cepheid-type variable stars in NGC 300. These stars constitute a key element in the measurement of distances in the Universe. It has been known since many years that the pulsation period of a Cepheid-type star depends on its intrinsic brightness (its "luminosity"). Thus, once its period has been measured, the astronomers can calculate its luminosity. 
By comparing this to the star's apparent brightness in the sky, and applying the well-known diminution of light with the second power of the distance, they can obtain the distance to the star. This fundamental method has allowed some of the most reliable measurements of distances in the Universe and has been essential for all kinds of astrophysics, from the closest stars to the remotest galaxies. Previous to Gieren's new project, only about a dozen Cepheids were known in NGC 300. However, by regularly obtaining wide-field WFI exposures of NGC 300 from July 1999 through January 2000 and carefully monitoring the apparent brightness of its brighter stars during that period, the astronomers detected more than 100 additional Cepheids . The brightness variations (in astronomical terminology: "light curves") could be determined with excellent precision from the WFI data. They showed that the pulsation periods of these Cepheids range from about 5 to 115 days. Some of these Cepheids are identified on PR Photo 18b/02 , in the middle of a very crowded field in NGC 300. When fully studied, these unique observational data will yield a new and very accurate distance to NGC 300, making this galaxy a future cornerstone in the calibration of the cosmic distance scale . Moreover, they will also allow to understand in more detail how the brightness of a Cepheid-type star depends on its chemical composition, currently a major uncertainty in the application of the Cepheid method to the calibration of the extragalactic distance scale. Indeed, the effect of the abundance of different elements on the luminosity of a Cepheid can be especially well measured in NGC 300 due to the existence of large variations of these abundances in the stars located in the disk of this galaxy. Gieren and his group, in collaboration with astronomers Fabio Bresolin and Rolf Kudritzki (Institute of Astronomy, Hawaii, USA) are currently measuring the variations of these chemical abundances in stars in the disk of NGC 300, by means of spectra of about 60 blue supergiant stars, obtained with the FORS multi-mode instruments at the ESO Very Large Telescope (VLT) on Paranal. These stars, that are among the optically brightest in NGC 300, were first identified in the WFI images of this galaxy obtained in different colours - the same that were used to produce PR Photo 18a/02 . The nature of those stars was later spectroscopically confirmed at the VLT. As an important byproduct of these measurements, the luminosities of the blue supergiant stars in NGC 300 will themselves be calibrated (as a new cosmic "standard candle"), taking advantage of their stellar wind properties that can be measured from the VLT spectra. The WFI Cepheid observations in NGC 300, as well as the VLT blue supergiant star observations, form part of a large research project recently initiated by Gieren and his group that is concerned with the improvement of various stellar distance indicators in nearby galaxies (the "ARAUCARIA" project ). Clues on star formation history in NGC 300 ESO PR Photo 18c/02 ESO PR Photo 18c/02 [Preview - JPEG: 440 x 400 pix - 63k] [Normal - JPEG: 1200 x 1091 pix - 664k] [Full-Res - JPEG: 5515 x 5014 pix - 14.3M] Caption : PR Photo 18c/02 displays NGC 300, as seen through a narrow optical filter (H-alpha) in the red light of hydrogen atoms. A population of intrinsically bright and young stars turned "on" just a few million years ago. 
Their radiation and strong stellar winds have shaped many of the clouds of ionized hydrogen gas ("HII shells") seen in this photo. The "rings" near some of the bright stars are caused by internal reflections in the telescope. Technical information about this photo is available below.. But there is much more to discover on these WFI images of NGC 300! The WFI images obtained in several broad and narrow band filters from the ultraviolet to the near-infrared spectral region (U, B, V, R, I and H-alpha) allow a detailed study of groups of heavy, hot stars (known as "OB associations") and a large number of huge clouds of ionized hydrogen ("HII shells") in this galaxy. Corresponding studies have been carried out by Gieren's group, resulting in the discovery of an amazing number of OB associations, including a number of giant associations. These investigations, taken together with the observed distribution of the pulsation periods of the Cepheids, allow to better understand the history of star formation in NGC 300. For example, three distinct peaks in the number distribution of the pulsation periods of the Cepheids seem to indicate that there have been at least three different bursts of star formation within the past 100 million years. The large number of OB associations and HII shells ( PR Photo 18c/02 ) furthermore indicate the presence of a numerous, very young stellar population in NGC 300, aged only a few million years. Dark matter and the observed shapes of distant galaxies In early 2002, Thomas Erben and Mischa Schirmer from the "Institut für Astrophysik and extraterrestrische Forschung" ( IAEF , Universität Bonn, Germany), in the course of their ASTROVIRTEL programme, identified and retrieved all available broad-band and H-alpha images of NGC 300 available in the ESO Science Data Archive. Most of these have been observed for the project by Gieren and his colleagues, described above. However, the scientific interest of the German astronomers was very different from that of their colleagues and they were not at all concerned about the main object in the field, NGC 300. In a very different approach, they instead wanted to study those images to measure the amount of dark matter in the Universe, by means of the weak gravitational lensing effect produced by distant galaxy clusters. Various observations, ranging from the measurement of internal motions ("rotation curves") in spiral galaxies to the presence of hot X-ray gas in clusters of galaxies and the motion of galaxies in those clusters, indicate that there is about ten times more matter in the Universe than what is observed in the form of stars, gas and galaxies ("luminous matter"). As this additional matter does not emit light at any wavelengths, it is commonly referred to as "dark" matter - its true nature is yet entirely unclear. Insight into the distribution of dark matter in the Universe can be gained by looking at the shapes of images of very remote galaxies, billions of light-years away, cf. ESO PR 24/00. Light from such distant objects travels vast distances through space before arriving here on Earth, and whenever it passes heavy clusters of galaxies, it is bent a little due to the associated gravitational field. Thus, in long-exposure, high-quality images, this "weak lensing" effect can be perceived as a coherent pattern of distortion of the images of background galaxies. 
Gravitational lensing in the NGC 300 field ESO PR Photo 18d/02 ESO PR Photo 18d/02 [Preview - JPEG: 400 x 495 pix - 82k] [Full-Res - JPEG: 1304 x 1615 pix - 3.2M] Caption : PR Photo 18d/02 shows the distant cluster of galaxies CL0053-37 , as imaged on the WFI photo of the NGC 300 sky field. The elongated distribution of the cluster galaxies, as well as the presence of two large, early-type elliptical galaxies indicate that this cluster is still in the process of formation. Some of the galaxies appear to be merging. From the measured redshift ( z = 0.1625), a distance of about 2.1 billion light-years is deduced. Technical information about this photo is available below. ESO PR Photo 18e/02 ESO PR Photo 18e/02 [Preview - JPEG: 400 x 567 pix - 89k] [Normal - JPEG: 723 x 1024 pix - 424k] Caption : PR Photo 18e/02 is a "map" of the dark matter distribution (black contours) in the cluster of galaxies CL0053-37 (shown in PR Photo 18d/02 ), as obtained from the weak lensing effects detected in the WFI images, and the X-ray flux (green contours) taken from the All-Sky Survey carried out by the ROSAT satellite observatory. The distribution of galaxies resembles the elongated, dark-matter profile. Because of ROSAT's limited image sharpness (low "angular resolution"), it cannot be entirely ruled out that the observed X-ray emission is due to an active nucleus of a galaxy in CL0053-37, or even a foreground stellar binary system in NGC 300. The WFI NGC 300 images appeared promising for gravitational lensing research because of the exceptionally long total exposure time. Although the large foreground galaxy NGC 300 would block the light of tens of thousands of galaxies in the background, a huge number of others would still be visible in the outskirts of this sky field, making a search for clusters of galaxies and associated lensing effects quite feasible. To ensure the best possible image sharpness in the combined image, and thus to obtain the most reliable measurements of the shapes of the background objects, only red (R-band) images obtained under the best seeing conditions were combined. In order to provide additional information about the colours of these faint objects, a similar approach was adopted for images in the other bands as well. The German astronomers indeed measured a significant lensing effect for one of the galaxy clusters in the field ( CL0053-37 , see PR Photo 18d/02 ); the images of background galaxies around this cluster were noticeably distorted in the direction tangential to the cluster center. Based on the measured degree of distortion, a map of the distribution of (dark) matter in this direction was constructed ( PR Photo 18e/02 ). The separation of unlensed foreground (bluer) and lensed background galaxies (redder) greatly profited from the photometric measurements done by Gieren's group in the course of their work on the Cepheids in NGC 300. Assuming that the lensed background galaxies lie at a mean redshift of 1.0, i.e. a distance of 8 billion light-years, a mass of about 2 x 10 14 solar masses was obtained for the CL0053-37 cluster. This lensing analysis in the NGC 300 field is part of the Garching-Bonn Deep Survey (GaBoDS) , a weak gravitational lensing survey led by Peter Schneider (IAEF). GaBoDS is based on exposures made with the WFI and until now a sky area of more than 12 square degrees has been imaged during very good seeing conditions. 
Once complete, this investigation will allow more insight into the distribution and cosmological evolution of galaxy cluster masses, which in turn provide very useful information about the structure and history of the Universe. One hundred thousand galaxies ESO PR Photo 18f/02 ESO PR Photo 18f/02 [Preview - JPEG: 400 x 526 pix - 93k] [Full-Res - JPEG: 756 x 994 pix - 1.0M] Caption : PR Photo 18f/02 shows a group of galaxies , seen on the NGC 300 images. They are all quite red and their similar colours indicate that they must be about equally distant. They probably constitute a distant cluster, now in the process of formation. Technical information about this photo is available below. ESO PR Photo 18g/02 ESO PR Photo 18g/02 [Preview - JPEG: 469 x 400 pix - xxk] [Full-Res - JPEG: 1055 x 899 pix - 968k] Caption : PR Photo 18g/02 shows an area in the outer regions of NGC 300. Disks of spiral galaxies are usually quite "thin" (a few hundred light-years), as compared to their radial extent (tens of thousands of light-years across). In areas where only small amounts of dust are present, it is possible to see much more distant galaxies right through the disk of NGC 300 , as demonstrated by this image. Technical information about this photo is available below. ESO PR Photo 18h/02 ESO PR Photo 18h/02 [Preview - JPEG: 451 x 400 pix - 89k] [Normal - JPEG: 902 x 800 pix - 856k] [Full-Res - JPEG: 2439 x 2163 pix - 6.0M] Caption : PR Photo 18h/02 is an astronomers' joy ride to infinity. Such a rarely seen view of our universe imparts a feeling of the vast distances in space. In the upper half of the image, the outer region of NGC 300 is resolved into innumerable stars, while in the lower half, myriads of galaxies - a thousand times more distant - catch the eye. In reality, many of them are very similar to NGC 300; they are just much more remote. In addition to allowing a detailed investigation of dark matter and lensing effects in this field, the present, very "deep" colour image of NGC 300 invites a closer inspection of the background galaxy population itself. No less than about 100,000 galaxies of all types are visible in this amazing image. Three known quasars ([ICS96] 005342.1-375947, [ICS96] 005236.1-374352, [ICS96] 005336.9-380354) with redshifts 2.25, 2.35 and 2.75, respectively, happen to lie inside this sky field, together with many interacting galaxies, some of which feature tidal tails. There are also several groups of highly reddened galaxies - probably distant clusters in formation, cf. PR Photo 18f/02 . Others are seen right through the outer regions of NGC 300, cf. PR Photo 18g/02 . More detailed investigations of the numerous galaxies in this field are now underway. From the nearby spiral galaxy NGC 300 to objects in the young Universe, it is all there, truly an astronomical treasure trove, cf. PR Photo 18h/02 ! Notes [1]: "NGC" means "New General Catalogue" (of nebulae and clusters), which was published in 1888 by J.L.E. Dreyer in the "Memoirs of the Royal Astronomical Society". [2]: Other colour composite images from the Wide-Field Imager at the MPG/ESO 2.2-m telescope at the La Silla Observatory are available at the ESO Outreach website (http://www.eso.org/), for example the Tarantula Nebula in the LMC, cf. ESO PR Photos 14a-g/02. [3]: 1 Terabyte = 10^12 bytes = 1000 Gigabytes = 1 million million bytes. 
Technical information about the photos PR Photo 18a/02 and all cutouts were made from 110 WFI images obtained in the B-band (total exposure time 11.0 hours, rendered as blue), 105 images in the V-band (10.4 hours, green), 42 images in the R-band (4.2 hours, red) and 21 images through a H-alpha filter (5.1 hours, red). In total, 278 images of NGC 300 have been assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). 150 GB of hard disk space were needed to store all uncompressed raw data, and about 1 TB of temporary files was produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation. The final colour image was assembled in Adobe Photoshop. To better show all details, the overall brightness of NGC 300 was reduced as compared to the outskirts of the field. The (red) "rings" near some of the bright stars originate from the H-alpha frames - they are caused by internal reflections in the telescope. The images were prepared by Mischa Schirmer at the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) by means of a software pipeline specialised for reduction of multiple CCD wide-field imaging camera data. The raw data were extracted from the public sector of the ESO Science Data Archive. The extensive observations were performed at the ESO La Silla Observatory by Wolfgang Gieren, Pascal Fouque, Frederic Pont, Hermann Boehnhardt and La Silla staff, during 34 nights between July 1999 and January 2000. Some additional observations taken during the second half of 2000 were retrieved by Mischa Schirmer and Thomas Erben from the ESO archive. CD-ROM with full-scale NGC 300 image soon available PR Photo 18a/02 has been compressed by a factor 4 (2 x 2 rebinning). For PR Photos 18b-h/02 , the largest-size versions of the images are shown at the original scale (1 pixel = 0.238 arcsec). A full-resolution TIFF-version (approx. 8000 x 8000 pix; 200 Mb) of PR Photo 18a/02 will shortly be made available by ESO on a special CD-ROM, together with some other WFI images of the same size. An announcement will follow in due time.
Successful "First Light" for VLT High-Resolution Spectrograph
NASA Astrophysics Data System (ADS)
1999-10-01
Great Research Prospects with UVES at KUEYEN A major new astronomical instrument for the ESO Very Large Telescope at Paranal (Chile), the UVES high-resolution spectrograph, has just made its first observations of astronomical objects. The astronomers are delighted with the quality of the spectra obtained at this moment of "First Light". Although much fine-tuning still has to be done, this early success bodes well for new and exciting science projects with this large European research facility. Astronomical instruments at VLT KUEYEN The second VLT 8.2-m Unit Telescope, KUEYEN ("The Moon" in the Mapuche language), is in the process of being tuned to perfection before it is "handed over" to the astronomers on April 1, 2000. The testing of the new giant telescope has been successfully completed. The latest pointing tests were very positive and, from real performance measurements covering the entire operating range of the telescope, the overall accuracy on the sky was found to be 0.85 arcsec (the RMS value). This is an excellent result for any telescope and implies that KUEYEN (as is already the case for ANTU) will be able to acquire its future target objects securely and efficiently, thus saving precious observing time. This work has paved the way for the installation of large astronomical instruments at its three focal positions, all prototype facilities that are capable of catching the light from even very faint and distant celestial objects. The three instruments at KUEYEN are referred to by their acronyms UVES , FORS2 and FLAMES. They are all dedicated to the investigation of the spectroscopic properties of faint stars and galaxies in the Universe. The UVES instrument The first to be installed is the Ultraviolet Visual Echelle Spectrograph (UVES), which was built by ESO in collaboration with the Trieste Observatory (Italy), responsible for the control software. Complete tests of its optical and mechanical components, as well as of its CCD detectors and of the complex control system, cf. ESO PR Photos 44/98 , were made in the laboratories of the ESO Headquarters in Garching (Germany) before it was fully dismounted and shipped (some parts by air, others by ship) to the ESO Paranal Observatory, 130 km south of Antofagasta (Chile). Here, the different pieces of UVES (with a total weight of 8 tons) were carefully reassembled on the Nasmyth platform of KUEYEN and made ready for real observations (see ESO PR Photos 36p-t/99 ). UVES is a complex two-channel spectrograph that has been built around two giant optical (echelle diffraction) gratings, each ruled on an 84 cm x 21 cm x 12 cm block of the ceramic material Zerodur (the same that is used for the VLT 8.2-m main mirrors) and weighing more than 60 kg. These echelle gratings finely disperse the light from celestial objects collected by the telescope into its constituent wavelengths (colours). UVES' resolving power (an optical term that indicates the ratio between a given wavelength and the smallest wavelength difference between two spectral lines that are clearly separated by the spectrograph) may reach 110,000, a very high value for an astronomical instrument of such a large size. This means, for instance, that even comparatively small changes in radial velocity (a few km/sec only) can be accurately measured and also that it is possible to detect the faint spectral signatures of very rare elements in celestial objects. One UVES channel is optimized for the ultraviolet and blue, the other for visual and red light. 
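For illustration only (these numbers are not quoted in the release; 550 nm is an arbitrary reference wavelength), the stated resolving power translates into wavelength and velocity scales as follows:

```latex
R = \frac{\lambda}{\Delta\lambda} \approx 110\,000
\;\Rightarrow\;
\Delta\lambda \approx \frac{550\,\mathrm{nm}}{110\,000} \approx 0.005\,\mathrm{nm},
\qquad
\Delta v = \frac{c}{R} \approx \frac{3\times 10^{5}\,\mathrm{km\,s^{-1}}}{110\,000} \approx 2.7\,\mathrm{km\,s^{-1}}
```

Radial-velocity changes of a few km/s thus correspond to shifts of about one resolution element, and smaller shifts become measurable by centroiding many spectral lines at once.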
The spectra are digitally recorded by two highly efficient CCD detectors for subsequent analysis and astrophysical interpretation. By optimizing the transmission of the various optical components in its two channels, UVES has a very high efficiency all the way from the UV (wavelength about 300 nm) to the near-infrared (1000 nm or 1 µm). This guarantees that only a minimum of the precious light that is collected by KUEYEN is lost and that detailed spectra can be obtained of even quite faint objects, down to about magnitude 20 (corresponding to nearly one million times fainter than what can be perceived with the unaided eye). The possibility of doing simultaneous observations in the two channels (with a dichroic mirror) ensures a further gain in data gathering efficiency. First Observations with UVES In the evening of September 27, 1999, the ESO astronomers turned the KUEYEN telescope and - for the first time - focussed the light of stars and galaxies on the entrance aperture of the UVES instrument. This is the crucial moment of "First Light" for a new astronomical facility. The following test period will last about three weeks. Much of the time during the first observing nights was spent by functional tests of the various observation modes and by targeting "standard stars" with well-known properties in order to measure the performance of the new instrument. They showed that it is behaving very well. This marks the beginning of a period of progressive fine-tuning that will ultimately bring UVES to peak performance. The astronomers also did a few "scientific" observations during these nights, aimed at exploring the capabilities of their new spectrograph. They were eager to do so, also because UVES is the first spectrograph of this type installed at a telescope of large diameter in the southern hemisphere . Many exciting research possibilities are now opening with UVES . They include a study of the chemical history of many galaxies in the Local Group, e.g. by observing the most metal-poor (oldest) stars in the Milky Way Galaxy and by obtaining the first, extremely detailed spectra of their brightest stars in the Magellanic Clouds. Quasars and distant compact galaxies will also be among the most favoured targets of the first UVES observers, not least because their spectra carry crucial information about the density, physical state and chemical composition of the early Universe. UVES First Light: SN 1987A One of the first spectral test exposures with UVES at KUEYEN was of SN 1987A , the famous supernova that exploded in the Large Magellanic Cloud (LMC) in February 1987, and the brightest supernova of the last 400 years. ESO PR Photo 37a/99 ESO PR Photo 37a/99 [Preview - JPEG: 400 x 455 pix - 87k] [Normal - JPEG: 645 x 733 pix - 166k] Caption to ESO PR Photo 37a/99 : This is a direct image of SN1987A, flanked by two nearby stars. The distance between these two is 4.5 arcsec. The slit (2.0 arcsec wide) through which the echelle spectrum shown in PR Photo 37b/99 was obtained, is outlined. This reproduction is from a 2-min exposure through a R(ed) filter with the FORS1 multi-mode instrument at VLT ANTU, obtained in 0.55 arcsec seeing on September 20, 1998. North is up and East is left. ESO PR Photo 37b/99 ESO PR Photo 37b/99 [Preview - JPEG: 400 x 459 pix - 130k] [Normal - JPEG: 800 x 917 pix - 470k] [High-Res - JPEG: 3000 x 3439 pix - 6.5M] Caption to ESO PR Photo 37b/99 : This shows the raw image, as read from the CCD, with the recorded echelle spectrum of SN1987A. 
With this technique, the supernova spectrum is divided into many individual parts ( spectral orders , each of which appears as a narrow horizontal line) that together cover the wavelength interval from 479 to 682 nm (from the bottom to the top), i.e. from blue to red light. Many bright emission lines from different elements are visible, e.g. the strong H-alpha line from hydrogen near the centre of the fourth order from the top. Emission lines from the terrestrial atmosphere are seen as vertical bright lines that cover the full width of the individual horizontal bands. Since this exposure was done with the nearly Full Moon above the horizon, an underlying, faint absorption-line spectrum of reflected sunlight is also visible. The exposure time was 30 min and the seeing conditions were excellent (0.5 arcsec). ESO PR Photo 37c/99 ESO PR Photo 37c/99 [Preview - JPEG: 400 x 355 pix - 156k] [Normal - JPEG: 800 x 709 pix - 498k] [High-Res - JPEG: 1074 x 952 pix - 766k] Caption to ESO PR Photo 37c/99 : This false-colour image has been extracted from another UVES echelle spectrum of SN 1987A, similar to the one shown in PR Photo 37b/99 , but with a slit width of 1 arcsec only. The upper part shows the emission lines of nitrogen, sulfur and hydrogen, as recorded in some of the spectral orders. The pixel coordinates (X,Y) in the original frame are indicated; the red colour indicates the highest intensities. Below is a more detailed view of the complex H-alpha emission line, with the corresponding velocities and the position along the spectrograph slit indicated. Several components of this line can be distinguished. The bulk of the emission (here shown in red colour) comes from the ring surrounding the supernova; the elongated shape here is due to the differential velocity exhibited by the near (to us) and far sides of the ring. The two bright spots on either side are emission from two outer rings (not immediately visible in PR Photo 37a/99 ). The extended emission in the velocity direction originates from material inside the ring upon which the fastest moving ejecta from the supernova have impacted (As seen in VLT data obtained previously with the ANTU/ISAAC combination (cf. PR Photo 11/99 ), exciting times now lie ahead for SN 1987A. The ejecta moving at 30,000 km/s (1/10th the speed of light) have now, 12 years after the explosion, reached the ring of material and the predicted "fireworks" are about to be ignited.) Finally, there is a broad emission extending all along the spectrograph slit (here mostly yellow) upon which the ring emission is superimposed. This is not associated with the supernova itself, but is H-alpha emission by diffuse gas in the Large Magellanic Cloud (LMC) in which SN 1987A is located. UVES First Light: QSO HE2217-2818 The power of UVES is demonstrated by this two-hour test exposure of the southern quasar QSO HE2217-2818 with U-magnitude = 16.5 and a redshift of z = 2.4. It was discovered a few years ago during the Hamburg-ESO Quasar Survey , by means of photographic plates taken with the 1-m ESO Schmidt Telescope at La Silla, the other ESO astronomical site in Chile. 
ESO PR Photo 37d/99 ESO PR Photo 37d/99 [Preview - JPEG: 400 x 309 pix - 92k] [Normal - JPEG: 800 x 618 pix - 311k] [High-Res - JPEG: 3000 x 2316 pix - 5.0M] ESO PR Photo 37e/99 ESO PR Photo 37e/99 [Preview - JPEG: 400 x 310 pix - 43k] [Normal - JPEG: 800 x 619 pix - 100k] [High-Res - JPEG: 3003 x 2324 pix - 436k] Caption to ESO PR Photo 37d/99 : This UVES echelle spectrum of QSO HE2217-2818 (U-magnitude = 16.5) is recorded in different orders (the individual horizontal lines) and altogether covers the wavelength interval from 330 to 450 nm (from the bottom to the top). It illustrates the excellent capability of UVES to work in the UV-band on even faint targets. Simultaneously with this observation, UVES also recorded the adjacent spectral region 465 - 660 nm in its other channel. The broad Lyman-alpha emission from ionized hydrogen associated with the powerful energy source of the QSO is seen in the upper half of the spectrum at wavelength 413 nm. At shorter wavelengths, the dark regions in the spectrum are Lyman-alpha absorption lines from intervening, neutral hydrogen gas located along the line-of-sight at different redshifts (the so-called Lyman-alpha forest ) in the redshift interval z = 1.7 - 2.4. Note that since this exposure was done with the nearly Full Moon above the horizon, an underlying, faint absorption-line spectrum of reflected sunlight is also visible. Caption to ESO PR Photo 37e/99 : A tracing of one spectral order, corresponding to one horizontal line in the echelle spectrum displayed in PR Photo 37d/99 . It shows part of the Lyman-alpha forest in the ultraviolet spectrum of the southern quasar QSO HE2217-2818 . The absorption lines are caused by intervening, neutral hydrogen gas located at different distances along the line-of-sight towards this quasar. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org/ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
Themanson, Jason R
2014-11-15
Social exclusion is a complex social phenomenon with powerful negative consequences. Given the impact of social exclusion on mental and emotional health, an understanding of how perceptions of social exclusion develop over the course of a social interaction is important for advancing treatments aimed at lessening the harmful costs of being excluded. To date, most scientific examinations of social exclusion have looked at exclusion after a social interaction has been completed. While this has been very helpful in developing an understanding of what happens to a person following exclusion, it has not helped to clarify the moment-to-moment dynamics of the process of social exclusion. Accordingly, the current protocol was developed to obtain an improved understanding of social exclusion by examining the patterns of event-related brain activation that are present during social interactions. This protocol allows greater precision and sensitivity in detailing the social processes that lead people to feel as though they have been excluded from a social interaction. Importantly, the current protocol can be adapted to include research projects that vary the nature of exclusionary social interactions by altering how frequently participants are included, how long the periods of exclusion will last in each interaction, and when exclusion will take place during the social interactions. Further, the current protocol can be used to examine variables and constructs beyond those related to social exclusion. This capability to address a variety of applications across psychology by obtaining both neural and behavioral data during ongoing social interactions suggests the present protocol could be at the core of a developing area of scientific inquiry related to social interactions.
Cervical and thoracic spine injury from interactions with vehicle roofs in pure rollover crashes.
Bambach, M R; Grzebieta, R H; McIntosh, A S; Mattos, G A
2013-01-01
Around one third of serious injuries sustained by belted, non-ejected occupants in pure rollover crashes occur to the spine. Dynamic rollover crash test methodologies have been established in Australia and the United States, with the aims of understanding injury potential in rollovers and establishing the basis of an occupant rollover protection crashworthiness test protocol that could be adopted by consumer new car assessment programmes and government regulators internationally. However, for any proposed test protocol to be effective in reducing the high trauma burden resulting from rollover crashes, appropriate anthropomorphic devices that replicate real-world injury mechanisms and biomechanical loads are required. To date, consensus regarding the combination of anthropomorphic device and neck injury criteria for rollover crash tests has not been reached. The aim of the present study is to provide new information pertaining to the nature and mechanisms of spine injury in pure rollover crashes, and to assist in the assessment of spine injury potential in rollover crash tests. Real-world spine injury cases that resulted from pure rollover crashes in the United States between 2000 and 2009 are identified, and compared with cadaver experiments under vertical load by other authors. The analysis is restricted to contained, restrained occupants that were injured from contact with the vehicle roof structure during a pure rollover, and the role of roof intrusion in creating potential for spine injury is assessed. Recommendations for assessing the potential for spine injury in rollover occupant protection crash test protocols are made. Copyright © 2012 Elsevier Ltd. All rights reserved.
Kotz, D; van Litsenburg, W; van Duurling, R; van Schayck, C P; Wesseling, G J
2008-01-01
To describe Dutch respiratory nurses' current smoking cessation practices, attitudes and beliefs, and to compare these with a survey from the year 2000, before the national introduction of a protocol for the treatment of nicotine and tobacco addiction (the L-MIS protocol). Questionnaire survey among all 413 registered respiratory nurses in the Netherlands in 2006. The response rate was 62%. Seventy-seven percent of the respondents reported to have "fairly good" or "good" knowledge of all steps of the L-MIS protocol. Seven out of 10 behavioural techniques for smoking cessation from the protocol were used by more than 94% of the respondents. Seventy-four percent of the respiratory nurses recommended the use of either nicotine replacement therapy (70%) or bupropion (44%). Almost two-thirds (65% of 254) perceived lack of patient's motivation as the most important barrier for smoking cessation treatment; a four-fold increase compared to the year 2000. We conclude that respiratory nurses are compliant with the L-MIS protocol. They offer intensive support and use behavioural techniques for smoking cessation more frequently than evidence-based pharmacological aids for smoking cessation. Perceived lack of patient's motivation forms the most important threat to respiratory nurses' future smoking cessation activities. International guidelines acknowledge that respiratory patients have a more urgent need to stop smoking but have more difficulty doing so. They should be offered the most intensive smoking cessation counselling in combination with pharmacotherapy. This kind of counselling may be more feasible for respiratory nurses than for physicians who often lack time. Their efforts could be increased by reimbursing pharmacological aids for smoking cessation and by developing simple tools to systematically assess motivation to quit and psychiatric co-morbidity in smoking patients.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-06
... Resident. We will not accept group or family photographs; you must include a separate photograph for each... new digital image: The image file format must be in the Joint Photographic Experts Group (JPEG) format... Web site four to six weeks before the scheduled interviews with U.S. consular officers at overseas...
A Posteriori Restoration of Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Brown, R.; Boden, A. F.
1995-01-01
The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.
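As a rough illustration of the block-transform-plus-quantization idea described in this abstract (a floating-point DCT from SciPy is used here as a stand-in for the Integer Cosine Transform, and the block size and quantization step are arbitrary choices, not Galileo mission parameters), a minimal sketch could look like this:

```python
# Illustrative sketch of block-transform image compression in the spirit of
# the Integer Cosine Transform described above. A floating-point DCT stands
# in for the ICT; quantization step and block size are arbitrary.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def compress_decompress(image, block=8, step=16.0):
    """Quantize each 8x8 DCT block with a uniform step and reconstruct."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            coeffs = dct2(image[y:y+block, x:x+block].astype(float))
            q = np.round(coeffs / step)           # lossy step: quantization
            out[y:y+block, x:x+block] = idct2(q * step)
    return np.clip(out, 0, 255)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64)).astype(float)
    rec = compress_decompress(img)
    print("RMS error:", np.sqrt(np.mean((img - rec) ** 2)))
```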
Client Location in 802.11 Networks
2007-03-01
The Encyclopedia of Networking. 1995. Alameda. 3. Forouzan, Behrouz A. TCP/IP Protocol Suite. 2nd ed. New York: McGraw-Hill, 2003. 4. Holt, Keith...Proceedings. April 2003: 1353-1358. 18. Willingham, Stephen. Navy Pursuing ‘Smaller, Deployable, Interactive’ Networked Systems. Nov 2000. National
Three Good Reasons for Celebrating at the ESO/ST-ECF Science Archive Facility
NASA Astrophysics Data System (ADS)
2000-12-01
Great Demand for Data from New "Virtual Observatory" Summary Due to a happy coincidence, the ESO/ST-ECF Science Archive Facility is celebrating three different milestones at the same time: * its 10th anniversary since the establishment in 1991 * the 10,000th request for data , and * the signing-up of active user number 2000 . This Archive contains over 8 Terabytes (1 Terabyte = 1 million million bytes) of valuable observational data from the NASA/ESA Hubble Space Telescope (HST), the ESO Very Large Telescope (VLT) and other ESO telescopes . Its success paves the way for the establishment of "Virtual Observatories" from which first-class data can be obtained by astronomers all over the world. This greatly enhances the opportunities for more (young) scientists to participate in front-line research. PR Photo 34/00 : Front-page of a new brochure, describing the ESO/ST-ECF Science Archive Facility. Just 10 years ago, on the 1st of January 1991, the ESO/ST-ECF (European Southern Observatory/Space Telescope-European Coordinating Facility) Science Archive Facility opened. It has since served the astronomical community with gigabyte after gigabyte of high-quality astronomical data from some of the world's leading telescopes. The Archive, which is located in Garching, just outside Munich (Germany), contains data from the 2.4-m NASA/ESA Hubble Space Telescope , as well as from several ESO telescopes: the four 8.2-m Unit Telescopes of the Very Large Telescope (VLT) at the Paranal Observatory , and the 3.5-m New Technology Telescope (NTT) , the 3.6-m telescope and the MPG/ESO 2.2-m telescope at La Silla. The Archive is a continuously developing project - in terms of amounts of data stored, the number of users and in particular because of the current dramatic development of innovative techniques for data handling and storage. In the year 2000 more than 2 Terabytes (2000 Gigabytes) of data were distributed to users worldwide. The archiving of VLT data has been described in ESO PR 10/99. Celebrating the 10th anniversary Due to a happy coincidence, the Archive passes two other milestones almost exactly at the time of its ten-year anniversary: the 10,000th request for data has just arrived, and active user number 2000 has just signed up to start using the Archive . Dataset number 10000 was requested by Danish astronomer Søren Larsen who works at the University of California (USA). He asked for images of galaxies taken with the Hubble Space Telescope and expressed great satisfaction with the material: "The extremely sharp images from Hubble have provided a quantum leap forward in our ability to study star clusters in external galaxies. We now know that some galaxies contain extremely bright young star clusters. These might constitute a "link" between open and globular clusters as we know them in the Milky Way galaxy in which we live. We are now trying to understand whether all these clusters really form in the same basic way." Active user number 2000 is Swiss astronomer Frédéric Pont , working at the Universidad de Chile: "We use observations from the ESO VLT Unit Telescopes to map the chemical and star-formation history of dwarf galaxies in the Local Group. The stars we are looking at are very faint and we simply need the large size and excellent quality of VLT to observe them in detail. With the new data, we can really move forward in this fundamental research field." 
ESO PR Photo 34/00 ESO PR Photo 34/00 [Preview - JPEG: 400 x 281 pix - 63k] [Normal - JPEG: 800 x 562 pix - 224k] [Full-Res - JPEG: 1024 x 714 pix - 336k] Caption : PR Photo 34/00 shows the front page of the new brochure that describes the ESO/ST-ECF Science Archive Facility (available in PDF version on the web). The collage shows the Hubble Space Telescope above the world's largest optical/infrared telescope, the Very Large Telescope (VLT). To celebrate this special occasion, a 4-page brochure has been prepared that describes the Archive and its various services. The brochure can be requested from ESO or ST-ECF and is now available in PDF format on the web. As a small token, the two astronomers will receive a commemorative version of the photo that accompanies this release. The ASTROVIRTEL initiative One of the major new initiatives undertaken by ESO and ST-ECF in connection with the ESO/ST-ECF Science Archive is ASTROVIRTEL (Accessing Astronomical Archives as Virtual Telescopes) , cf. ESO PR 09/00. It is a project aimed at helping scientists to cope efficiently with the massive amounts of data now becoming available from the world's leading telescopes and so to exploit the true potential of the Archive treasures. ASTROVIRTEL represents the European effort in an area that many astronomers consider one of the most important developments in observational astronomy in the past decade. The future The head of the ESO/ST-ECF Science Archive Facility , Benoît Pirenne , believes that the future holds exciting challenges: "Due to the many improvements of the ESO, NASA and ESA telescopes and instruments expected in the coming years, we anticipate a tremendous increase in the amount of data to be archived and re-distributed. It will not be too long before we will have to start counting storage space in Petabytes (1 Petabyte = 1,000 Terabytes). We are now trying to figure out how to best prepare for this new era." But he is also concerned with maintaining and further enhancing the astronomical value of the data that are made available to the users: "Apart from improving the data storage, we need to invest much effort in building automatic software that will help users with the tedious pre-processing and 'cleaning' of the data, thereby allowing them to focus more on scientific than technical problems."
A Web Tool for Generating High Quality Machine-readable Biological Pathways.
Ramirez-Gaona, Miguel; Marcu, Ana; Pon, Allison; Grant, Jason; Wu, Anthony; Wishart, David S
2017-02-08
PathWhiz is a web server built to facilitate the creation of colorful, interactive, visually pleasing pathway diagrams that are rich in biological information. The pathways generated by this online application are machine-readable and fully compatible with essentially all web-browsers and computer operating systems. It uses a specially developed, web-enabled pathway drawing interface that permits the selection and placement of different combinations of pre-drawn biological or biochemical entities to depict reactions, interactions, transport processes and binding events. This palette of entities consists of chemical compounds, proteins, nucleic acids, cellular membranes, subcellular structures, tissues, and organs. All of the visual elements in it can be interactively adjusted and customized. Furthermore, because this tool is a web server, all pathways and pathway elements are publicly accessible. This kind of pathway "crowd sourcing" means that PathWhiz already contains a large and rapidly growing collection of previously drawn pathways and pathway elements. Here we describe a protocol for the quick and easy creation of new pathways and the alteration of existing pathways. To further facilitate pathway editing and creation, the tool contains replication and propagation functions. The replication function allows existing pathways to be used as templates to create or edit new pathways. The propagation function allows one to take an existing pathway and automatically propagate it across different species. Pathways created with this tool can be "re-styled" into different formats (KEGG-like or text-book like), colored with different backgrounds, exported to BioPAX, SBGN-ML, SBML, or PWML data exchange formats, and downloaded as PNG or SVG images. The pathways can easily be incorporated into online databases, integrated into presentations, posters or publications, or used exclusively for online visualization and exploration. This protocol has been successfully applied to generate over 2,000 pathway diagrams, which are now found in many online databases including HMDB, DrugBank, SMPDB, and ECMDB.
Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc
NASA Astrophysics Data System (ADS)
Becker, Peter; Plesea, Lucian; Maurer, Thomas
2016-06-01
The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and the information extracted from it. Cloud-based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and the lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details of a solution utilizing a new open image format for storage and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression scheme that can be used with MRF and provides very good lossless and controlled lossy compression.
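A minimal sketch of the access pattern argued for above is shown below: a small index maps each tile to a byte range inside a large object, and an HTTP Range request pulls only that range from object storage. The URL and the index layout are hypothetical illustrations, not the MRF file format or an existing library API.

```python
# Sketch of the cloud-access pattern described above: fetch only the byte
# range of one tile from a large remote object via an HTTP Range request.
import requests

def fetch_tile(url, offset, size):
    """Fetch `size` bytes starting at `offset` from a remote object."""
    headers = {"Range": f"bytes={offset}-{offset + size - 1}"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()              # expect 206 Partial Content
    return resp.content

# Example: a hypothetical per-tile index of (offset, length) entries.
tile_index = {(0, 0): (0, 65536), (0, 1): (65536, 70000)}
# data = fetch_tile("https://example-bucket.s3.amazonaws.com/scene.mrf_data",
#                   *tile_index[(0, 1)])
```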
Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.
Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin
2005-03-01
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ~ c0 2^(-c1 R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
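The prune step of such an R-D optimized tree can be illustrated with a short sketch: a segment is split only when the children's combined Lagrangian cost (distortion plus lambda times rate) beats a single polynomial fit to the parent. The polynomial fit, the fixed-rate coefficient model and the value of lambda below are illustrative simplifications, not the authors' coder.

```python
# Sketch of the Lagrangian prune decision behind R-D optimized binary-tree
# segmentation. Rate model, lambda and polynomial degree are illustrative.
import numpy as np

BITS_PER_COEFF = 12  # assumed fixed-rate coefficient coding

def fit_cost(segment, degree, lam):
    """Distortion + lambda*rate for approximating a segment by one polynomial."""
    x = np.arange(len(segment))
    coeffs = np.polyfit(x, segment, degree)
    dist = np.sum((np.polyval(coeffs, x) - segment) ** 2)
    rate = (degree + 1) * BITS_PER_COEFF
    return dist + lam * rate

def prune(signal, lo, hi, degree=1, lam=10.0, min_len=4):
    """Recursively split a segment in two; keep the split only if it lowers cost."""
    parent = fit_cost(signal[lo:hi], degree, lam)
    if hi - lo < 2 * min_len:
        return [(lo, hi)], parent
    mid = (lo + hi) // 2
    left_leaves, left_cost = prune(signal, lo, mid, degree, lam, min_len)
    right_leaves, right_cost = prune(signal, mid, hi, degree, lam, min_len)
    if left_cost + right_cost < parent:       # children win: keep the split
        return left_leaves + right_leaves, left_cost + right_cost
    return [(lo, hi)], parent                 # prune: parent approximation wins

if __name__ == "__main__":
    sig = np.concatenate([np.linspace(0, 1, 64), np.linspace(5, 4, 64)])
    leaves, cost = prune(sig, 0, len(sig))
    print("segments:", leaves)
```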
Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K
2017-01-01
The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output.
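For readers who want to reproduce the spirit of these perturbations, a minimal sketch using Pillow is given below; the parameter values are arbitrary, ordinary JPEG stands in for the JPEG2000 compression used in the study, and this is not the scanner or Visiopharm analysis pipeline itself.

```python
# Illustrative perturbations of an ROI image: brightness, contrast,
# lossy compression and out-of-focus blur, applied with Pillow.
import io
from PIL import Image, ImageEnhance, ImageFilter

def perturb(img, brightness=1.2, contrast=1.3, quality=30, blur_radius=2.0):
    out = ImageEnhance.Brightness(img).enhance(brightness)    # illumination
    out = ImageEnhance.Contrast(out).enhance(contrast)        # contrast
    buf = io.BytesIO()
    out.save(buf, format="JPEG", quality=quality)              # lossy compression
    buf.seek(0)
    out = Image.open(buf)
    return out.filter(ImageFilter.GaussianBlur(blur_radius))   # blur

if __name__ == "__main__":
    roi = Image.new("RGB", (256, 256), (180, 120, 140))
    perturb(roi).save("roi_degraded.png")
```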
Helioviewer.org: An Open-source Tool for Visualizing Solar Data
NASA Astrophysics Data System (ADS)
Hughitt, V. Keith; Ireland, J.; Schmiedel, P.; Dimitoglou, G.; Mueller, D.; Fleck, B.
2009-05-01
As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraph data, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. Currently, Helioviewer enables users to browse the entire SOHO data archive, updated hourly, as well as feature/event data from eight different catalogs, including active region, flare, coronal mass ejection, and type II radio burst data. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc.), Helioviewer will offer a number of externally available application programming interfaces (APIs) to enable easy third-party use, adoption and extension. Future functionality will include: support for additional data sources including TRACE, SDO and STEREO, dynamic movie generation, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.
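The tiling idea is simple to make concrete: given a requested viewport on a large image, only the tiles that intersect it need to be decoded and served. The sketch below assumes a fixed tile size and pixel coordinates; it is not Helioviewer's actual server code or API.

```python
# Sketch of viewport-to-tile mapping: return the indices of all fixed-size
# tiles that overlap the requested region of a large image.
def tiles_for_viewport(x0, y0, width, height, tile=512):
    """Return (column, row) indices of tiles overlapping the viewport."""
    first_col, first_row = x0 // tile, y0 // tile
    last_col = (x0 + width - 1) // tile
    last_row = (y0 + height - 1) // tile
    return [(c, r) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]

# Example: a 1000x800 viewport starting at (1800, 600) touches 6 tiles.
print(tiles_for_viewport(1800, 600, 1000, 800))
```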
Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images
NASA Astrophysics Data System (ADS)
Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.
2017-10-01
The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, the noise in Sentinel-2 Level-1C data distributed to users is itself processed. We demonstrate that the processed noise variance model is bivariate: noise variance depends on image intensity (caused by the signal dependency of photon-counting detectors) and on the signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on the processed noise parameters, which is missing from the Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to a univariate noise model. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to a multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that noise variance is affected by filtering/compression for SNR less than about 15. The processed noise variance is reduced by a factor of 2 - 5 in homogeneous areas as compared to the noise variance for high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.
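As a rough illustration of what such a bivariate processed-noise model can look like (the functional form of the SNR-dependent factor below is an assumption made here for illustration, not the model fitted in the paper; only the quoted factor of 2 - 5 and the SNR of about 15 come from the abstract):

```latex
\sigma^{2}_{\mathrm{raw}}(I) = a + b\,I,
\qquad
\sigma^{2}_{\mathrm{proc}}(I,\mathrm{SNR}) = \bigl(a + b\,I\bigr)\,f(\mathrm{SNR}),
\qquad
f(\mathrm{SNR}) \to 1 \ \text{for}\ \mathrm{SNR} \gtrsim 15,
\quad
f(\mathrm{SNR}) \approx \tfrac{1}{5}\text{--}\tfrac{1}{2} \ \text{at low SNR}
```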
Ozone depleting substances: a key forcing of the Brewer-Dobson circulation
NASA Astrophysics Data System (ADS)
Abalos, M.; Polvani, L. M.; Garcia, R. R.; Kinnison, D. E.; Randel, W. J.
2017-12-01
In contrast with monotonically increasing greenhouse gases (GHG), Ozone Depleting Substances (ODS) peak around the year 2000 and decrease thereafter, thanks to the Montreal Protocol. We examine the influence of these anthropogenic emissions on the Brewer-Dobson circulation (BDC) using specifically designed runs of the Community Earth System Model - Whole Atmosphere Community Climate Model (CESM-WACCM). Consistent with previous works, we find a dominant role of ODSs in the observed BDC acceleration up to 2000 in the SH summer, through dynamical changes induced by the ozone hole. We extend the analyses to quantify the influence of ODSs on the BDC for different regions and seasons, and compare the model results to observational estimates. Finally, we show that ODSs will substantially reduce the GHG-induced BDC acceleration in the future. Specifically, the trends in stratospheric mean age of air will be 4 times smaller in the period 2000-2080 as compared to the period 1965-2000.
ICCE/ICCAI 2000 Full & Short Papers (Interactive Learning Environments).
ERIC Educational Resources Information Center
2000
This document contains the full and short papers on interactive learning environments from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: a CAL system for appreciation of 3D shapes by surface development; a constructivist virtual physics…
Password-only authenticated three-party key exchange with provable security in the standard model.
Nam, Junghyun; Choo, Kim-Kwang Raymond; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon; Won, Dongho
2014-01-01
Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.
Zhang, Nan; Membreno, Edward; Raj, Susan; Zhang, Hongjie; Khan, Liakot A; Gobel, Verena
2017-10-03
The four C. elegans excretory canals are narrow tubes extended through the length of the animal from a single cell, with almost equally far extended intracellular endotubes that build and stabilize the lumen with a membrane and submembraneous cytoskeleton of apical character. The excretory cell expands its length approximately 2,000 times to generate these canals, making this model unique for the in vivo assessment of de novo polarized membrane biogenesis, intracellular lumen morphogenesis and unicellular tubulogenesis. The protocol presented here shows how to combine standard labeling, gain- and loss-of-function genetic or RNA interference (RNAi)-, and microscopic approaches to use this model to visually dissect and functionally analyze these processes on a molecular level. As an example of a labeling approach, the protocol outlines the generation of transgenic animals with fluorescent fusion proteins for live analysis of tubulogenesis. As an example of a genetic approach, it highlights key points of a visual RNAi-based interaction screen designed to modify a gain-of-function cystic canal phenotype. The specific methods described are how to: label and visualize the canals by expressing fluorescent proteins; construct a targeted RNAi library and strategize RNAi screening for the molecular analysis of canal morphogenesis; visually assess modifications of canal phenotypes; score them by dissecting fluorescence microscopy; characterize subcellular canal components at higher resolution by confocal microscopy; and quantify visual parameters. The approach is useful for the investigator who is interested in taking advantage of the C. elegans excretory canal for identifying and characterizing genes involved in the phylogenetically conserved processes of intracellular lumen and unicellular tube morphogenesis.
Morozesk, Mariana; Franqui, Lidiane S; Mansano, Adrislaine S; Martinez, Diego Stéfani T; Fernandes, Marisa N
2018-05-05
The widespread production and application of carbon nanotubes (CNT) have raised concerns about their release into the environment, and the joint toxicity of CNT with pre-existing contaminants needs to be assessed. This is the first study to investigate the co-exposure of oxidized multiwalled carbon nanotubes (ox-MWCNT) and cadmium (Cd) using a zebrafish liver cell line (ZFL). Two in vitro co-exposure protocols, differing in the order of ox-MWCNT interaction with Cd and fetal bovine serum (FBS) proteins, were evaluated. Ox-MWCNT was physically and chemically characterized, and its adsorption capacity and colloidal stability in cell culture medium were determined for both protocols. Cytotoxicity was investigated by MTT, neutral red, trypan blue and lactate dehydrogenase assays, and necrosis and apoptosis events were determined using a flow cytometer. The presence of Cd in the medium did not interfere with the protein corona composition of the MWCNT, but the order of interaction with FBS and Cd affected its colloidal stability and metal adsorption rate. The ox-MWCNT increased Cd toxicity at low concentration, probably through a "Trojan horse" and/or synergistic effect, and induced apoptosis and necrosis in ZFL cells. Although no differences in toxicity between the protocols were observed, the interaction of ox-MWCNT first with Cd led to its precipitation in the cell culture medium and, as a consequence, to a possible false viability result in the neutral red assay. Taken together, it is evident that the order of interaction of the compounds disturbs the colloidal stability and affects the in vitro toxicological assays. Considering that Protocol A showed greater ox-MWCNT stability after interaction with Cd, this protocol is recommended for adoption in future studies. Copyright © 2018 Elsevier B.V. All rights reserved.
Forensic Analysis of Digital Image Tampering
2004-12-01
analysis of when each method fails, which Chapter 4 discusses. Finally, a test image containing an invisible watermark using LSB steganography is... [List of figures from the source document:] Figure 2.2 – Example of invisible watermark using Steganography Software F5; Figure 2.3 – Example of copy-move image forgery [12...; Figure 3.11 – Algorithm for JPEG Block Technique; Figure 3.12 – "Forged" Image with Result
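The snippet mentions an invisible watermark embedded with LSB steganography; the following is a minimal sketch of that generic technique on a grayscale array, not the F5 software or the thesis's own code.

```python
# Minimal least-significant-bit (LSB) embedding/extraction sketch.
import numpy as np

def embed_lsb(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with the message bits."""
    flat = pixels.flatten().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_bits):
    """Read back the message bits from the pixel LSBs."""
    return (pixels.flatten()[:n_bits] & 1).tolist()

if __name__ == "__main__":
    cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = embed_lsb(cover, message)
    assert extract_lsb(stego, len(message)) == message
```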
Video segmentation for post-production
NASA Astrophysics Data System (ADS)
Wills, Ciaran
2001-12-01
Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects are quite different in nature from the typical broadcast material which many video analysis techniques are designed to work with; shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm which tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG compressed video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. Analyzing the DCT coefficients directly we can extract the mean color of a block and an approximate detail level. We can also perform an approximated cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.
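The compressed-domain idea can be sketched briefly: the DC (0,0) coefficient of an orthonormal 8x8 DCT is proportional to the block mean, so a coarse per-frame signature can be built without a full decode and compared between frames. The threshold below is an arbitrary illustration, not the paper's detector.

```python
# Sketch: recover 8x8 block means from the DCT DC term and compare frames.
import numpy as np
from scipy.fftpack import dct

def block_means_from_dc(frame, block=8):
    """Mean of each 8x8 block recovered from its orthonormal 2-D DCT DC term."""
    h, w = frame.shape
    means = np.empty((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            blk = frame[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(float)
            coeffs = dct(dct(blk, axis=0, norm="ortho"), axis=1, norm="ortho")
            means[by, bx] = coeffs[0, 0] / block    # DC / N equals the block mean
    return means

def is_cut(prev_frame, cur_frame, threshold=20.0):
    """Declare a cut when block means change a lot on average between frames."""
    diff = np.abs(block_means_from_dc(prev_frame) - block_means_from_dc(cur_frame))
    return diff.mean() > threshold

if __name__ == "__main__":
    a = np.full((64, 64), 100.0)
    b = np.full((64, 64), 160.0)
    print("cut detected:", is_cut(a, b))
```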
Privacy-preserving photo sharing based on a public key infrastructure
NASA Astrophysics Data System (ADS)
Yuan, Lin; McNally, David; Küpçü, Alptekin; Ebrahimi, Touradj
2015-09-01
A significant number of pictures are posted to social media sites or exchanged through instant messaging and cloud-based sharing services. Most social media services offer a range of access control mechanisms to protect users' privacy. As it is not in the best interest of many such services if their users restrict access to their shared pictures, most services keep users' photos unprotected, which makes them available to all insiders. This paper presents an architecture for privacy-preserving photo sharing based on an image scrambling scheme and a public key infrastructure. Secure JPEG scrambling is applied to protect regional visual information in photos. Protected images are still compatible with JPEG coding and therefore can be viewed by anyone on any device. However, only those who are granted secret keys will be able to descramble the photos and view their original versions. The proposed architecture applies attribute-based encryption along with conventional public key cryptography to achieve secure transmission of secret keys and fine-grained control over who may view shared photos. In addition, we demonstrate the practical feasibility of the proposed photo sharing architecture with a prototype mobile application, ProShare, which is built on the iOS platform.
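A toy sketch of keyed regional scrambling conveys the idea: the blocks inside a protected region are permuted with a pseudorandom permutation seeded by a secret key, and key holders invert it. The sketch below operates on raw pixels for simplicity; it is not the paper's JPEG-compatible scrambling, its key management, or the ProShare code.

```python
# Sketch: keyed block permutation of a square region and its inverse.
import numpy as np

def _perm(n_blocks, key):
    rng = np.random.default_rng(key)
    return rng.permutation(n_blocks)

def scramble_region(img, key, block=8, inverse=False):
    h, w = img.shape[:2]
    blocks = [img[y:y+block, x:x+block].copy()
              for y in range(0, h, block) for x in range(0, w, block)]
    p = _perm(len(blocks), key)
    order = np.argsort(p) if inverse else p
    out = img.copy()
    positions = [(y, x) for y in range(0, h, block) for x in range(0, w, block)]
    for i, (y, x) in enumerate(positions):
        out[y:y+block, x:x+block] = blocks[order[i]]
    return out

if __name__ == "__main__":
    region = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    protected = scramble_region(region, key=1234)
    restored = scramble_region(protected, key=1234, inverse=True)
    assert np.array_equal(region, restored)
```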
Storage, retrieval, and edit of digital video using Motion JPEG
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Lee, D. H.
1994-04-01
In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, this system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video in a storage medium, an IBM Bus master SCSI adapter with cache is utilized. Efficacy of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. We show experimental results that the overall system can perform at compressed data rates of about 1.5 MBytes/second sustained and with sporadic peaks to about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward and slow motion playback. The proposed method can be extended for design of a video compression subsystem for a variety of personal computing systems.
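The "special file format" idea can be illustrated with a small sketch: alongside the concatenated JPEG frames, a table of byte offsets is written so any frame can be seeked directly, which is what makes backward and slow-motion playback cheap. The container layout below is invented for illustration, not the format used by the adapter described in the paper.

```python
# Sketch: concatenated JPEG frames followed by an offset index and a footer,
# enabling random access to any frame for forward/backward playback.
import struct

def write_indexed_mjpeg(path, jpeg_frames):
    offsets = []
    with open(path, "wb") as f:
        for frame in jpeg_frames:
            offsets.append(f.tell())
            f.write(frame)
        index_start = f.tell()
        for off in offsets:                       # trailing index: one u64 per frame
            f.write(struct.pack("<Q", off))
        f.write(struct.pack("<QI", index_start, len(offsets)))  # footer

def read_frame(path, n):
    with open(path, "rb") as f:
        f.seek(-12, 2)                            # footer: u64 index_start + u32 count
        index_start, count = struct.unpack("<QI", f.read(12))
        f.seek(index_start + 8 * n)
        off = struct.unpack("<Q", f.read(8))[0]
        end = index_start if n == count - 1 else struct.unpack("<Q", f.read(8))[0]
        f.seek(off)
        return f.read(end - off)

if __name__ == "__main__":
    frames = [b"\xff\xd8 frame %d \xff\xd9" % i for i in range(3)]
    write_indexed_mjpeg("clip.mjpeg", frames)
    assert read_frame("clip.mjpeg", 2) == frames[2]
```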
Improved compression technique for multipass color printers
NASA Astrophysics Data System (ADS)
Honsinger, Chris
1998-01-01
A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data from the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory, but still takes advantage of the correlation between the color planes. The compression scheme is based on a block adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block adaptive decorrelation operations can be efficiently performed in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
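A sketch of block-adaptive inter-plane decorrelation in this spirit is given below: each 8x8 block of the next colour plane is predicted from the co-located block of a reference plane with a per-block gain, and only the residual would then go to a DCT/JPEG coder. The least-squares gain and the block size are illustrative choices, not the paper's exact method.

```python
# Sketch: per-block gain prediction of one colour plane from another,
# producing a decorrelated residual, plus the inverse recomposition.
import numpy as np

def decorrelate(ref_plane, cur_plane, block=8):
    """Return per-block gains and the prediction residual cur - gain*ref."""
    h, w = cur_plane.shape
    gains = np.zeros((h // block, w // block))
    residual = np.zeros_like(cur_plane, dtype=float)
    for by in range(h // block):
        for bx in range(w // block):
            sl = (slice(by*block, (by+1)*block), slice(bx*block, (bx+1)*block))
            r, c = ref_plane[sl].astype(float), cur_plane[sl].astype(float)
            g = (r * c).sum() / max((r * r).sum(), 1e-9)   # least-squares gain
            gains[by, bx] = g
            residual[sl] = c - g * r
    return gains, residual

def recompose(ref_plane, gains, residual, block=8):
    out = np.zeros_like(residual)
    for by in range(gains.shape[0]):
        for bx in range(gains.shape[1]):
            sl = (slice(by*block, (by+1)*block), slice(bx*block, (bx+1)*block))
            out[sl] = gains[by, bx] * ref_plane[sl] + residual[sl]
    return out

if __name__ == "__main__":
    ref = np.random.randint(0, 255, (32, 32)).astype(float)
    cur = 0.8 * ref + np.random.normal(0, 2, ref.shape)
    gains, res = decorrelate(ref, cur)
    assert np.allclose(recompose(ref, gains, res), cur)
```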
Analysis of In Vivo Chromatin and Protein Interactions of Arabidopsis Transcript Elongation Factors.
Pfab, Alexander; Antosz, Wojciech; Holzinger, Philipp; Bruckmann, Astrid; Griesenbeck, Joachim; Grasser, Klaus D
2017-01-01
A central step to elucidate the function of proteins commonly comprises the analysis of their molecular interactions in vivo. For nuclear regulatory proteins this involves determining protein-protein interactions as well as mapping of chromatin binding sites. Here, we present two protocols to identify protein-protein and chromatin interactions of transcript elongation factors (TEFs) in Arabidopsis. The first protocol (Subheading 3.1) describes protein affinity-purification coupled to mass spectrometry (AP-MS) that utilizes suspension cultured cells as experimental system. This approach provides an unbiased view of proteins interacting with epitope-tagged TEFs. The second protocol (Subheading 3.2) depicts details about a chromatin immunoprecipitation (ChIP) procedure to characterize genomic binding sites of TEFs. These methods should be valuable tools for the analysis of a broad variety of nuclear proteins.
Quantum counterfactual communication without a weak trace
NASA Astrophysics Data System (ADS)
Arvidsson-Shukur, D. R. M.; Barnes, C. H. W.
2016-12-01
The classical theories of communication rely on the assumption that there has to be a flow of particles from Bob to Alice in order for him to send a message to her. We develop a quantum protocol that allows Alice to perceive Bob's message "counterfactually"; that is, without Alice receiving any particles that have interacted with Bob. By utilizing a setup built on results from interaction-free measurements, we outline a communication protocol whereby the information travels in the opposite direction of the emitted particles. In comparison to previous attempts on such protocols, this one is such that a weak measurement at the message source would not leave a weak trace that could be detected by Alice's receiver. While some interaction-free schemes require a large number of carefully aligned beam splitters, our protocol is realizable with two or more beam splitters. We demonstrate this protocol by numerically solving the time-dependent Schrödinger equation for a Hamiltonian that implements this quantum counterfactual phenomenon.
Quantum gates by inverse engineering of a Hamiltonian
NASA Astrophysics Data System (ADS)
Santos, Alan C.
2018-01-01
Inverse engineering of a Hamiltonian (IEH) from an evolution operator is a useful technique for quantum control protocols, with potential applications in quantum information processing. In this paper we introduce a particular protocol to perform IEH and we show how this scheme can be used to implement a set of quantum gates by using minimal quantum resources (such as entanglement, interactions between more than two qubits or auxiliary qubits). Remarkably, while previous protocols require three-qubit interactions and/or auxiliary qubits to implement such gates, our protocol requires just two-qubit interactions and no auxiliary qubits. By using this approach we can obtain a large class of Hamiltonians that allow us to implement the single- and two-qubit gates necessary for quantum computation. To conclude this article we analyze the performance of our scheme against systematic errors related to amplitude noise, where we show that the free parameters introduced in our scheme can be useful for enhancing the robustness of the protocol against such errors.
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-11-17
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
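The following sketch illustrates, under stated assumptions, the kind of corrections this abstract refers to: a simple radial vignetting model is divided out of each band and a normalized vegetation index is then computed. The falloff model, its strength, and the synthetic Red/NIR bands are placeholders, not the study's calibrated correction.

```python
# Minimal sketch: radial vignetting correction followed by NDVI computation.
import numpy as np

def devignette(band, strength=0.35):
    """Divide out an assumed radial brightness falloff centred on the image."""
    h, w = band.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot((x - w / 2) / (w / 2), (y - h / 2) / (h / 2))
    falloff = 1.0 - strength * np.clip(r, 0, 1) ** 2   # darker toward the corners
    return band / falloff

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

rng = np.random.default_rng(1)
red = rng.uniform(0.05, 0.2, (100, 100))   # synthetic red-band reflectance
nir = rng.uniform(0.3, 0.6, (100, 100))    # synthetic NIR-band reflectance
index = ndvi(devignette(nir), devignette(red))
print("mean NDVI:", float(index.mean()))
```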
[Recommendations in neonatal resuscitation].
2004-01-01
The recommendations for neonatal resuscitation are not always based on sufficient scientific evidence and thus expert consensus based on current research, knowledge, and experience are useful for formulating practical protocols that are easy to follow. The latest recommendations, in 2000, modified previously published recommendations and are included in the present text.
ERIC Educational Resources Information Center
Callahan, Emily H.; Gillis, Jennifer M.; Romanczyk, Raymond G.; Mattson, Richard E.
2011-01-01
Many treatment programs for individuals with an autism spectrum disorder (ASD) target social skills, and there is growing attention directed toward the development of specific interventions to improve social skills and social interactions in this population (Hestenes & Carroll, 2000; Strain & Hoyson, 2000). However, there are limited tools…
Automating Security Protocol Analysis
2004-03-01
language that allows easy representation of pattern interaction. Using CSP, Lowe tests whether a protocol achieves authentication. In the case of...only to correctly code whatever protocol they intend to evaluate. The tool, OCaml 3.04 [1], translates the protocol into Horn clauses and then...model protocol transactions. One example of automated modeling software is Maude [19]. Maude was the intended language for this research, but Java
"First Light" for HARPS at La Silla
NASA Astrophysics Data System (ADS)
2003-03-01
"First Light" for HARPS at La Silla Advanced Planet-Hunting Spectrograph Passes First Tests With Flying Colours Summary The initial commissioning period of the new HARPS spectrograph (High Accuracy Radial Velocity Planet Searcher) of the 3.6-m telescope at the ESO La Silla Observatory has been successfully accomplished in the period February 11 - 27, 2003. This new instrument is optimized to detect planets in orbit around other stars ("exoplanets") by means of accurate (radial) velocity measurements with an unequalled precision of 1 meter per second . This high sensitivity makes it possible to detect variations in the motion of a star at this level, caused by the gravitational pull of one or more orbiting planets, even relatively small ones. "First Light" occurred on February 11, 2003, during the first night of tests. The instrument worked flawlessly and was fine-tuned during subsequent nights, achieving the predicted performance already during this first test run. The measurement of accurate stellar radial velocities is a very efficient way to search for planets around other stars. More than one hundred extrasolar planets have so far been detected , providing an increasingly clear picture of a great diversity of exoplanetary systems . However, current technical limitations have so far prevented the discovery around solar-type stars of exoplanets that are much less massive than Saturn, the second-largest planet in the solar system. HARPS will break through this barrier and will carry this fundamental exploration towards detection of exoplanets with masses like Uranus and Neptune. Moreover, in the case of low-mass stars - like Proxima Centauri, cf. ESO PR 05/03 - HARPS will have the unique capability to detect big "telluric" planets with only a few times the mass of the Earth . The HARPS instrument is being offered to the research community in the ESO member countries, already from October 2003 . PR Photo 08a/03 : The large optical grating of the HARPS spectrograph . PR Photo 08b/03 : The HARPS spectrograph . PR Photo 08c/03 : HARPS spectrum of the star HD100623 ("raw"). PR Photo 08d/03 : Extracted spectral tracing of the star HD100623 . PR Photo 08e/03 : Measured stability of HARPS. The HARPS Spectrograph ESO PR Photo 08a/03 ESO PR Photo 08a/03 [Preview - JPEG: 449 x 400 pix - 58k [Normal - JPEG: 897 x 800 pix - 616k] [Full-Res - JPEG: 1374 x 1226 pix - 1.3M] ESO PR Photo 08b/03 ESO PR Photo 08b/03 [Preview - JPEG: 500 x 400 pix - 83k [Normal - JPEG: 999 x 800 pix - 727k] [Full-Res - JPEG: 1600 x 1281 pix - 1.3M] Captions : PR Photo 08a/03 and PR Photo 08b/03 show the HARPS spectrograph during laboratory tests. The vacuum tank is open so that some of the high-precision components inside can be seen. On PR Photo 08a/03 , the large optical grating by which the incoming stellar light is dispersed is visible on the top of the bench; it measures 200 x 800 mm. HARPS is a unique fiber-fed "echelle" spectrograph able to record at once the visible range of a stellar spectrum (wavelengths from 380 - 690 nm) with very high spectral resolving power (better than R = 100,000 ). Any light losses inside the instrument caused by reflections of the starlight in the various optical components (mirrors and gratings), have been minimised and HARPS therefore works very efficiently . 
First observations ESO PR Photo 08c/03 ESO PR Photo 08c/03 [Preview - JPEG: 400 x 490 pix - 52k [Normal - JPEG: 800 x 980 pix - 362k] [Full-Res - JPEG: 1976 x 1195 pix - 354k] ESO PR Photo 08d/03 ESO PR Photo 08d/03 [Preview - JPEG: 485 x 400 pix - 53k [Normal - JPEG: 969X x 800 pix - 160k] Captions : PR Photo 08c/03 displays a HARPS untreated ("raw") exposure of the star HD100623 , of the comparatively cool stellar spectral type K0V. The frame shows the complete image as recorded with the 4000 x 4000 pixel CCD detector in the focal plane of the spectrograph. The horizontal white lines correspond to the stellar spectrum, divided into 70 adjacent spectral bands which together cover the entire visible wavelength range from 380 to 690 nm. Some of the stellar absorption lines are seen as dark horizontal features; they are the spectral signatures of various chemical elements in the star's upper layers ("atmosphere"). Bright emission lines from the heavy element thorium are visible between the bands - they are exposed by a lamp in the spectrograph to calibrate the wavelengths. This allows measuring any instrumental drift, thereby guaranteeing the exceedingly high precision that qualifies HARPS. PR Photo 08d/03 displays a small part of the spectrum of the star HD100623 following on-line data extraction (in astronomical terminology: "reduction") of the previous raw frame, shown in PR Photo 08c/03 . Several deep absorption lines are clearly visible. During the first commissioning period in February 2003, the high efficiency of HARPS was clearly demonstrated by observations of a G6V-type star of magnitude 8. This star is similar to, but slightly less heavy than our Sun and about 5 times fainter than the faintest stars visible with the unaided eye. During an exposure lasting only one minute, a signal-to-noise ratio (S/N) of 45 per pixel was achieved - this allows to determine the star's radial velocity with an uncertainty of only ~1 m/s! . For comparison, the velocity of a briskly walking person is about 2 m/s. A main performance goal of the HARPS instrument has therefore been reached, already at this early moment. This result also demonstrates an impressive gain in efficiency of no less than about 75 times as compared to that achievable with its predecessor CORALIE. That instrument has been operating very successfully at the 1.2-m Swiss Leonard Euler telescope at La Silla and has discovered several exoplanets during the past years, see for instance ESO Press Releases ( PR 18/98 , PR 13/00 and PR 07/01 ). In practice, this means that this new planet searcher at La Silla can now investigate many more stars in a given observing time and consequently with much increased probability for success. Extraordinary stability ESO PR Photo 08e/03 ESO PR Photo 08e/03 [Preview - JPEG: 478 x 400 pix - 38k [Normal - JPEG: 955 x 800 pix - 111k] Captions : PR Photo 08e/03 is a powerful demonstration of the extraordinary stability of the HARPS spectrograph. It plots the instrumentally induced velocity change, as measured during one night (9 consecutive hours) in the commissioning period. The drift of the instrument is determined by computing the exact position of the Thorium emission lines. As can be seen, the drift is of the order of 1 m/s during 9 hours and is measured with an accuracy of only 20 cm/s. The goal of measuring velocities of stars with an accuracy comparable to that of a pedestrian has required extraordinary efforts for the design and construction of this instrument. 
Indeed, HARPS is the most stable spectrograph ever built for astronomical applications . A crucial measure in this respect is the location of the HARPS spectrograph in a climatized room in the telescope building. The starlight captured by the 3.6-m telescope is guided to the instrument through a very efficient optical fibre from the telescope's Cassegrain focus. Moreover, the spectrograph is placed inside a vacuum tank to reduce to a minimum any movement of the sensitive optical elements because of changes in pressure and temperature. The temperature of the critical components of HARPS itself is kept very stable, with less than 0.005 degree variation and the spectrum therefore drifts by less than 2 m/s per night. This is a very small value - 1 m/s corresponds to a displacement of the stellar spectrum on the CCD detector by about 1/1000 the size of one CCD pixel, which is equivalent to 15 nm or only about 150 silicon atoms! This drift is continuously measured by means of a Thorium spectrum which is simultaneously recorded on the detector with an accuracy of only 20 cm/s. PR Photo 08e/03 illustrates two fundamental issues: HARPS performs with an overall stability never before reached by any other astronomical spectrograph , and it is possible to measure any nightly drift with an accuracy never achieved before [1]. During this first commissioning period in February 2003, all instrument functions were tested, as well as the complete data flow system hard- and software. Already during the second test night, the data-reduction pipeline was used to obtain the extracted and wavelength-calibrated spectra in a completely automatic way. The first spectra obtained with HARPS will now allow the construction of templates needed to compute the radial velocities of different types of stars with the best efficiency. The second commissioning period in June will then be used to achieve the optimal performance of this new, very powerful instrument. Astronomers in the ESO community will have the opportunity to observe with HARPS from October 1, 2003. Other research opportunities opening This superb radial velocity machine will also play an important role for the study of stellar interiors by asteroseismology. Oscillation modes were recently discovered in the nearby solar-type star Alpha Centauri A from precise radial velocity measurements carried out with CORALIE (see ESO PR 15/01 ). HARPS is able to carry out similar measurements on fainter stars, thus reaching a much wider range of masses, spectral characteristics and ages. Michel Mayor , Director of the Geneva Observatory and co-discoverer of the first known exoplanet, is confident: "With HARPS operating so well already during the first test nights, there is every reason to believe that we shall soon see some breakthroughs in this field also" . The HARPS Consortium HARPS has been designed and built by an international consortium of research institutes, led by the Observatoire de Genève (Switzerland) and including Observatoire de Haute-Provence (France), Physikalisches Institut der Universität Bern (Switzerland), the Service d'Aeronomie (CNRS, France), as well as ESO La Silla and ESO Garching . The HARPS consortium has been granted 100 observing nights per year during a 5-year period at the ESO 3.6-m telescope to perform what promises to be the most ambitious systematic search for exoplanets so far implemented worldwide . 
The project team is directed by Michel Mayor (Principal Investigator), Didier Queloz (Mission Scientist), Francesco Pepe (Consortium Project Manager) and Gero Rupprecht (ESO representative).
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Barrett, Anthony C.
2003-01-01
Interacting agents that interleave planning and execution must reach consensus on their commitments to each other. In domains where agents have varying degrees of interaction and different constraints on communication and computation, agents will require different coordination protocols in order to efficiently reach consensus in real time. We briefly describe a largely unexplored class of real-time, distributed planning problems (inspired by interacting spacecraft missions), new challenges they pose, and a general approach to solving the problems. These problems involve self-interested agents that have infrequent communication but collaborate on joint activities. We describe a Shared Activity Coordination (SHAC) framework that provides a decentralized algorithm for negotiating the scheduling of shared activities in a dynamic environment, a soft, real-time approach to reaching consensus during execution with limited communication, and a foundation for customizing protocols for negotiating planner interactions. We apply SHAC to a realistic simulation of interacting Mars missions and illustrate the simplicity of protocol development.
Continual coordination through shared activities
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Barrett, Anthony C.
2003-01-01
Interacting agents that interleave planning and execution must reach consensus on their commitments to each other. In domains where agents have varying degrees of interaction and different constraints on communication and computation, agents will require different coordination protocols in order to efficiently reach consensus in real time. We briefly describe a largely unexplored class of realtime, distributed planning problems (inspired by interacting spacecraft missions), new challenges they pose, and a general approach to solving the problems. These problems involve self-interested agents that have infrequent communication but collaborate on joint activities. We describe a Shared Activity Coordination (SHAC) framework that provides a decentralized algorithm for negotiating the scheduling of shared activities over the lifetimes of separate missions, a soft, real-time approach to reaching consensus during execution with limited communication, and a foundation for customizing protocols for negotiating planner interactions. We apply SHAC to a realistic simulation of interacting Mars missions and illustrate the simplicity of protocol development.
Sympathetic Nerve Activity and Heart Rate Variability During Severe Hemorrhagic Shock in Sheep
2007-01-01
2000, Boebingen, Germany). 2.3. Experimental protocol. After a steady nerve signal was obtained (verified visually and by auscultation) the experimental...both visually and by auscultation. Automatic amplitude-based detection of sympathetic bursts was performed with WinCPRS software (Absolute Aliens Oy
INFORMATION MANAGEMENT AND RELATED QUALITY ASSURANCE FOR A LARGE SCALE, MULTI-SITE RESEARCH PROJECT
During the summer of 2000, as part of a U.S. Environmental Protection Agency study designed to improve microbial water quality monitoring protocols at public beaches, over 11,000 water samples were collected at five selected beaches across the country. At each beach, samples wer...
Usability and Instructional Design Heuristics for E-Learning Evaluation.
ERIC Educational Resources Information Center
Reeves, Thomas C.; Benson, Lisa; Elliott, Dean; Grant, Michael; Holschuh, Doug; Kim, Beaumie; Kim, Hyeonjin; Lauber, Erick; Loh, Sebastian
Heuristic evaluation is a methodology for investigating the usability of software originally developed by Nielsen (1993, 2000). Nielsen's protocol was modified and refined for evaluating e-learning programs by participants in a doctoral seminar held at the University of Georgia in 2001. The modifications primarily involved expanding Nielsen's…
Targeted Capture and High-Throughput Sequencing Using Molecular Inversion Probes (MIPs).
Cantsilieris, Stuart; Stessman, Holly A; Shendure, Jay; Eichler, Evan E
2017-01-01
Molecular inversion probes (MIPs) in combination with massively parallel DNA sequencing represent a versatile, yet economical tool for targeted sequencing of genomic DNA. Several thousand genomic targets can be selectively captured using long oligonucleotides containing unique targeting arms and universal linkers. The ability to append sequencing adaptors and sample-specific barcodes allows large-scale pooling and subsequent high-throughput sequencing at relatively low cost per sample. Here, we describe a "wet bench" protocol detailing the capture and subsequent sequencing of >2000 genomic targets from 192 samples, representative of a single lane on the Illumina HiSeq 2000 platform.
A Unified Steganalysis Framework
2013-04-01
contains more than 1800 images of different scenes. In the experiments, we used four JPEG-based steganography techniques: Outguess [13], F5 [16], model...also compressed these images again since some of the steganography methods are double compressing the images. Stego-images are generated by embedding...randomly chosen messages (in bits) into 1600 grayscale images using each of the four steganography techniques. A random message length was determined
Air Force Institute of Technology Research Report 2008
2009-05-01
Chapter) Instructor of the Year, March 2008. PETERSON, GILBERT L. Air Force Junior Scientist of the Year, September 2008. RAINES, RICHARD A...DIRECTORATE RODRIGUEZ, BENJAMIN M., II, JPEG Steganography Embedding Methods. AFIT/DEE/ENG/08-20. Faculty Advisor: Dr. Gilbert L. Peterson. Sponsor...Faculty Advisor: Dr. Gilbert L. Peterson. Sponsor: AFRL/RY. GIRARD, JASON A., Material Perturbations to Enhance Performance of the Theile Half-Width
Belaghzal, Houda; Dekker, Job; Gibcus, Johan H
2017-07-01
Chromosome conformation capture-based methods such as Hi-C have become mainstream techniques for the study of the 3D organization of genomes. These methods convert chromatin interactions reflecting topological chromatin structures into digital information (counts of pair-wise interactions). Here, we describe an updated protocol for Hi-C (Hi-C 2.0) that integrates recent improvements into a single protocol for efficient and high-resolution capture of chromatin interactions. This protocol combines chromatin digestion and frequently cutting enzymes to obtain kilobase (kb) resolution. It also includes steps to reduce random ligation and the generation of uninformative molecules, such as unligated ends, to improve the amount of valid intra-chromosomal read pairs. This protocol allows for obtaining information on conformational structures such as compartment and topologically associating domains, as well as high-resolution conformational features such as DNA loops. Copyright © 2017 Elsevier Inc. All rights reserved.
Recommended features of protocols for long-term ecological monitoring
Oakley, Karen L.; Boudreau, Susan L.; Humphrey, Sioux-Z
2001-01-01
In 1991, the National Park Service (NPS) selected seven parks to serve as prototypes for development of a long-term ecological monitoring program. Denali National Park and Preserve was one of the prototype parks selected. The principal focus of this national program was to detect and document resource changes and to understand the forces driving those changes. One of the major tasks of each prototype park was to develop monitoring protocols. In this paper, we discuss some lessons learned and what we believe to be the most important features of protocols. One of the many lessons we have learned is that monitoring protocols vary greatly in content and format. This variation leads to confusion about what information protocols should contain and how they should be formatted. Problems we have observed in existing protocols include (1) not providing enough detail, (2) omitting critical topics (such as data management), and (3) mixing explanation with instructions. Once written, protocols often sit on the shelf to collect dust, allowing methods changes to occur without being adequately considered, tested, or documented. Because a lengthy and costly research effort is often needed to develop protocols, a vision of what the final product should look like is helpful. Based on our involvement with the prototype monitoring program for Denali (Oakley and Boudreau 2000), we recommend key features of protocols, including a scheme for linking protocols to data in the data management system and for tracking protocol revisions. A protocol system is crucial for producing long-term data sets of known quality that meet program objectives.
Kennedy, Robert E.; Cohen, Warren B.; Kirschbaum, Alan A.; Haunreiter, Erik
2007-01-01
Background and Objectives: As part of the National Park Service's larger goal of developing long-term monitoring programs in response to the Natural Resource Challenge of 2000, the parks of the North Coast and Cascades Network (NCCN) have determined that monitoring of landscape dynamics is necessary to track ecosystem health (Weber and others, 2005). Landscape dynamics refer to a broad suite of ecological, geomorphological, and anthropogenic processes occurring across broad spatial scales. The NCCN has sought protocols that would leverage remote-sensing technologies to aid in monitoring landscape dynamics.
Bohari, Mohammed H; Sastry, G Narahari
2012-09-01
Efficient drug discovery programs can be designed by utilizing existing pools of knowledge from already approved drugs. One way this can be achieved is by repositioning drugs approved for some indications to newer indications. The complex of a drug with its target gives fundamental insight into molecular recognition and a clear understanding of the putative binding site. Five popular docking protocols, Glide, Gold, FlexX, Cdocker, and LigandFit, have been evaluated on a dataset of 199 FDA-approved drug-target complexes for their accuracy in predicting the experimental pose. Performance of all the protocols is assessed at default settings, with a root mean square deviation (RMSD) between the experimental ligand pose and the docked pose of less than 2.0 Å as the success criterion in predicting the pose. Glide (38.7%) is found to be the most accurate for the top-ranked pose and Cdocker (58.8%) for the top RMSD pose. Ligand flexibility is a major bottleneck in the failure of docking protocols to correctly predict the pose. The resolution of the crystal structure shows an inverse relationship with the performance of the docking protocols. All the protocols perform optimally when a balanced mix of hydrophilic and hydrophobic interactions or dominant hydrophilic interactions exists. Overall, across 16 different target classes, hydrophobic interactions dominate in the binding site; maximum success is achieved for all the docking protocols in the nuclear hormone receptor class, while performance for the rest of the classes varied with the individual protocol.
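For readers unfamiliar with the success criterion mentioned above, the minimal sketch below computes a heavy-atom RMSD between an experimental and a docked ligand pose and applies the 2.0 Å threshold. The coordinates are invented; no superposition is applied, since docked and crystallographic poses share the receptor's reference frame.

```python
# Sketch of the pose-accuracy criterion: heavy-atom RMSD with a 2.0 Å success cut-off.
import numpy as np

def pose_rmsd(xyz_ref, xyz_dock):
    """Root mean square deviation over matched atoms (N x 3 arrays, in Å)."""
    d = xyz_ref - xyz_dock
    return float(np.sqrt((d * d).sum(axis=1).mean()))

ref  = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.2, 1.2, 0.3]])   # made-up crystal pose
dock = ref + np.array([[0.3, -0.2, 0.1], [0.4, 0.1, -0.2], [0.2, 0.3, 0.1]])  # made-up docked pose
rmsd = pose_rmsd(ref, dock)
print(f"RMSD = {rmsd:.2f} Å ->", "success" if rmsd < 2.0 else "failure")
```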
NASA Technical Reports Server (NTRS)
James, Jeffrey M.; Sanderson, Penelope M.; Seidler, Karen S.
1990-01-01
As modern transport environments become increasingly complex, issues such as crew communication, interaction with automation, and workload management have become crucial. Much research is being focused on holistic aspects of social and cognitive behavior, such as the strategies used to handle workload, the flow of information, the scheduling of tasks, the verbal and non-verbal interactions between crew members. Traditional laboratory performance measures no longer sufficiently meet the needs of researchers addressing these issues. However observational techniques are better equipped to capture the type of data needed and to build models of the requisite level of sophistication. Presented here is SHAPA, an interactive software tool for performing both verbal and non-verbal protocol analysis. It has been developed with the idea of affording the researchers the closest possible degree of engagement with protocol data. The researcher can configure SHAPA to encode protocols using any theoretical framework or encoding vocabulary that is desired. SHAPA allows protocol analysis to be performed at any level of analysis, and it supplies a wide variety of tools for data aggregation, manipulation. The output generated by SHAPA can be used alone or in combination with other performance variables to get a rich picture of the influences on sequences of verbal or nonverbal behavior.
"First Light" for the VLT Interferometer
NASA Astrophysics Data System (ADS)
2001-03-01
Excellent Fringes From Bright Stars Prove VLTI Concept Summary Following the "First Light" for the fourth of the 8.2-m telescopes of the VLT Observatory on Paranal in September 2000, ESO scientists and engineers have just successfully accomplished the next major step of this large project. On March 17, 2001, "First Fringes" were obtained with the VLT Interferometer (VLTI) - this important event corresponds to the "First Light" for an astronomical telescope. At the VLTI, it occurred when the infrared light from the bright star Sirius was captured by two small telescopes and the two beams were successfully combined in the subterranean Interferometric Laboratory to form the typical pattern of dark and bright lines known as " interferometric fringes ". This proves the success of the robust VLTI concept, in particular of the "Delay Line". On the next night, the VLTI was used to perform a scientific measurement of the angular diameter of another comparatively bright star, Alpha Hydrae ( Alphard ); it was found to be 0.00929±0.00017 arcsec . This corresponds to the angular distance between the two headlights of a car as seen from a distance of approx. 35,000 kilometres. The excellent result was obtained during a series of observations, each lasting 2 minutes, and fully confirming the impressive predicted abilities of the VLTI . This first observation with the VLTI is a monumental technological achievement, especially in terms of accuracy and stability . It crucially depends on the proper combination and functioning of a large number of individual opto-mechnical and electronic elements. This includes the test telescopes that capture the starlight, continuous and extremely precise adjustment of the various mirrors that deflect the light beams as well as the automatic positioning and motion of the Delay Line carriages and, not least, the optimal tuning of the VLT INterferometer Commissionning Instrument (VINCI). These initial observations prove the overall concept for the VLTI . It was first envisaged in the early 1980's and has been continuously updated, as new technologies and materials became available during the intervening period. The present series of functional tests will go on for some time and involve many different configurations of the small telescopes and the instrument. It is then expected that the first combination of light beams from two of the VLT 8.2-m telescopes will take place in late 2001 . According to current plans, regular science observations will start from 2002, when the European and international astronomical community will have access to the full interferometric facility and the specially developed VLTI instrumentation now under construction. A wide range of scientific investigations will then become possible, from the search for planets around nearby stars, to the study of energetic processes at the cores of distant galaxies. With its superior angular resolution (image sharpness), the VLT is now beginning to open a new era in observational optical and infrared astronomy. The ambition of ESO is to make this type of observations available to all astronomers, not just the interferometry specialists. Video Clip 03/01 : Various video scenes related to the VLTI and the "First Fringes". PR Photo 10a/01 : "First Fringes" from the VLTI on the computer screen. PR Photo 10b/01 : Celebrating the VLTI "First Fringes" . PR Photo 10c/01 : Overview of the VLT Interferometer . PR Photo 10d/01 : Interferometric observations: Fringes from two stars of different angular size . 
PR Photo 10e/01 : Interferometric observations: Change of fringes with increasing baseline . PR Photo 10f/01 : Aerial view of the installations for the VLTI on the Paranal platform. PR Photo 10g/01 : Stations for the VLTI Auxiliary Telescopes. PR Photo 10h/01 : A test siderostat in place for observations. PR Photo 10i/01 : A test siderostat ( close-up ). PR Photo 10j/01 : One of the Delay Line carriages in the Interferometric Tunnel. PR Photo 10k/01 : The VINCI instrument in the Interferometric Laboratory. PR Photo 10l/01 : The VLTI Control Room . "First Fringes at the VLTI": A great moment! First light of the VLT Interferometer - PR Video Clip 03/01 [MPEG - x.xMb] ESO PR Video Clip 03/01 "First Light of the VLT Interferometer" (March 2001) (5025 frames/3:21x min) [MPEG Video+Audio; 144x112 pix; 6.9Mb] [MPEG Video+Audio; 320x240 pix; 13.7Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 03/01 provides a quick overview of the various elements of the VLT Interferometer and the important achievement of "First Fringes". The sequence is: General view of the Paranal observing platform. The "stations" for the VLTI Auxiliary Telescopes. Statement by the Manager of the VLT project, Massimo Tarenghi . One of the VLTI test telescopes ("siderostats") is being readied for observations. The Delay Line carriages in the Interferometric Tunnel move. The VINCI instrument in the Interferometric Laboratory is adjusted. Platform at sunset, before the observations. Astronomers and engineers prepare for the first observations in the VLTI Control Room in the Interferometric Building. "Interferometric Fringes" on the computer screen. Concluding statements by Andreas Glindemann , VLTI Project Leader, and Massimo Tarenghi . Distant view of the installations at Paranal at sunset (on March 1, 2001). The moment of "First Fringes" at the VLTI occurred in the evening of March 17, 2001 . The bright star Sirius was observed with two small telescopes ("siderostats"), specially constructed for this purpose during the early VLTI test phases. ESO PR Video Clip 03/01 includes related scenes and is based on a more comprehensive documentation, now available as ESO Video News Reel No. 12. The star was tracked by the two telescopes and the light beams were guided via the Delay Lines in the Interferometric Tunnel to the VINCI instrument [1] at the Interferometric Laboratory. The path lengths were continuously adjusted and it was possible to keep them stable to within 1 wavelength (2.2 µm, or 0.0022 mm) over a period of at least 2 min. Next night, several other stars were observed, enabling the ESO astronomers and engineers in the Control Room to obtain stable fringe patterns more routinely. With the special software developed, they also obtained 'on-line' an accurate measurement of the angular diameter of a star. This means that the VLTI delivered its first valid scientific result, already during this first test . First observation with the VLTI ESO PR Photo 10a/01 ESO PR Photo 10a/01 [Preview - JPEG: 400 x 315 pix - 96k] [Normal - JPEG: 800 x 630 pix - 256k] [Hi-Res - JPEG: 3000 x 2400 pix - 1.7k] ESO PR Photo 10b/01 ESO PR Photo 10b/01 [Preview - JPEG: 400 x 218 pix - 80k] [Normal - JPEG: 800 x 436 pix - 204k] Caption : PR Photo 10a/01 The "first fringes" obtained with the VLTI, as seen on the computer screen during the observation (upper right window). The fringe pattern arises when the light beams from two small telescopes are brought together in the VINCI instrument. 
The pattern itself contains information about the angular extension of the observed object, here the bright star Sirius . More details about the interpretation of this pattern is given in Appendix A. PR Photo 10b/01 : Celebrating the moment of "First Fringes" at the VLTI. At the VLTI control console (left to right): Pierre Kervella , Vincent Coudé du Foresto , Philippe Gitton , Andreas Glindemann , Massimo Tarenghi , Anders Wallander , Roberto Gilmozzi , Markus Schoeller and Bill Cotton . Bertrand Koehler was also present and took the photo. Technical information about PR Photo 10a/01 is available below. Following careful adjustment of all of the various components of the VLTI, the first attempt to perform a real observation was initiated during the night of March 16-17, 2001. "Fringes" were actually acquired during several seconds, leading to further optimization of the Delay Line optics. The next night, March 17-18, stable fringes were obtained on the bright stars Sirius and Lambda Velorum . The following night, the first scientifically valid results were obtained during a series of observations of six stars. One of these, Alpha Hydrae , was measured twice, with an interval of 15 minutes between the 2-min integrations. The measured diameters were highly consistent, with a mean of 0.00929±0.00017 arcsec. This new VLTI measurement is in full agreement with indirect (photometric) estimates of about 0.009 arcsec. The overall performance of the VLTI was excellent already in this early stage. For example, the interferometric efficiency ('contrast' on a stellar point source) was measured to be 87% and stable to within 1.3% over several days. This performance will be further improved following additional tuning. The entire operation of the VLTI was performed remotely from the Control Room, as this will also be the case in the future. Another great advantage of the VLTI concept is the possibility to analyse the data at the control console. This is one of the key features of the VLTI that contributes to make it a very user-friendly facility. Overview of the VLT Interferometer ESO PR Photo 10c/01 ESO PR Photo 10c/01 [Preview - JPEG: 400 x 410 pix - 60k] [Normal - JPEG: 800 x 820 pix - 124k] [Hi-Res - JPEG: 3000 x 3074 pix - 680k] Caption : PR Photo 10c/01 Overview of the VLT Interferometer, with the various elements indicated. In this case, the light beams from two of the 8.2-m telescopes are combined. The VINCI instrument that was used for the present test, is located at the common focus in the Interferometric Laboratory. The interferometric principle is based on the phase-stable combination of light beams from two or more telescopes at a common interferometric focus , cf. PR Photo 10c/01 . The light from a celestial object is captured simultaneously by two or more telescopes. For the first tests, two "siderostats" with 40-cm aperture are used; later on, two or more 8.2-m Unit Telescopes will be used, as well as several moving 1.8-m Auxiliary Telescopes (ATs), now under construction at the AMOS factory in Belgium. Via several mirrors and through the Delay Line, that continuously compensates for changes in the path length introduced by the Earth's rotation as well as by other effects (e.g., atmospheric turbulence), the light beams are guided towards the interferometric instrument VINCI at the common interferometric focus. It is located in the subterranean Interferometric Laboratory , at the centre of the observing platform on the top of the Paranal mountain. 
Photos of some of the VLTI elements are shown in Appendix B. The interferometric technique allows achieving images, as sharp as those of a telescope with a diameter equivalent to the largest distance between the telescopes in the interferometer. For the VLTI, this distance is about 200 metres, resulting in a resolution of 0.001 arcsec in the near-infrared spectral region (at 1 µm wavelength), or 0.0005 arcsec in visual light (500 nm). The latter measure corresponds to about 2 metres on the surface of the Moon. The VLTI instruments The installation and putting into operation of the VLTI at Paranal is a gradual process that will take several years. While the present "First Fringe" event is of crucial importance, the full potential of the VLTI will only be reached some years from now. This will happen with the successive installation of a number of highly specialised instruments, like the near-infrared/red VLTI focal instrument (AMBER) , the Mid-Infrared interferometric instrument for the VLTI (MIDI) and the instrument for Phase-Referenced Imaging and Microarcsecond Astrometry (PRIMA). Already next year, the three 1.8-m Auxiliary Telescopes that will be fully devoted to interferometric observations, will arrive at Paranal. Ultimately, it will be possible to combine the light beams from all the large and small telescopes. Great research promises Together, they will be able to achieve an unprecedented image sharpness (angular resolution) in the optical/infrared wavelength region, and thanks to the great light-collecting ability of the VLT Unit Telescopes, also for observations of quite faint objects. This will make it possible to carry out many different front-line scientific studies, beyond the reach of other instruments. There are many promising research fields that will profit from VLTI observations, of which the following serve as particularly interesting examples: * The structure and composition of the outer solar system, by studies of individual moons, Trans-Neptunian Objects and comets. * The direct detection and imaging of exoplanets in orbit around other stars. * The formation of star clusters and their evolution, from images and spectra of very young objects. * Direct views of the surface structures of stars other than the Sun. * Measuring accurate distances to the most prominent "stepping stones" in the extragalactic distance scale, e.g., galactic Cepheid stars, the Large Magellanic Cloud and globular clusters. * Direct investigations of the physical mechanisms responsible for stellar pulsation, mass loss and dust formation in stellar envelopes and evolution to the Planetary Nebula and White Dwarf stages. * Close-up studies of interacting binary stars to better understand their mass transfer mechanisms and evolution. * Studies of the structure of the circum-stellar environment of stellar black holes and neutron stars. * The evolution of the expanding shells of unstable stars like novae and supernovae and their interaction with the interstellar medium. * Studying the structure and evolution of stellar and galactic nuclear accretion disks and the associated features, e.g., jets and dust tori. * With images and spectra of the innermost regions of the Milky Way galaxy, to investigate the nature of the nucleus surrounding the central black hole. Clearly, there will be no lack of opportunities for trailblazing research with the VLTI. The "First Fringes" constitute a very important milestone in this direction. Appendix A: How does it work? 
ESO PR Photo 10d/01 ESO PR Photo 10d/01 [Preview - JPEG: 400 x 290 pix - 24k] [Normal - JPEG: 800 x 579 pix - 68k] [Hi-Res - JPEG: 3000 x 2170 pix - 412k] ESO PR Photo 10e/01 ESO PR Photo 10e/01 [Preview - JPEG: 400 x 219 pix - 32k] [Normal - JPEG: 800 x 438 pix - 64k] [Hi-Res - JPEG: 3000 x 1644 pix - 336k] Caption : PR Photo 10d/01 demonstrates in a schematic way, how the images of two stars of different angular size (left) will look like, with a single telescope (middle) and with an interferometer like the VLTI (right). Whereas there is little difference with one telescope, the fringe patterns at the interferometer are quite different. Conversely, the appearance of this pattern provides a measure of the star's angular diameter. In PR Photo 10e/01 , interferometric observations of a single star are shown, as the distance between the two telescopes is gradually increased. The observed pattern at the focal plane clearly changes, and the "fringes" disappear completely. See the text for more details. The principle behind interferometry is the "coherent optical interference" of light beams from two or more telescopes, due to the wave nature of light. The above illustrations serve to explain what the astronomers observe in the simplest case, that of a single star with a certain angular size, and how this can be translated into a measurement of this size. In PR Photo 10d/01 , the difference between two stars of different diameter is illustrated. While the image of the smaller star displays strong interference effects (i.e., a well visible fringe pattern), those of the larger star are much less prominent. The "visibility" of the fringes is therefore a direct measure of the size; the stronger they appear (the "larger the contrast"), the smaller is the star. If the distance between the two telescopes is increased when a particular star is observed ( PR Photo 10e/01 ), then the fringes become less and less prominent. At a certain distance, the fringe pattern disppears completely. This distance is directly related to the angular size of the star. Appendix B: Elements of the VLT Interferometer Contrary to other large astronomical telescopes, the VLT was designed from the beginning with the use of interferometry as a major goal . For this reason, the four 8.2-m Unit Telescopes were positioned in a quasi-trapezoidal configuration and several moving 1.8-m telescopes were included into the overall VLT concept, cf. PR Photo 10f/01 . The photos below show some of the key elements of the VLT Interferometer during the present observations. They include the siderostats , 40-cm telescopes that serve to capture the light from a comparatively bright star ( Photos 10g-i/01 ), the Delay Lines ( Photo 10j/01 ), and the VINCI instrument ( Photo 10k/01) Earlier information about the development and construction of the individual elements of the VLTI is available as ESO PR 04/98 , ESO PR 14/00 and ESO PR Photos 26a-e/00.
Password-Only Authenticated Three-Party Key Exchange with Provable Security in the Standard Model
Nam, Junghyun; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon
2014-01-01
Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks. PMID:24977229
RICE bounds on cosmogenic neutrino fluxes and interactions
NASA Astrophysics Data System (ADS)
Hussain, Shahid
2005-04-01
Assuming standard model interactions, we calculate shower rates induced by cosmogenic neutrinos in ice, and we bound the cosmogenic neutrino fluxes using RICE 2000-2004 results. Next, we assume new interactions due to extra-dimensional, low-scale gravity (i.e., black hole production and decay; graviton-mediated deep inelastic scattering) and calculate enhanced shower rates induced by cosmogenic neutrinos in ice. With the help of RICE 2000-2004 results, we survey bounds on low-scale gravity parameters for a range of cosmogenic neutrino flux models.
Creating COMFORT: A Communication-Based Model for Breaking Bad News
ERIC Educational Resources Information Center
Villagran, Melinda; Goldsmith, Joy; Wittenberg-Lyles, Elaine; Baldwin, Paula
2010-01-01
This study builds upon existing protocols for breaking bad news (BBN), and offers an interaction-based approach to communicating comfort to patients and their families. The goal was to analyze medical students' (N = 21) videotaped standardized patient BBN interactions after completing an instructional unit on a commonly used BBN protocol, commonly…
Lee, Wing-Sham; Rudd, Jason J; Kanyuka, Kostya
2015-06-01
Virus-induced gene silencing (VIGS) has emerged as a powerful reverse genetic technology in plants supplementary to stable transgenic RNAi and, in certain species, as a viable alternative approach for gene functional analysis. The RNA virus Barley stripe mosaic virus (BSMV) was developed as a VIGS vector in the early 2000s and since then it has been used to study the function of wheat genes. Several variants of BSMV vectors are available, with some requiring in vitro transcription of infectious viral RNA, while others rely on in planta production of viral RNA from DNA-based vectors delivered to plant cells either by particle bombardment or Agrobacterium tumefaciens. We adapted the latest generation of binary BSMV VIGS vectors for the identification and study of wheat genes of interest involved in interactions with Zymoseptoria tritici and here present detailed and the most up-to-date protocols. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Clark, Daniel L; Connors, Bret A; Handa, Rajash K; Evan, Andrew P
2011-12-01
The purpose of this study was to determine if pretreatment of porcine kidneys with low-energy shock waves (SWs) prior to delivery of a clinical dose of 2,000 SWs reduces or prevents shock wave lithotripsy (SWL)-induced acute oxidative stress and inflammation in the treated kidney. Pigs (7-8 weeks old) received 2,000 SWs at 24 kV (120 SW/min) with or without pretreatment with 100 SWs at 12 kV/2 Hz to the lower pole calyx of one kidney using the HM3. Four hours post-treatment, selected samples of renal tissue were frozen for analysis of cytokine, interleukin-6 (IL-6), and stress response protein, heme oxygenase-1 (HO-1). Urine samples were taken before and after treatment for analysis of tumor necrosis factor-α (TNF-α). Treatment with 2,000 SWs with or without pretreatment caused a statistically significant elevation of HO-1 and IL-6 in the renal medulla localized to the focal zone of the lithotripter. However, the increase in HO-1 and IL-6 was significantly reduced using the pretreatment protocol compared to no pretreatment. Urinary excretion of TNF-α increased significantly (p < 0.05) from baseline for pigs receiving 2,000 SWs alone; however, this effect was completely abolished with the pretreatment protocol. We conclude that pretreatment of the kidney with a low dose of low-energy SWs prior to delivery of a clinical dose of SWs reduces, but does not completely prevent, SWL-induced acute renal oxidative stress and inflammation.
HCFC-142b emissions in China: An inventory for 2000 to 2050 basing on bottom-up and top-down methods
NASA Astrophysics Data System (ADS)
Han, Jiarui; Li, Li; Su, Shenshen; Hu, Jianxin; Wu, Jing; Wu, Yusheng; Fang, Xuekun
2014-05-01
1-Chloro-1,1-difluoroethane (HCFC-142b) is both an ozone-depleting substance included in the Montreal Protocol on Substances that Deplete the Ozone Layer (Montreal Protocol) and a potent greenhouse gas with a high global warming potential. As one of the major HCFC-142b consuming and producing countries in the world, China's control actions will contribute to both mitigating climate change and protecting the ozone layer. Estimating China's HCFC-142b emissions is a crucial step toward understanding their status, drawing up a phase-out plan, and evaluating mitigation effects. Both bottom-up and top-down methods were adopted in this research to estimate HCFC-142b emissions from China, and the results of the two methods were compared to test their effectiveness and validate the reliability of the inventory. First, a national bottom-up emission inventory of HCFC-142b for China during 2000-2012 was established based on the 2006 IPCC Guidelines for National Greenhouse Gas Inventories and the Montreal Protocol, showing that, in contrast to the downward trend reported in existing results, HCFC-142b emissions kept increasing from 0.1 kt/yr in 2000 to a peak of 14.4 kt/yr in 2012. Meanwhile, a top-down emission estimate was also developed using the interspecies correlation method. By correlating atmospheric mixing ratio data of HCFC-142b with the reference substance HCFC-22 sampled in four representative cities (Beijing, Hangzhou, Lanzhou, and Guangzhou, for northern, eastern, western, and southern China, respectively), China's HCFC-142b emission in 2012 was calculated as 16.24 (13.90-18.58) kt, equivalent to 1.06 kt ODP and 37 Tg CO2-eq, accounting for 9.78% (ODP) of total HCFC emissions in China or 30.5% of global HCFC-142b emissions. This result is 12.7% higher than the bottom-up inventory; possible explanations are discussed. The consistency of the two results lends credibility to the effectiveness of the methods and the reliability of the inventory. Finally, future HCFC-142b emissions were projected to 2050. Emissions may increase continuously from 14.9 kt/yr to 97.2 kt/yr under a business-as-usual (BAU) scenario, whereas a 90% reduction would be obtained by fulfilling the Montreal Protocol, namely an accumulated mitigation of 1578 kt from 2013 to 2050, equal to 103 kt ODP and 3504 Tg CO2 emissions. Therefore, China will contribute substantially to worldwide ozone protection and global warming mitigation by successfully phasing out HCFC-142b according to the Montreal Protocol schedule.
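The interspecies (tracer) correlation idea behind the top-down estimate can be sketched as follows: the HCFC-142b emission is obtained by scaling a reference species' emission by the fitted slope of their co-varying mixing-ratio enhancements and the ratio of molar masses. The mixing-ratio samples and the HCFC-22 emission figure below are hypothetical; only the general form of the calculation is implied by the abstract.

```python
# Rough sketch of a top-down emission estimate via interspecies correlation.
import numpy as np

M_142B, M_22 = 100.5, 86.47          # approximate molar masses, g/mol

def top_down_emission(x_142b_ppt, x_22_ppt, e_22_kt):
    """Estimate E_142b = slope * (M_142b / M_22) * E_22 from co-located enhancements."""
    slope = np.polyfit(x_22_ppt, x_142b_ppt, 1)[0]   # d(HCFC-142b)/d(HCFC-22), mole basis
    return slope * (M_142B / M_22) * e_22_kt

# Hypothetical mixing-ratio enhancements above background at one site (ppt):
hcfc22  = np.array([5.0, 12.0, 20.0, 31.0, 44.0])
hcfc142 = np.array([0.4, 1.1, 1.8, 2.9, 4.0])
# e_22_kt is a placeholder reference emission, not a value from the paper.
print("estimated HCFC-142b emission (kt/yr):",
      round(top_down_emission(hcfc142, hcfc22, e_22_kt=150.0), 1))
```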
Kasenda, Benjamin; von Elm, Erik; You, John J; Blümle, Anette; Tomonaga, Yuki; Saccilotto, Ramon; Amstutz, Alain; Bengough, Theresa; Meerpohl, Joerg J; Stegert, Mihaela; Olu, Kelechi K; Tikkinen, Kari A O; Neumann, Ignacio; Carrasco-Labra, Alonso; Faulhaber, Markus; Mulla, Sohail M; Mertz, Dominik; Akl, Elie A; Bassler, Dirk; Busse, Jason W; Ferreira-González, Ignacio; Lamontagne, Francois; Nordmann, Alain; Gloy, Viktoria; Raatz, Heike; Moja, Lorenzo; Ebrahim, Shanil; Schandelmaier, Stefan; Sun, Xin; Vandvik, Per O; Johnston, Bradley C; Walter, Martin A; Burnand, Bernard; Schwenkglenks, Matthias; Hemkens, Lars G; Bucher, Heiner C; Guyatt, Gordon H; Briel, Matthias
2016-06-01
Little is known about publication agreements between industry and academic investigators in trial protocols and the consistency of these agreements with corresponding statements in publications. We aimed to investigate (i) the existence and types of publication agreements in trial protocols, (ii) the completeness and consistency of the reporting of these agreements in subsequent publications, and (iii) the frequency of co-authorship by industry employees. We used a retrospective cohort of randomized clinical trials (RCTs) based on archived protocols approved by six research ethics committees between 13 January 2000 and 25 November 2003. Only RCTs with industry involvement were eligible. We investigated the documentation of publication agreements in RCT protocols and statements in corresponding journal publications. Of 647 eligible RCT protocols, 456 (70.5%) mentioned an agreement regarding publication of results. Of these 456, 393 (86.2%) documented an industry partner's right to disapprove or at least review proposed manuscripts; 39 (8.6%) agreements were without constraints of publication. The remaining 24 (5.3%) protocols referred to separate agreement documents not accessible to us. Of those 432 protocols with an accessible publication agreement, 268 (62.0%) trials were published. Most agreements documented in the protocol were not reported in the subsequent publication (197/268 [73.5%]). Of 71 agreements reported in publications, 52 (73.2%) were concordant with those documented in the protocol. In 14 of 37 (37.8%) publications in which statements suggested unrestricted publication rights, at least one co-author was an industry employee. In 25 protocol-publication pairs, author statements in publications suggested no constraints, but 18 corresponding protocols documented restricting agreements. Publication agreements constraining academic authors' independence are common. Journal articles seldom report on publication agreements, and, if they do, statements can be discrepant with the trial protocol.
NASA Astrophysics Data System (ADS)
2004-02-01
Finland will become the eleventh member state of the European Southern Observatory (ESO) [1]. Today, during a ceremony at the ESO Headquarters in Garching (Germany), a corresponding Agreement was signed by the Finnish Minister of Education and Science, Ms. Tuula Haatainen and the ESO Director General, Dr. Catherine Cesarsky, in the presence of other high officials from Finland and the ESO member states (see Video Clip 02/04 below). Following subsequent ratification by the Finnish Parliament of the ESO Convention and the associated protocols [2], it is foreseen that Finland will formally join ESO on July 1, 2004. Uniting European Astronomy ESO PR Photo 03/04 ESO PR Photo 03/04 Caption : Signing of the Finland-ESO Agreement on February 9, 2004, at the ESO Headquarters in Garching (Germany). At the table, the ESO Director General, Dr. Catherine Cesarsky, and the Finnish Minister of Education and Science, Ms. Tuula Haatainen . [Preview - JPEG: 400 x 499 pix - 52k] [Normal - JPEG: 800 x 997 pix - 720k] [Full Res - JPEG: 2126 x 2649 pix - 2.9M] The Finnish Minister of Education and Science, Ms. Tuula Haatainen, began her speech with these words: "On behalf of Finland, I am happy and proud that we are now joining the European Southern Observatory, one of the most successful megaprojects of European science. ESO is an excellent example of the potential of European cooperation in science, and along with the ALMA project, more and more of global cooperation as well." She also mentioned that besides science ESO offers many technological challenges and opportunities. And she added: "In Finland we will try to promote also technological and industrial cooperation with ESO, and we hope that the ESO side will help us to create good working relations. I am confident that Finland's membership in ESO will be beneficial to both sides." Dr. Catherine Cesarsky, ESO Director General, warmly welcomed the Finnish intention to join ESO. "With the accession of their country to ESO, Finnish astronomers, renowned for their expertise in many frontline areas, will have new, exciting opportunities for working on research programmes at the frontiers of modern astrophysics." "This is indeed the right time to join ESO", she added. "The four 8.2-m VLT Unit Telescopes with their many first-class instruments are working with unsurpassed efficiency at Paranal, probing the near and distant Universe and providing European astronomers with a goldmine of unique astronomical data. The implementation of the VLT Interferometer is progressing well and last year we entered into the construction phase of the intercontinental millimetre- and submillimetre-band Atacama Large Millimeter Array. And the continued design studies for gigantic optical/infrared telescopes like OWL are progressing fast. Wonderful horizons are indeed opening for the coming generations of European astronomers!" She was seconded by the President of the ESO Council, Professor Piet van der Kruit, "This is a most important step in the continuing evolution of ESO. By having Finland become a member of ESO, we welcome a country that has put in place a highly efficient and competitive innovation system with one of the fastest growths of research investment in the EU area. I have no doubt that the Finnish astronomers will not only make the best scientific use of ESO facilities but that they will also greatly contribute through their high quality R&D to technological developments which will benefit the whole ESO community. 
" Notes [1]: Current ESO member countries are Belgium, Denmark, France, Germany, Italy, the Netherlands, Portugal, Sweden, Switzerland and the United Kindgdom. [2]: The ESO Convention was established in 1962 and specifies the goals of ESO and the means to achieve these, e.g., "The Governments of the States parties to this convention... desirous of jointly creating an observatory equipped with powerful instruments in the Southern hemisphere and accordingly promoting and organizing co-operation in astronomical research..." (from the Preamble to the ESO Convention).
NASA Astrophysics Data System (ADS)
Shpenst, V. A.; Vasiliev, B. Y.; Kalashnikov, O. V.; Oleynikova, A. M.
2018-05-01
The article reviews various state-of-the-art industrial data transfer protocols, e.g. Modbus, Profibus, Industrial Ethernet and CAN. Their pros and cons are analyzed, and conclusions are drawn on the advisability of using each protocol. It is shown that, for effective telecommunication between microprocessor devices of different types within multi-motor electric drives, it is advisable to use high-level CAN protocols such as CANopen and DeviceNet.
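As a concrete illustration of the frame-level detail these fieldbus protocols involve, the sketch below builds a Modbus RTU request and appends the standard CRC-16/MODBUS checksum. The slave address, register range and the Python implementation are illustrative assumptions, not taken from the article.

```python
# Illustrative sketch (not from the article): a Modbus RTU "read holding registers"
# request framed with the standard CRC-16/MODBUS checksum.
import struct

def crc16_modbus(frame: bytes) -> int:
    """CRC-16 with reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

# Hypothetical request: slave 0x11, function 0x03, start register 0x006B, count 3.
pdu = bytes([0x11, 0x03, 0x00, 0x6B, 0x00, 0x03])
frame = pdu + struct.pack("<H", crc16_modbus(pdu))  # CRC appended little-endian
print(frame.hex(" "))  # prints the framed request including its two CRC bytes
```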
Social Protocols for Agile Virtual Teams
NASA Astrophysics Data System (ADS)
Picard, Willy
Despite many works on collaborative networked organizations (CNOs), CSCW, groupware, workflow systems and social networks, computer support for virtual teams is still insufficient, especially support for agility, i.e. the capability of virtual team members to rapidly and cost efficiently adapt the way they interact to changes. In this paper, requirements for computer support for agile virtual teams are presented. Next, an extension of the concept of social protocol is proposed as a novel model supporting agile interactions within virtual teams. The extended concept of social protocol consists of an extended social network and a workflow model.
ERIC Educational Resources Information Center
Desoete, Annemie
2008-01-01
Third grade elementary school children solved tests on mathematical reasoning and numerical facility. Metacognitive skillfulness was assessed through think aloud protocols, prospective and retrospective child ratings, teacher questionnaires, calibration measures and EPA2000. In our dataset metacognition has a lot in common with intelligence, but…
An investigation of condition mapping and plot proportion calculation issues
Demetrios Gatziolis
2007-01-01
A systematic examination of Forest Inventory and Analysis condition data collected under the annual inventory protocol in the Pacific Northwest region between 2000 and 2004 revealed the presence of errors both in condition topology and plot proportion computations. When plots were compiled to generate population estimates, proportion errors were found to cause...
A remote sensing protocol for identifying rangelands with degraded productive capacity
Matthew C. Reeves; L. Scott Bagget
2014-01-01
Rangeland degradation is a growing problem throughout the world. An assessment process for comparing the trend and state of vegetation productivity to objectively derived reference conditions was developed. Vegetation productivity was estimated from 2000 to 2012 using annual maximum Normalized Difference Vegetation Index (NDVI) from the MODIS satellite platform. Each...
Dynamic Protocol Reverse Engineering: A Grammatical Inference Approach
2008-03-01
Vascular plant and vertebrate inventories in Sonoran Desert National Parks
Cecilia A. Schmidt; Eric W. Albrecht; Brian F. Powell; William L. Halvorson
2005-01-01
Biological inventories are important for natural resource management and interpretation, and can form a foundation for long-term monitoring programs. We inventoried vascular plants and vertebrates in nine National Parks in southern Arizona and western New Mexico from 2000 to 2004 using repeatable designs, commonly accepted methods, and standardized protocols. At...
Dörge, Petra; Meissner, Barbara; Zimmermann, Martin; Möricke, Anja; Schrauder, André; Bouquin, Jean-Pierre; Schewe, Denis; Harbott, Jochen; Teigler-Schlegel, Andrea; Ratei, Richard; Ludwig, Wolf-Dieter; Koehler, Rolf; Bartram, Claus R; Schrappe, Martin; Stanulla, Martin; Cario, Gunnar
2013-03-01
IKZF1 gene deletions have been associated with a poor outcome in pediatric precursor B-cell acute lymphoblastic leukemia. To assess the prognostic relevance of IKZF1 deletions for patients treated on Berlin-Frankfurt-Münster Study Group trial ALL-BFM 2000, we screened 694 diagnostic acute lymphoblastic leukemia samples by Multiplex Ligation-dependent Probe Amplification. Patients whose leukemic cells bore IKZF1 deletions had a lower 5-year event-free survival (0.69±0.05 vs. 0.85±0.01; P<0.0001) compared to those without, mainly due to a higher cumulative incidence of relapses (0.21±0.04 vs. 0.10±0.01; P=0.001). Although IKZF1 deletions were significantly associated with the P2RY8-CRLF2 rearrangement, their prognostic value was found to be independent from this association. Thus, IKZF1 deletion is an independent predictor of treatment outcome and a strong candidate marker for integration in future treatment stratification strategies on ALL-BFM protocols. Clinicaltrials.gov identifier: NCT00430118.
Sharma, Manish S; Vohra, Ashma; Thomas, Ponnamma; Kapil, Arti; Suri, Ashish; Chandra, P Sarat; Kale, Shashank S; Mahapatra, Ashok K; Sharma, Bhawani S
2009-06-01
Although the use of prophylactic antibiotics has been shown to significantly decrease the incidence of meningitis after neurosurgery, its effect on extra-neurosurgical-site infections has not been documented. The authors explore the effect of risk-stratified, protocol-based perioperative antibiotic prophylaxis on nosocomial infections in an audit of 31 927 consecutive routine and emergency neurosurgical procedures. Infection rates were objectively quantified by bacteriological positivity on culture of cerebrospinal fluid (CSF), blood, urine, wound swab, and tracheal aspirate samples derived from patients with clinicoradiological features of sepsis. Infections were recorded as pulmonary, wound, blood, CSF, and urinary. The total numbers of hospital-acquired infections and the number of patients infected were also recorded. A protocol of perioperative antibiotic prophylaxis of variable duration stratified by patient risk factors was introduced in 2000, which was chosen as the historical turning point. The chi test was used to compare infection rates. A P value of <0.05 was considered significant. A total of 31 927 procedures were performed during the study period 1994-2006; 5171 culture-proven hospital-acquired infections (16.2%) developed in 3686 patients (11.6%). The most common infections were pulmonary (4.4%), followed by bloodstream (3.5%), urinary (3.0%), CSF (2.9%), and wound (2.5%). The incidence of positive tracheal, CSF, blood, wound, and urine cultures decreased significantly after 2000. Chemoprophylaxis, however, altered the prevalent bacterial flora and may have led to the emergence of methicillin-resistant Staphylococcus aureus. A risk-stratified protocol of perioperative antibiotic prophylaxis may help to significantly decrease not only neurosurgical, but also extra-neurosurgical-site body fluid bacteriological culture positivity.
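The before/after comparison of infection rates in an audit of this kind rests on a chi-square test on a contingency table. A minimal sketch follows, assuming SciPy is available; the counts are invented for illustration and are not the audit's data.

```python
# Hedged sketch: chi-square comparison of an infection rate before vs. after a
# protocol change, as in the audit described above. All counts are hypothetical.
from scipy.stats import chi2_contingency

#                infected, not infected
before_2000 = [     450,      9550]   # hypothetical pre-protocol procedures
after_2000  = [     300,     11700]   # hypothetical post-protocol procedures

chi2, p, dof, expected = chi2_contingency([before_2000, after_2000])
print(f"chi2 = {chi2:.2f}, p = {p:.3g}")  # p < 0.05 would be reported as significant
```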
Nestor, Sean M; Gibson, Erin; Gao, Fu-Qiang; Kiss, Alex; Black, Sandra E
2013-02-01
Hippocampal volumetry derived from structural MRI is increasingly used to delineate regions of interest for functional measurements, assess efficacy in therapeutic trials of Alzheimer's disease (AD) and has been endorsed by the new AD diagnostic guidelines as a radiological marker of disease progression. Unfortunately, morphological heterogeneity in AD can prevent accurate demarcation of the hippocampus. Recent developments in automated volumetry commonly use multi-template fusion driven by expert manual labels, enabling highly accurate and reproducible segmentation in disease and healthy subjects. However, there are several protocols to define the hippocampus anatomically in vivo, and the method used to generate atlases may impact automatic accuracy and sensitivity - particularly in pathologically heterogeneous samples. Here we report a fully automated segmentation technique that provides a robust platform to directly evaluate both technical and biomarker performance in AD among anatomically unique labeling protocols. For the first time we test head-to-head the performance of five common hippocampal labeling protocols for multi-atlas based segmentation, using both the Sunnybrook Longitudinal Dementia Study and the entire Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) baseline and 24-month dataset. We based these atlas libraries on the protocols of (Haller et al., 1997; Killiany et al., 1993; Malykhin et al., 2007; Pantel et al., 2000; Pruessner et al., 2000), and a single operator performed all manual tracings to generate de facto "ground truth" labels. All methods distinguished between normal elders, mild cognitive impairment (MCI), and AD in the expected directions, and showed comparable correlations with measures of episodic memory performance. Only more inclusive protocols distinguished between stable MCI and MCI-to-AD converters, and had slightly better associations with episodic memory. Moreover, we demonstrate that protocols including more posterior anatomy and dorsal white matter compartments furnish the best voxel-overlap accuracies (Dice Similarity Coefficient=0.87-0.89), compared to expert manual tracings, and achieve the smallest sample sizes required to power clinical trials in MCI and AD. The greatest distribution of errors was localized to the caudal hippocampus and the alveus-fimbria compartment when these regions were excluded. The definition of the medial body did not significantly alter accuracy among more comprehensive protocols. Voxel-overlap accuracies between automatic and manual labels were lower for the more pathologically heterogeneous Sunnybrook study in comparison to the ADNI-1 sample. Finally, accuracy among protocols appears to significantly differ the most in AD subjects compared to MCI and normal elders. Together, these results suggest that selection of a candidate protocol for fully automatic multi-template based segmentation in AD can influence both segmentation accuracy when compared to expert manual labels and performance as a biomarker in MCI and AD. Copyright © 2012 Elsevier Inc. All rights reserved.
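For readers unfamiliar with the voxel-overlap metric quoted above, the Dice Similarity Coefficient is simply 2|A ∩ B| / (|A| + |B|). A minimal sketch, using synthetic masks rather than real hippocampal labels:

```python
# Dice Similarity Coefficient between an automatic and a manual binary label volume.
import numpy as np

def dice(auto_label: np.ndarray, manual_label: np.ndarray) -> float:
    a, b = auto_label.astype(bool), manual_label.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 3D masks; real use would load label volumes (e.g. NIfTI files via nibabel).
rng = np.random.default_rng(0)
auto = rng.random((64, 64, 64)) > 0.7
manual = rng.random((64, 64, 64)) > 0.7
print(f"DSC = {dice(auto, manual):.3f}")
```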
Capacity is the Wrong Paradigm
2002-01-01
short, steganography values detection over robustness, whereas watermarking values robustness over detection.) Hiding techniques for JPEG images ...world length of the code. D: If the algorithm is known, this method is trivially detectable if we are sending images (with no encryption). If we are...implications of the work of Chaitin and Kolmogorov on algorithmic complexity [5]. We have also concentrated on screen images in this paper and have not
Aladin Lite: Lightweight sky atlas for browsers
NASA Astrophysics Data System (ADS)
Boch, Thomas
2014-02-01
Aladin Lite is a lightweight version of the Aladin tool, running in the browser and geared towards simple visualization of a sky region. It allows visualization of image surveys (JPEG multi-resolution HEALPix all-sky surveys) and permits superimposing tabular (VOTable) and footprint (STC-S) data. Aladin Lite is powered by HTML5 canvas technology, is easily embeddable on any web page, and can also be controlled through a JavaScript API.
Cloud Intrusion Detection and Repair (CIDAR)
2016-02-01
form for VLC, Swftools-png2swf, Swftools-jpeg2swf, Dillo and GIMP. The superscript indicates the bit width of each expression atom. “sext(v, w... challenges in input rectification is the need to deal with nested fields. In general, input formats are in tree structures containing arbitrarily...length indicator constraints is challenging, because of the presence of nested fields in hierarchical input format. For example, an integer field may
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
2006-02-01
Security monitoring cameras have become increasingly widespread in recent years. However, it is normally difficult to know when and where we are being monitored by these cameras, or how the recorded images are stored and used. How to protect privacy in the recorded images is therefore a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that takes privacy protection into account. We state requirements for monitoring systems in this framework and propose a possible implementation that satisfies them. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. The original objects are then encrypted and watermarked into the image containing the "invisible" objects, which is coded with the JPEG standard. Consequently, the image decoded by a normal JPEG viewer contains the objects only in unrecognizable or invisible form. We also introduce a so-called "special viewer" that decrypts and displays the original objects. This special viewer can be used by a limited set of users when necessary, for example for crime investigation, and allows the user to choose which objects are decoded and displayed. Moreover, the proposed system supports real-time processing, since no future frame is needed to generate a bitstream.
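A much-simplified analogue of the pipeline described above can be sketched as follows: it only blurs a region and stores an encrypted copy of the original pixels in a side file, whereas the paper embeds the encrypted objects as a watermark inside the JPEG itself. The file names, region box and use of the Fernet cipher are illustrative assumptions.

```python
# Simplified, hypothetical analogue of the privacy-protection scheme described above.
from io import BytesIO
from PIL import Image, ImageFilter
from cryptography.fernet import Fernet

frame = Image.open("camera_frame.jpg")          # hypothetical input frame
box = (100, 50, 220, 200)                       # hypothetical region containing a person

# 1. Encrypt the original region so an authorised "special viewer" could restore it.
buf = BytesIO()
frame.crop(box).save(buf, format="PNG")
key = Fernet.generate_key()
token = Fernet(key).encrypt(buf.getvalue())
open("region.enc", "wb").write(token)

# 2. Make the object unrecognisable in the published JPEG.
blurred = frame.crop(box).filter(ImageFilter.GaussianBlur(radius=12))
frame.paste(blurred, box)
frame.save("camera_frame_public.jpg", quality=90)
```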
NASA Astrophysics Data System (ADS)
Kim, Christopher Y.
1999-05-01
Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31 and 99 times smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application for the WWW, a medium which would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.
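The compression-ratio bookkeeping behind the 1 MB to 30 KB figure can be reproduced in a few lines; the file name and JPEG quality setting below are placeholders, not the study's parameters.

```python
# Rough sketch: save a 24-bit colour frame as JPEG and report the achieved ratio.
import os
from PIL import Image

img = Image.open("endoscopy_frame.png").convert("RGB")   # hypothetical input frame
raw_bytes = img.width * img.height * 3                    # uncompressed RGB size
img.save("endoscopy_frame.jpg", quality=75)               # illustrative quality setting
jpeg_bytes = os.path.getsize("endoscopy_frame.jpg")
print(f"ratio ≈ {raw_bytes / jpeg_bytes:.0f}:1, file size ≈ {jpeg_bytes / 1024:.0f} KB")
```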
ERIC Educational Resources Information Center
Wahid, Wazira Ali Abdul; Ahmed, Eqbal Sulaiman; Wahid, Muntaha Ali Abdul
2015-01-01
This article presents a research study of online interactions in English teaching, especially conversation, using VOIP (Voice over Internet Protocol) and a cosmopolitan online theme. Data were obtained through interviews. The results indicate how oral tasks need to be planned in order to facilitate engagement models conducive to…
Sayson, Bryan; Popurs, Marioara Angela Moisa; Lafek, Mirafe; Berkow, Ruth; Stockler-Ipsiroglu, Sylvia; van Karnebeek, Clara D M
2015-05-01
Intellectual developmental disorders (IDD), characterized by a significant impairment in cognitive function and behavior, affect 2.5% of the population and are associated with considerable morbidity and healthcare costs. Inborn errors of metabolism (IEM) currently constitute the largest group of genetic defects presenting with IDD, which are amenable to causal therapy. Recently, we created an evidence-based 2-tiered diagnostic protocol (TIDE protocol); the first tier is a 'screening step' applied in all patients, comprising routinely performed, widely available metabolic tests in blood and urine, while second-tier tests are more specific and based on the patient's phenotype. The protocol is supported by an app (www.treatable-ID.org). Our aim was to retrospectively examine the cost- and time-effectiveness of the TIDE protocol in patients identified with a treatable IEM at the British Columbia Children's Hospital. We searched the database for all IDD patients diagnosed with a treatable IEM during the period 2000-2009 in our academic institution. Data regarding the patient's clinical phenotype, IEM, diagnostic tests and interval were collected. Total costs and time intervals associated with all testing and physician consultations actually performed were calculated and compared to the model of the TIDE protocol. Thirty-one patients (16 males) were diagnosed with treatable IDD during the period 2000-2009. For those identifiable via the 1st tier (n=20), the average cost savings would have been $311.17 CAD, and for those diagnosed via a second-tier test (n=11) $340.14 CAD. Significant diagnostic delay (mean 9 months; range 1-29 months) could have been avoided in 9 patients with first-tier diagnoses, had the TIDE protocol been used. For those with second-tier treatable IDD, diagnoses could have been more rapidly achieved with the use of the Treatable IDD app allowing for specific searches based on signs and symptoms. The TIDE protocol for treatable forms of IDD appears effective in reducing diagnostic delay and unnecessary costs. Larger prospective studies, currently underway, are needed to prove that standard screening for treatable conditions in patients with IDD is time- and cost-effective, and most importantly will preserve brain function by timely diagnosis enabling initiation of causal therapy. Copyright © 2015 Elsevier Inc. All rights reserved.
Zakaria, Golam Abu; Schütte, Wilhelm
2003-01-01
The determination of absorbed dose to water for high-energy photon and electron beams is performed in Germany according to the dosimetry protocol DIN 6800-2 (1997). At an international level, the main protocols used are the AAPM dosimetry protocol TG-51 (1999) and the IAEA Code of Practice TRS-398 (2000). The present paper systematically compares these three dosimetry protocols, and identifies similarities and differences. The investigations were performed using 4 and 10 MV photon beams, as well as 6, 8, 9, 10, 12 and 14 MeV electron beams. Two cylindrical and two plane-parallel type chambers were used for measurements. In general, the discrepancies among the three protocols were 1.0% for photon beams and 1.6% for electron beams. Comparative measurements in the context of measurement technical control (MTK) with TLD showed a deviation of less than 1.3% between the measurements obtained according to protocols DIN 6800-2 and MTK (exceptions: 4 MV photons with 2.9% and 6 MeV electrons with 2.4%). While only cylindrical chambers were used for photon beams, measurements of electron beams were performed using both cylindrical and plane-parallel chambers (the latter used after a cross-calibration to a cylindrical chamber, as required by the respective dosimetry protocols). Notably, contrary to the recommendations of the corresponding protocols, we found that cylindrical chambers can also be used for energies from 6 to 10 MeV.
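Schematically, all three protocols determine dose from a corrected ionization chamber reading via a calibration coefficient and a beam-quality conversion factor; in simplified notation (influence-quantity corrections and protocol-specific details omitted):

```latex
% Simplified form of the absorbed-dose-to-water formalism shared by TG-51 and TRS-398
% (DIN 6800-2 uses an equivalent k_Q-based approach); notation is schematic only.
\[
  D_{w,Q} \;=\; M_Q \, N_{D,w} \, k_Q ,
\]
% where $M_Q$ is the corrected chamber reading in the user beam quality $Q$,
% $N_{D,w}$ the absorbed-dose-to-water calibration coefficient in the reference
% (Co-60) beam, and $k_Q$ the beam-quality conversion factor.
```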
Compression techniques in tele-radiology
NASA Astrophysics Data System (ADS)
Lu, Tianyu; Xiong, Zixiang; Yun, David Y.
1999-10-01
This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Due to voluminous medical image data and image streams generated in interactive frame rate involved in the application, the importance of deploying adjustable lossy to lossless compression techniques is emphasized in order to achieve acceptable performance via various kinds of communication networks. In particular, the compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are JPEG and H.263 lossy methods and Lempel-Ziv (LZ77) lossless methods. Both objective and subjective assessment of the effect of lossy compression methods on the volume data are conducted. Favorable results are obtained showing that substantial compression ratio is achievable within distortion tolerance. From our experience, we conclude that 30dB (PSNR) is about the lower bound to achieve acceptable quality when applying lossy compression to anatomy volume data (e.g. CT). For computer simulated data, much higher PSNR (up to 100dB) is expectable. This work not only introduces such novel approach for delivering medical services that will have significant impact on the existing cooperative image-based services, but also provides a platform for the physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
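The 30 dB threshold mentioned above refers to the usual peak signal-to-noise ratio, PSNR = 10 log10(MAX^2 / MSE). A minimal sketch of the computation on an 8-bit slice, with synthetic data standing in for CT:

```python
# PSNR between an original slice and its lossy-compressed (here: noisy) counterpart.
import numpy as np

def psnr(original: np.ndarray, degraded: np.ndarray, max_value: float = 255.0) -> float:
    mse = np.mean((original.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

rng = np.random.default_rng(1)
slice_ct = rng.integers(0, 256, size=(512, 512)).astype(np.uint8)
noisy = np.clip(slice_ct + rng.normal(0, 5, slice_ct.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(slice_ct, noisy):.1f} dB")
```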
The Helioviewer Project: Solar Data Visualization and Exploration
NASA Astrophysics Data System (ADS)
Hughitt, V. Keith; Ireland, J.; Müller, D.; García Ortiz, J.; Dimitoglou, G.; Fleck, B.
2011-05-01
SDO has only been operating a little over a year, but in that short time it has already transmitted hundreds of terabytes of data, making it impossible for data providers to maintain a complete archive of data online. By storing an extremely efficiently compressed subset of the data, however, the Helioviewer project has been able to maintain a continuous record of high-quality SDO images starting from soon after the commissioning phase. The Helioviewer project was not designed to deal with SDO alone, however, and continues to add support for new types of data, the most recent of which are STEREO EUVI and COR1/COR2 images. In addition to adding support for new types of data, improvements have been made to both the server-side and client-side products that are part of the project. A new open-source JPEG2000 (JPIP) streaming server has been developed offering a vastly more flexible and reliable backend for the Java/OpenGL application JHelioviewer. Meanwhile the web front-end, Helioviewer.org, has also made great strides both in improving reliability, and also in adding new features such as the ability to create and share movies on YouTube. Helioviewer users are creating nearly two thousand movies a day from the over six million images that are available to them, and that number continues to grow each day. We provide an overview of recent progress with the various Helioviewer Project components and discuss plans for future development.
Trahearn, Nicholas; Tsang, Yee Wah; Cree, Ian A; Snead, David; Epstein, David; Rajpoot, Nasir
2017-06-01
Automation of downstream analysis may offer many potential benefits to routine histopathology. One area of interest for automation is in the scoring of multiple immunohistochemical markers to predict the patient's response to targeted therapies. Automated serial slide analysis of this kind requires robust registration to identify common tissue regions across sections. We present an automated method for co-localized scoring of Estrogen Receptor and Progesterone Receptor (ER/PR) in breast cancer core biopsies using whole slide images. Regions of tumor in a series of fifty consecutive breast core biopsies were identified by annotation on H&E whole slide images. Sequentially cut immunohistochemical stained sections were scored manually, before being digitally scanned and then exported into JPEG 2000 format. A two-stage registration process was performed to identify the annotated regions of interest in the immunohistochemistry sections, which were then scored using the Allred system. Overall correlation between manual and automated scoring for ER and PR was 0.944 and 0.883, respectively, with 90% of ER and 80% of PR scores within one point of agreement. This proof of principle study indicates slide registration can be used as a basis for automation of the downstream analysis for clinically relevant biomarkers in the majority of cases. The approach is likely to be improved by implementation of safeguarding analysis steps post registration. © 2016 International Society for Advancement of Cytometry.
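The agreement statistics quoted above (Pearson correlation and the share of scores within one point) can be computed as in the sketch below; the score arrays are invented for illustration only.

```python
# Hedged sketch: agreement between manual and automated Allred scores (synthetic data).
import numpy as np

manual    = np.array([8, 7, 0, 3, 6, 8, 2, 5, 7, 0])
automated = np.array([8, 6, 0, 4, 6, 7, 2, 5, 8, 1])

r = np.corrcoef(manual, automated)[0, 1]
within_one = np.mean(np.abs(manual - automated) <= 1)
print(f"Pearson r = {r:.3f}, scores within one point = {within_one:.0%}")
```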
Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K.
2017-01-01
Introduction: The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Methods: Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. Results: HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. Conclusion: This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output. PMID:28966838
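A rough, hypothetical analogue of the image perturbations studied above can be scripted with Pillow; the file names, adjustment factors and JPEG 2000 settings are illustrative and are not the Visiopharm configuration.

```python
# Illustrative perturbations of a whole-slide ROI: brightness, contrast,
# lossy JPEG 2000 compression, and out-of-focus blur.
from PIL import Image, ImageEnhance, ImageFilter

roi = Image.open("her2_roi.tif")                              # hypothetical ROI image

brighter  = ImageEnhance.Brightness(roi).enhance(1.5)         # +50% illumination
contrasty = ImageEnhance.Contrast(roi).enhance(1.5)           # +50% contrast
blurred   = roi.filter(ImageFilter.GaussianBlur(radius=3))    # out-of-focus proxy

# Pillow's OpenJPEG backend writes JPEG 2000; in "rates" mode quality_layers gives
# target compression ratios (support depends on the installed Pillow/OpenJPEG build).
roi.save("her2_roi_compressed.jp2", format="JPEG2000",
         irreversible=True, quality_mode="rates", quality_layers=[40])
```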
Progressive data transmission for anatomical landmark detection in a cloud.
Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D
2012-01-01
In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least 30 times bandwidth reduction while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
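A self-contained toy of the coarse-to-fine strategy is sketched below: it finds a bright landmark in a synthetic volume, scanning the whole volume only at the coarsest level and then "transmitting" only small neighbourhoods at finer levels. None of the function or variable names come from the paper.

```python
# Hedged toy of hierarchical sequential landmark detection with progressive data access.
import numpy as np

def coarse_to_fine_landmark(volume, levels=(4, 2, 1), patch=6):
    # Coarsest level: the whole (small) downsampled volume is fetched and scanned.
    step = levels[0]
    low = volume[::step, ::step, ::step]
    cand = np.array(np.unravel_index(np.argmax(low), low.shape)) * step
    # Finer levels: only a small neighbourhood around the candidate is fetched.
    for step in levels[1:]:
        low = volume[::step, ::step, ::step]
        centre = cand // step
        sl = tuple(slice(max(int(c) - patch, 0), int(c) + patch + 1) for c in centre)
        local = low[sl]                                   # the only "transmitted" region
        idx = np.unravel_index(np.argmax(local), local.shape)
        cand = (np.array([s.start for s in sl]) + np.array(idx)) * step
    return cand

vol = np.zeros((64, 64, 64)); vol[40, 20, 48] = 1.0       # synthetic landmark
print(coarse_to_fine_landmark(vol))                       # -> [40 20 48]
```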
Han, Ruizhen; He, Yong; Liu, Fei
2012-01-01
This paper presents a feasibility study on a real-time in field pest classification system design based on Blackfin DSP and 3G wireless communication technology. This prototype system is composed of remote on-line classification platform (ROCP), which uses a digital signal processor (DSP) as a core CPU, and a host control platform (HCP). The ROCP is in charge of acquiring the pest image, extracting image features and detecting the class of pest using an Artificial Neural Network (ANN) classifier. It sends the image data, which is encoded using JPEG 2000 in DSP, to the HCP through the 3G network at the same time for further identification. The image transmission and communication are accomplished using 3G technology. Our system transmits the data via a commercial base station. The system can work properly based on the effective coverage of base stations, no matter the distance from the ROCP to the HCP. In the HCP, the image data is decoded and the pest image displayed in real-time for further identification. Authentication and performance tests of the prototype system were conducted. The authentication test showed that the image data were transmitted correctly. Based on the performance test results on six classes of pests, the average accuracy is 82%. Considering the different live pests’ pose and different field lighting conditions, the result is satisfactory. The proposed technique is well suited for implementation in field pest classification on-line for precision agriculture. PMID:22736996
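The classifier stage can be sketched, under assumptions, with scikit-learn; the toy image features, network size and synthetic six-class labels below are illustrative and do not reproduce the authors' DSP implementation.

```python
# Hedged sketch: simple colour features feeding a small artificial neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def image_features(img: np.ndarray) -> np.ndarray:
    """Toy feature vector: per-channel means, std devs, and an 8-bin intensity histogram."""
    chans = img.reshape(-1, 3).astype(float)
    hist, _ = np.histogram(chans.mean(axis=1), bins=8, range=(0, 255), density=True)
    return np.concatenate([chans.mean(axis=0), chans.std(axis=0), hist])

rng = np.random.default_rng(0)
X = np.stack([image_features(rng.integers(0, 256, (64, 64, 3))) for _ in range(300)])
y = rng.integers(0, 6, 300)                        # six pest classes (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")   # random features -> chance level
```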
Power Tools for Talking: Custom Protocols Enrich Coaching Conversations
ERIC Educational Resources Information Center
Pomerantz, Francesca; Ippolito, Jacy
2015-01-01
Discussion-based protocols--an "agreed upon set of discussion or observation rules that guide coach/teacher/student work, discussion, and interactions" (Ippolito & Lieberman, 2012, p. 79)--can help focus and structure productive professional learning discussions. However, while protocols are slowly growing into essential elements of…
Quantum Counterfactual Information Transmission Without a Weak Trace
NASA Astrophysics Data System (ADS)
Arvidsson Shukur, David; Barnes, Crispin
The classical theories of communication rely on the assumption that there has to be a flow of particles from Bob to Alice in order for him to send a message to her. We have developed a quantum protocol that allows Alice to perceive Bob's message "counterfactually", that is, without Alice receiving any particles that have interacted with Bob. By utilising a setup built on results from interaction-free measurements and the quantum Zeno effect, we outline a communication protocol in which the information travels in the opposite direction of the emitted particles. In comparison to previous attempts at such protocols, ours is such that a weak measurement at the message source would not leave a weak trace that could be detected by Alice's receiver. Whilst some interaction-free schemes require a large number of carefully aligned beam-splitters, our protocol is realisable with two or more beam-splitters. Furthermore, we show that the classical Fisher information Alice obtains about a weak variable at Bob's laboratory is negligible in our scheme. We demonstrate this protocol by numerically solving the time-dependent Schrödinger Equation (TDSE) for a Hamiltonian that implements this quantum counterfactual phenomenon.
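For reference, the equation being integrated numerically is the standard time-dependent Schrödinger equation; the specific interferometric Hamiltonian of the protocol is not reproduced here.

```latex
% Standard form of the TDSE solved numerically in the work above.
\[
  i\hbar \, \frac{\partial}{\partial t}\, \Psi(\mathbf{r}, t)
  \;=\; \hat{H}\, \Psi(\mathbf{r}, t),
  \qquad
  \hat{H} \;=\; -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r}, t).
\]
```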
A Laboratory Exercise to Illustrate Protein-Membrane Interactions
ERIC Educational Resources Information Center
Weers, Paul M. M.; Prenner, Elmar J.; Curic, Spomenka; Lohmeier-Vogel, Elke M.
2016-01-01
The laboratory protocol presented here takes about 3 hours to perform and investigates protein and lipid interactions. Students first purify His6-tagged human apolipoprotein A-I (apoA-I) with Ni-NTA affinity resin in a simple batch protocol and prepare multilamellar vesicles (MLV) from pre-dried phospholipid films. When apoA-I is added to the MLV,…
An Approach to Model Based Testing of Multiagent Systems
Nadeem, Aamer
2015-01-01
Autonomous agents act on behalf of the user to achieve defined goals or objectives. They are situated in a dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence in the working of a multiagent system. The Prometheus methodology is a commonly used approach to designing multiagent systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions, along with messages between the agents, which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from the protocol graph according to the specified coverage criterion. PMID:25874263
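The final step, deriving test paths from a protocol graph under a coverage criterion, can be illustrated with a small sketch; the toy agent-actor protocol and the all-edges criterion below are assumptions for illustration, not one of the paper's case studies.

```python
# Hedged sketch: generate test paths covering every edge of a toy protocol graph.
protocol_graph = {
    "start":         ["request_quote"],
    "request_quote": ["send_quote", "reject"],
    "send_quote":    ["accept", "reject"],
    "accept":        ["end"],
    "reject":        ["end"],
    "end":           [],
}

def edge_covering_paths(graph, start="start", end="end"):
    """Enumerate simple start-to-end paths, then greedily keep enough to cover every edge."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:                      # keep paths simple (no repeated node)
                stack.append((nxt, path + [nxt]))
    uncovered = {(a, b) for a, succ in graph.items() for b in succ}
    selected = []
    for p in sorted(paths, key=len, reverse=True):   # prefer long paths: fewer tests
        edges = set(zip(p, p[1:]))
        if edges & uncovered:
            selected.append(p)
            uncovered -= edges
    return selected

for path in edge_covering_paths(protocol_graph):
    print(" -> ".join(path))
```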
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-01-01
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
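The normalized vegetation indices referred to above follow the usual NDVI form, (NIR − Red) / (NIR + Red), computed per pixel after radiometric correction. A minimal sketch, with synthetic bands standing in for the corrected camera images:

```python
# Per-pixel NDVI-style index from NIR and red bands (synthetic stand-in data).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir, red = nir.astype(float), red.astype(float)
    denom = nir + red
    return np.where(denom > 0, (nir - red) / denom, 0.0)

rng = np.random.default_rng(0)
nir_band = rng.uniform(0.2, 0.8, (100, 100))   # e.g. modified-camera NIR channel
red_band = rng.uniform(0.1, 0.5, (100, 100))   # RGB camera red channel
print(f"mean NDVI over plot: {ndvi(nir_band, red_band).mean():.2f}")
```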
VLTI First Fringes with Two Auxiliary Telescopes at Paranal
NASA Astrophysics Data System (ADS)
2005-03-01
World's Largest Interferometer with Moving Optical Telescopes on Track Summary The Very Large Telescope Interferometer (VLTI) at Paranal Observatory has just seen another extension of its already impressive capabilities by combining interferometrically the light from two relocatable 1.8-m Auxiliary Telescopes. Following the installation of the first Auxiliary Telescope (AT) in January 2004 (see ESO PR 01/04), the second AT arrived at the VLT platform by the end of 2004. Shortly thereafter, during the night of February 2 to 3, 2005, the two high-tech telescopes teamed up and quickly succeeded in performing interferometric observations. This achievement heralds an era of new scientific discoveries. Both Auxiliary Telescopes will be offered from October 1, 2005 to the community of astronomers for routine observations, together with the MIDI instrument. By the end of 2006, Paranal will be home to four operational ATs that may be placed at 30 different positions and thus be combined in a very large number of ways ("baselines"). This will enable the VLTI to operate with enormous flexibility and, in particular, to obtain extremely detailed (sharp) images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. PR Photo 07a/05: Paranal Observing Platform with AT1 and AT2 PR Photo 07b/05: AT1 and AT2 with Open Domes PR Photo 07c/05: Evening at Paranal with AT1 and AT2 PR Photo 07d/05: AT1 and AT2 under the Southern Sky PR Photo 07e/05: First Fringes with AT1 and AT2 PR Video Clip 01/05: Two ATs at Paranal (Extract from ESO Newsreel 15) A Most Advanced Device ESO PR Video 01/05 ESO PR Video 01/05 Two Auxiliary Telescopes at Paranal [QuickTime: 160 x 120 pix - 37Mb - 4:30 min] [QuickTime: 320 x 240 pix - 64Mb - 4:30 min] ESO PR Photo 07a/05 ESO PR Photo 07a/05 [Preview - JPEG: 493 x400 pix - 44k] [Normal - JPEG: 985 x 800 pix - 727k] [HiRes - JPEG: 5000 x 4060 pix - 13.8M] Captions: ESO PR Video Clip 01/05 is an extract from ESO Video Newsreel 15, released on March 14, 2005. It provides an introduction to the VLT Interferometer (VLTI) and the two Auxiliary Telescopes (ATs) now installed at Paranal. ESO PR Photo 07a/05 shows the impressive ensemble at the summit of Paranal. From left to right, the enclosure of VLT Antu, Kueyen and Melipal, AT1, the VLT Survey Telescope (VST) in the background, AT2 and VLT Yepun. Located at the summit of the 2,600-m high Cerro Paranal in the Atacama Desert (Chile), ESO's Very Large Telescope (VLT) is at the forefront of astronomical technology and is one of the premier facilities in the world for optical and near-infrared observations. The VLT is composed of four 8.2-m Unit Telescope (Antu, Kueyen, Melipal and Yepun). They have been progressively put into service together with a vast suite of the most advanced astronomical instruments and are operated every night in the year. Contrary to other large astronomical telescopes, the VLT was designed from the beginning with the use of interferometry as a major goal. The href="/instruments/vlti">VLT Interferometer (VLTI) combines starlight captured by two 8.2- VLT Unit Telescopes, dramatically increasing the spatial resolution and showing fine details of a large variety of celestial objects. The VLTI is arguably the world's most advanced optical device of this type. 
It has already demonstrated its powerful capabilities by addressing several key scientific issues, such as determining the size and the shape of a variety of stars (ESO PR 22/02, PR 14/03 and PR 31/03), measuring distances to stars (ESO PR 25/04), probing the innermost regions of the proto-planetary discs around young stars (ESO PR 27/04) or making the first detection by infrared interferometry of an extragalactic object (ESO PR 17/03). "Little Brothers" ESO PR Photo 07b/05 ESO PR Photo 07b/05 [Preview - JPEG: 597 x 400 pix - 47k] [Normal - JPEG: 1193 x 800 pix - 330k] [HiRes - JPEG: 5000 x 3354 pix - 10.0M] ESO PR Photo 07c/05 ESO PR Photo 07c/05 [Preview - JPEG: 537 x 400 pix - 31k] [Normal - JPEG: 1074 x 800 pix - 555k] [HiRes - JPEG: 3000 x 2235 pix - 6.0M] ESO PR Photo 07d/05 ESO PR Photo 07d/05 [Preview - JPEG: 400 x 550 pix - 60k] [Normal - JPEG: 800 x 1099 pix - 946k] [HiRes - JPEG: 2414 x 3316 pix - 11.0M] Captions: ESO PR Photo 07b/05 shows VLTI Auxiliary Telescopes 1 and 2 (AT1 and AT2) in the early evening light, with the spherical domes opened and ready for observations. In ESO PR Photo 07c/05, the same scene is repeated later in the evening, with three of the large telescope enclosures in the background. This photo and ESO PR Photo 07c/05 which is a time-exposure with AT1 and AT2 under the beautiful night sky with the southern Milky Way band were obtained by ESO staff member Frédéric Gomté. However, most of the time the large telescopes are used for other research purposes. They are therefore only available for interferometric observations during a limited number of nights every year. Thus, in order to exploit the VLTI each night and to achieve the full potential of this unique setup, some other (smaller), dedicated telescopes were included into the overall VLT concept. These telescopes, known as the VLTI Auxiliary Telescopes (ATs), are mounted on tracks and can be placed at precisely defined "parking" observing positions on the observatory platform. From these positions, their light beams are fed into the same common focal point via a complex system of reflecting mirrors mounted in an underground system of tunnels. The Auxiliary Telescopes are real technological jewels. They are placed in ultra-compact enclosures, complete with all necessary electronics, an air conditioning system and cooling liquid for thermal control, compressed air for enclosure seals, a hydraulic plant for opening the dome shells, etc. Each AT is also fitted with a transporter that lifts the telescope and relocates it from one station to another. It moves around with its own housing on the top of Paranal, almost like a snail. Moreover, these moving ultra-high precision telescopes, each weighing 33 tonnes, fulfill very stringent mechanical stability requirements: "The telescopes are unique in the world", says Bertrand Koehler, the VLTI AT Project Manager. "After being relocated to a new position, the telescope is repositioned to a precision better than one tenth of a millimetre - that is, the size of a human hair! The image of the star is stabilized to better than thirty milli-arcsec - this is how we would see an object of the same size as one of the VLT enclosures on the Moon. Finally, the path followed by the light inside the telescope after bouncing on ten mirrors is stable to better than a few nanometres, which is the size of about one hundred atoms." 
A World Premiere ESO PR Photo 07e/05 ESO PR Photo 07e/05 "First Fringes" with two ATs [Preview - JPEG: 400 x 559 pix - 61k] [Normal - JPEG: 800 x 1134 pix - 357k] Caption: ESO PR Photo 07e/05 The "First Fringes" obtained with the first two VLTI Auxiliary Telescopes, as seen on the computer screen during the observation. The fringe pattern arises when the light beams from the two 1.8-m telescopes are brought together inside the VINCI instrument. The pattern itself contains information about the angular extension of the observed object, here the 6th-magnitude star HD62082. The fringes are acquired by moving a mirror back and forth around the position of equal path length for the two telescopes. One such scan can be seen in the third row window. This pattern results from the raw interferometric signals (the last two rows) after calibration and filtering using the photometric signals (the 4th and 5th row). The first two rows show the spectrum of the fringe pattern signal. More details about the interpretation of this pattern is given in Appendix A of PR 06/01. The possibility to move the ATs around and thus to perform observations with a large number of different telescope configurations ensures a great degree of flexibility, unique for an optical interferometric installation of this size and crucial for its exceptional performance. The ATs may be placed at 30 different positions and thus be combined in a very large number of ways. If the 8.2-m VLT Unit Telescopes are also taken into account, no less than 254 independent pairings of two telescopes ("baselines"), different in length and/or orientation, are available. Moreover, while the largest possible distance between two 8.2-m telescopes (ANTU and YEPUN) is about 130 metres, the maximal distance between two ATs may reach 200 metres. As the achievable image sharpness increases with telescope separation, interferometric observations with the ATs positioned at the extreme positions will therefore yield sharper images than is possible by combining light from the large telescopes alone. All of this will enable the VLTI to obtain exceedingly detailed (sharp) and very complete images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. Auxiliary Telescope no. 1 (AT1) was installed on the observatory's platform in January 2004. Now, one year later, the second of the four to be delivered, has been integrated into the VLTI. The installation period lasted two months and ended around midnight during the night of February 2-3, 2005. With extensive experience from the installation of AT1, the team of engineers and astronomers were able to combine the light from the two Auxiliary Telescopes in a very short time. In fact, following the necessary preparations, it took them only five minutes to adjust this extremely complex optical system and successfully capture the "First Fringes" with the VINCI test instrument! The star which was observed is named HD62082 and is just at the limit of what can be observed with the unaided eye (its visual magnitude is 6.2). The fringes were as clear as ever, and the VLTI control system kept them stable for more than one hour. Four nights later this exercise was repeated successfully with the mid-infrared science instrument MIDI. Fringes on the star Alphard (Alpha Hydrae) were acquired on February 7 at 4:05 local time. For Roberto Gilmozzi, Director of ESO's La Silla Paranal Observatory, "this is a very important new milestone. 
The introduction of the Auxiliary Telescopes in the development of the VLT Interferometer will bring interferometry out of the specialist experiment and into the domain of common user instrumentation for every astronomer in Europe. Without doubt, it will enormously increase the potentiality of the VLTI." With two more telescopes to be delivered within a year to the Paranal Observatory, ESO cements its position as world-leader in ground-based optical astronomy, providing Europe's scientists with the tools they need to stay at the forefront in this exciting science. The VLT Interferometer will, for example, allow astronomers to study details on the surface of stars or to probe proto-planetary discs and other objects for which ultra-high precision imaging is required. It is premature to speculate on what the Very Large Telescope Interferometer will soon discover, but it is easy to imagine that there may be quite some surprises in store for all of us.
ERIC Educational Resources Information Center
Love, Tyler S.; Wells, John G.; Parkes, Kelly A.
2017-01-01
A modified Reformed Teaching Observation Protocol (RTOP) (Piburn & Sawada, 2000) instrument was used to separately examine eight technology and engineering (T&E) educators' teaching of science, and T&E content and practices, as called for by the "Standards for Technological Literacy: Content for the Study of Technology"…
VLT Data Flow System Begins Operation
NASA Astrophysics Data System (ADS)
1999-06-01
Building a Terabyte Archive at the ESO Headquarters The ESO Very Large Telescope (VLT) is the sum of many sophisticated parts. The site at Cerro Paranal in the dry Atacama desert in Northern Chile is one of the best locations for astronomical observations from the surface of the Earth. Each of the four 8.2-m telescopes is a technological marvel with self-adjusting optics placed in a gigantic mechanical structure of the utmost precision, continuously controlled by advanced soft- and hardware. A multitude of extremely complex instruments with sensitive detectors capture the faint light from distant objects in the Universe and record the digital data fast and efficiently as images and spectra, with a minimum of induced noise. And now the next crucial link in this chain is in place. A few nights ago, following an extended test period, the VLT Data Flow System began providing the astronomers with a steady stream of high-quality, calibrated image and spectral data, ready to be interpreted. The VLT project has entered into a new phase with a larger degree of automation. Indeed, the first 8.2-m Unit Telescope, ANTU, with the FORS1 and ISAAC instruments, has now become a true astronomy machine . A smooth flow of data through the entire system ESO PR Photo 25a/99 ESO PR Photo 25a/99 [Preview - JPEG: 400 x 292 pix - 104k] [Normal - JPEG: 800 x 584 pix - 264k] [High-Res - JPEG: 3000 x 2189 pix - 1.5M] Caption to ESO PR Photo 25a/99 : Simplified flow diagramme for the VLT Data Flow System . It is a closed-loop software system which incorporates various subsystems that track the flow of data all the way from the submission of proposals to storage of the acquired data in the VLT Science Archive Facility. The DFS main components are: Program Handling, Observation Handling, Telescope Control System, Science Archive, Pipeline and Quality Control. Arrows indicate lines of feedback. Already from the start of this project more than ten years ago, the ESO Very Large Telescope was conceived as a complex digital facility to explore the Universe. In order for astronomers to be able to use this marvellous research tool in the most efficient manner possible, the VLT computer software and hardware systems must guarantee a smooth flow of scientific information through the entire system. This process starts when the astronomers submit well-considered proposals for observing time and it ends with large volumes of valuable astronomical data being distributed to the international astronomical community. For this, ESO has produced an integrated collection of software and hardware, known as the VLT Data Flow System (DFS) , that manages and facilitates the flow of scientific information within the VLT Observatory. Early information about this new concept was published as ESO Press Release 12/96 and extensive tests were first carried out at ESOs 3.5-m New Technology Telescope (NTT) at La Silla, cf. ESO Press Release 03/97 [1]. The VLT DFS is a complete (end-to-end) system that guarantees the highest data quality by optimization of the observing process and repeated checks that identify and eliminate any problems. It also introduces automatic calibration of the data, i.e. the removal of external effects introduced by the atmospheric conditions at the time of the observations, as well as the momentary state of the telescope and the instruments. 
From Proposals to Observations In order to obtain observing time with ESO telescopes, also with the VLT, astronomers must submit a detailed observing proposal to the ESO Observing Programmes Committee (OPC) . It meets twice a year and ranks the proposals according to scientific merit. More than 1000 proposals are submitted each year, mostly by astronomers from the ESO members states and Chile; the competition is fierce and only a fraction of the total demand for observing time can be fulfilled. During the submission of observing proposals, DFS software tools available over the World Wide Web enable the astronomers to simulate their proposed observations and provide accurate estimates of the amount of telescope time they will need to complete their particular scientific programme. Once the proposals have been reviewed by the OPC and telescope time is awarded by the ESO management according to the recommendation by this Committee, the successful astronomers begin to assemble detailed descriptions of their intended observations (e.g. position in the sky, time and duration of the observation, the instrument mode, etc.) in the form of computer files called Observation Blocks (OBs) . The software to make OBs is distributed by ESO and used by the astronomers at their home institutions to design their observing programs well before the observations are scheduled at the telescope. The OBs can then be directly executed by the VLT and result in an increased efficiency in the collection of raw data (images, spectra) from the science instruments on the VLT. The activation (execution) of OBs can be done by the astronomer at the telescope on a particular set of dates ( visitor mode operation) or it can be done by ESO science operations astronomers at times which are optimally suited for the particular scientific programme ( service mode operation). An enormous VLT Data Archive ESO PR Photo 25b/99 ESO PR Photo 25b/99 [Preview - JPEG: 400 x 465 pix - 160k] [Normal - JPEG: 800 x 929 pix - 568k] [High-Res - JPEG: 3000 x 3483 pix - 5.5M] Caption to ESO PR Photo 25b/99 : The first of several DVD storage robot at the VLT Data Archive at the ESO headquarters include 1100 DVDs (with a total capacity of about 16 Terabytes) that may be rapidly accessed by the archive software system, ensuring fast availbility of the requested data. The raw data generated at the telescope are stored by an archive system that sends these data regularly back to ESO headquarters in Garching (Germany) in the form of CD and DVD ROM disks. While the well-known Compact Disks (CD ROMs) store about 600 Megabytes (600,000,000 bytes) each, the new Digital Versatile Disks (DVD ROMs) - of the same physical size - can store up 3.9 Gigabytes (3,900,000,000 bytes) each, or over 6 times more. The VLT will eventually produce more than 20 Gigabytes (20,000,000,000 bytes) of astronomical data every night, corresponding to about 10 million pages of text [2]. Some of these data also pass through "software pipelines" that automatically remove the instrumental effects on the data and deliver data products to the astronomer that can more readily be turned into scientific results. Ultimately these data are stored in a permanent Science Archive Facility at ESO headquarters which is jointly operated by ESO and the Space Telescope European Coordinating Facility (ST-ECF). From here, data are distributed to astronomers on CD ROMs and over the World Wide Web. 
The archive facility is being developed to enable astronomers to "mine" the large volumes of data that will be collected from the VLT in the coming years. Within the first five years of operations the VLT is expected to produce around 100 Terabytes (100,000,000,000,000 bytes) of data. It is difficult to visualize this enormous amount of information. However, it corresponds to the content of 50 million books of 1000 pages each; they would occupy some 2,500 kilometres of bookshelves! The VLT Data Flow System enters into operation ESO PR Photo 25c/99 ESO PR Photo 25c/99 [Preview - JPEG: 400 x 444 pix - 164k] [Normal - JPEG: 800 x 887 pix - 552k] [High-Res - JPEG: 3000 x 3327 pix - 6.4M] Caption to ESO PR Photo 25c/99 : Astronomers from ESO Data Flow Operations Group at work with the VLT Archive. Science operations with the first VLT 8.2-m telescope ( ANTU ) began on April 1, 1999. Following the first call for proposals to use the VLT in October 1998, the OPC met in December and the observing schedule was finalized early 1999. The related Observation Blocks were prepared by the astronomers in February and March. Service-mode observations began in April and by late May the first scientific programs conducted by ESO science operations were completed. Raw data, instrument calibration information and the products of pipeline processing from these programs have now been assembled and packed onto CD ROMs by ESO science operations staff. On June 15 the first CD ROMs were delivered to astronomers in the ESO community. This event marks the closing of the data flow loop at the VLT for the first time and the successful culmination of more than 5 years of hard work by ESO engineers and scientists to implement a system for efficient and effective scientific data flow. This was achieved by a cross-organization science operations team involving staff in Chile and Europe. With the VLT Data Flow System, a wider research community will have access to the enormous wealth of data from the VLT. It will help astronomers to keep pace with the new technologies and extensive capabilities of the VLT and so obtain world-first scientific results and new insights into the universe. Notes [1] A more technical description of the VLT Data Flow System is available in Chapter 10 of the VLT Whitebook. [2] By definition, one "normal printed page" contains 2,000 characters. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
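The storage figures quoted in this release can be checked against the page definition given in note [2] (2,000 characters per "normal printed page"), under the implicit assumption of one byte per character; this is just a consistency check of the release's own arithmetic:
\[
\frac{20\ \mathrm{GB/night}}{2000\ \mathrm{characters/page}} = \frac{2\times10^{10}}{2\times10^{3}} = 10^{7}\ \text{pages per night},
\qquad
\frac{100\ \mathrm{TB}}{1000\ \mathrm{pages/book}\times 2000\ \mathrm{characters/page}} = \frac{10^{14}}{2\times10^{6}} = 5\times10^{7}\ \text{books},
\]
which reproduces the "10 million pages per night" and "50 million books" figures in the text.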
Wesołowska-Andersen, A; Borst, L; Dalgaard, M D; Yadav, R; Rasmussen, K K; Wehner, P S; Rasmussen, M; Ørntoft, T F; Nordentoft, I; Koehler, R; Bartram, C R; Schrappe, M; Sicheritz-Ponten, T; Gautier, L; Marquart, H; Madsen, H O; Brunak, S; Stanulla, M; Gupta, R; Schmiegelow, K
2015-02-01
Childhood acute lymphoblastic leukemia survival approaches 90%. New strategies are needed to identify the 10-15% who evade cure. We applied targeted, sequencing-based genotyping of 25 000 to 34 000 preselected potentially clinically relevant single-nucleotide polymorphisms (SNPs) to identify host genome profiles associated with relapse risk in 352 patients from the Nordic ALL92/2000 protocols and 426 patients from the German Berlin-Frankfurt-Munster (BFM) ALL2000 protocol. Patients were enrolled between 1992 and 2008 (median follow-up: 7.6 years). Eleven cross-validated SNPs were significantly associated with risk of relapse across protocols. SNP and biologic pathway level analyses associated relapse risk with leukemia aggressiveness, glucocorticosteroid pharmacology/response and drug transport/metabolism pathways. Classification and regression tree analysis identified three distinct risk groups defined by end of induction residual leukemia, white blood cell count and variants in myeloperoxidase (MPO), estrogen receptor 1 (ESR1), lamin B1 (LMNB1) and matrix metalloproteinase-7 (MMP7) genes, ATP-binding cassette transporters and glucocorticosteroid transcription regulation pathways. Relapse rates ranged from 4% (95% confidence interval (CI): 1.6-6.3%) for the best group (72% of patients) to 76% (95% CI: 41-90%) for the worst group (5% of patients, P<0.001). Validation of these findings, and of similar approaches to identify SNPs associated with toxicities, may allow future individualized adaptation of treatment based on relapse and toxicity risk.
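The classification and regression tree (CART) step described above can be illustrated with a minimal sketch. Everything below is a hypothetical placeholder, not the study's actual model or data: the features loosely mirror the predictors named in the abstract (end-of-induction residual leukemia, white blood cell count, a SNP genotype), the labels are synthetic, and scikit-learn is assumed to be available.

# Hypothetical CART-style risk grouping sketch; all data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.random(n),               # residual leukemia fraction at end of induction (synthetic)
    rng.lognormal(2.5, 0.8, n),  # white blood cell count, 1e9/L (synthetic)
    rng.integers(0, 3, n),       # hypothetical SNP genotype coded 0/1/2
])
# Synthetic relapse labels, with higher risk assigned to high residual disease
y = (X[:, 0] + 0.1 * rng.standard_normal(n) > 0.8).astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=25).fit(X, y)
print(export_text(tree, feature_names=["residual_leukemia", "wbc_count", "snp_genotype"]))

The printed tree partitions the cohort into leaf nodes, which is the sense in which a CART analysis yields discrete risk groups.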
A Steganographic Embedding Undetectable by JPEG Compatibility Steganalysis
2002-01-01
itd.nrl.navy.mil Abstract. Steganography and steganalysis of digital images is a cat-and-mouse game. In recent work, Fridrich, Goljan and Du introduced a method ... proposed embedding method. 1 Introduction Steganography and steganalysis of digital images is a cat-and-mouse game. Ever since Kurak and McHugh’s seminal ... paper on LSB embeddings in images [10], various researchers have published work on either increasing the payload, improving the resistance to
C2 Failures: A Taxonomy and Analysis
2013-06-01
2, pp. 171-199. Huber, Reiner, Tor Langsaeter, Petra Eggenhofer, Fernando Freire, Antonio Grilo, Anne-Marie Grisogono, Jose Martine, Jens Roemer ... Martin (2012). Mission Command White Paper. Washington, D.C.: U.S. Department of Defense, Office of the Chairman of the Joint Chiefs of Staff. http...e1352384704110.jpeg?w=625&h=389 The Punchline “What we’ve got here, is failure to communicate” Strother Martin as “The Captain,” Cool Hand Luke (Warner
Ellis, Alisha M.; Marot, Marci E.; Wheaton, Cathryn J.; Bernier, Julie C.; Smith, Christopher G.
2016-02-03
This report is an archive for sedimentological data derived from the surface sediment of Chincoteague Bay. Data are available for the spring (March/April 2014) and fall (October 2014) samples collected. Downloadable data are provided as Excel spreadsheets and as JPEG files. Additional files include ArcGIS shapefiles of the sampling sites, detailed results of sediment grain-size analyses, and formal Federal Geographic Data Committee metadata (data downloads).
On LSB Spatial Domain Steganography and Channel Capacity
2008-03-21
reveal the hidden information should not be taken as proof that the image is now clean. The survivability of LSB type spatial domain steganography ... the mindset that JPEG compressing an image is sufficient to destroy the steganography for spatial domain LSB type stego. We agree that JPEGing ... modeling of 2-bit LSB steganography shows that theoretically there is a non-zero stego payload possible even though the image has been JPEGed. We wish to
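The claim discussed above, that JPEG compression largely destroys a spatial-domain LSB payload even though a small residual capacity may survive, can be explored with a small experiment. This is an illustrative sketch, not the paper's method; it assumes NumPy and Pillow are installed and uses a synthetic cover image.

# Sketch: embed a random bit stream in the LSB plane of a grayscale image,
# JPEG-compress it, and measure how many embedded bits survive.
import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(42)
cover = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)   # synthetic cover image
payload = rng.integers(0, 2, size=cover.size, dtype=np.uint8)   # 1 bit per pixel

stego = (cover & 0xFE) | payload.reshape(cover.shape)           # overwrite the LSB plane

buf = io.BytesIO()
Image.fromarray(stego, mode="L").save(buf, format="JPEG", quality=90)
decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))

recovered = (decoded & 1).ravel()
survival = float(np.mean(recovered == payload))
print(f"fraction of LSB payload bits surviving JPEG (quality 90): {survival:.3f}")
# Values near 0.5 mean the LSB plane is essentially randomized after JPEG,
# i.e. the naive payload is destroyed, consistent with the discussion above.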
Orr, Asuka A; Gonzalez-Rivera, Juan C; Wilson, Mark; Bhikha, P Reena; Wang, Daiqi; Contreras, Lydia M; Tamamis, Phanourios
2018-02-01
There are over 150 currently known, highly diverse chemically modified RNAs, which are dynamic, reversible, and can modulate RNA-protein interactions. Yet, little is known about the wealth of such interactions. This can be attributed to the lack of tools that allow the rapid study of all the potential RNA modifications that might mediate RNA-protein interactions. As a promising step toward this direction, here we present a computational protocol for the characterization of interactions between proteins and RNA containing post-transcriptional modifications. Given an RNA-protein complex structure, potential RNA modified ribonucleoside positions, and molecular mechanics parameters for capturing energetics of RNA modifications, our protocol operates in two stages. In the first stage, a decision-making tool, comprising short simulations and interaction energy calculations, performs a fast and efficient search in a high-throughput fashion, through a list of different types of RNA modifications categorized into trees according to their structural and physicochemical properties, and selects a subset of RNA modifications prone to interact with the target protein. In the second stage, RNA modifications that are selected as recognized by the protein are examined in-detail using all-atom simulations and free energy calculations. We implement and experimentally validate this protocol in a test case involving the study of RNA modifications in complex with Escherichia coli (E. coli) protein Polynucleotide Phosphorylase (PNPase), depicting the favorable interaction between 8-oxo-7,8-dihydroguanosine (8-oxoG) RNA modification and PNPase. Further advancement of the protocol can broaden our understanding of protein interactions with all known RNA modifications in several systems. Copyright © 2018 Elsevier Inc. All rights reserved.
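The two-stage logic of the protocol, a fast energy-based pre-screen over a tree of candidate modifications followed by detailed simulation of the survivors, can be summarized in a short sketch. The function names, the energy threshold, and the toy data below are hypothetical placeholders standing in for the short simulations, interaction-energy calculations and free-energy runs described above.

# Hypothetical sketch of the two-stage screening logic described above.
# Stage 1: cheap interaction-energy estimate per candidate RNA modification.
# Stage 2: expensive all-atom characterization only for candidates passing stage 1.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Modification:
    name: str
    chemical_class: str   # branch of the modification "tree"

def screen_modifications(
    candidates: List[Modification],
    quick_energy: Callable[[Modification], float],          # placeholder: short MD + energy calc
    detailed_free_energy: Callable[[Modification], float],  # placeholder: all-atom free energy
    energy_cutoff: float = -5.0,                             # hypothetical kcal/mol threshold
) -> List[Tuple[str, float]]:
    # Stage 1: keep only modifications predicted to interact favorably.
    shortlisted = [m for m in candidates if quick_energy(m) < energy_cutoff]
    # Stage 2: characterize the shortlist in detail.
    return [(m.name, detailed_free_energy(m)) for m in shortlisted]

# Toy usage with made-up energy functions.
mods = [Modification("8-oxoG", "oxidized purine"), Modification("m6A", "methylated adenosine")]
print(screen_modifications(mods, lambda m: -7.0 if m.name == "8-oxoG" else -2.0, lambda m: -9.3))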
Male sexual dysfunctions: immersive virtual reality and multimedia therapy.
Optale, Gabriele; Pastore, Massimiliano; Marin, Silvia; Bordin, Diego; Nasta, Alberto; Pianon, Carlo
2004-01-01
The study describes a therapeutic approach using psycho-dynamic psychotherapy integrated with a virtual environment (VE) for resolving impotence, or more precisely erectile dysfunction (ED), of presumably psychological or mixed origin, as well as premature ejaculation (PE). The plan for therapy consists of 12 sessions (15 if a sexual partner was involved) over a 25-week period on the ontogenetic development of male sexual identity, and the methods involved the use of a laptop PC, joystick, and Virtual Reality (VR) helmet with a miniature television screen showing a new specially designed CD-ROM program created with Virtools under Windows 2000, plus an audio CD. This study was composed of 30 patients: 15 (10 suffering from ED and 5 from PE) plus 15 control patients (10 ED and 5 PE) who underwent the same therapeutic protocol but used an older VR helmet to interact with the older VE on a Pentium 133 PC with 16 MB RAM. We also compared this study with another study we carried out on 160 men affected by sexual disorders who underwent the same therapeutic protocol but were treated using a VE created in Superscape VRT 5.6, again under Windows 2000 with portable tools. Comparing the groups of patients affected by ED and PE, significantly positive results emerged, without any important differences among the different VEs used. However, we observed an increase in undesirable physical reactions during the more realistic 15-minute VR experience created with the Virtools development kit. Psychotherapy alone normally requires long periods of treatment in order to resolve sexual dysfunctions. Considering the particular way in which full-immersion VR involves the subject who experiences it (he is totally unobserved and in complete privacy), we hypothesise that this methodological approach might speed up the therapeutic psycho-dynamic process, which eludes cognitive defences and directly stimulates the subconscious, and that better results could be obtained in the treatment of these sexual disorders. This method can be used by any psychotherapist, either alone or in association with pharmacotherapy prescribed by the urologist/andrologist as part of a therapeutic alliance.
NASA Astrophysics Data System (ADS)
Bai, Jing; Wen, Guoguang; Rahmani, Ahmed
2018-04-01
Leaderless consensus for fractional-order nonlinear multi-agent systems is investigated in this paper. In the first part, a control protocol is proposed to achieve leaderless consensus for nonlinear single-integrator multi-agent systems. In the second part, based on a sliding-mode estimator, a control protocol is given to solve leaderless consensus for the nonlinear single-integrator multi-agent systems; it is shown that this control protocol can improve the systems' convergence speed. In the third part, a control protocol is designed to accomplish leaderless consensus for nonlinear double-integrator multi-agent systems. To establish the systems' stability, two classic continuous Lyapunov candidate functions are chosen. Finally, several worked-out examples under directed interaction topology are given to verify the above results.
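The abstract does not reproduce the protocols themselves; as a point of reference, a standard leaderless consensus protocol for nonlinear single-integrator agents over a directed graph takes the generic textbook form below (this is not necessarily the exact fractional-order protocol proposed in the paper):
\[
\dot{x}_i(t) = f\bigl(x_i(t), t\bigr) + u_i(t),
\qquad
u_i(t) = -\alpha \sum_{j \in \mathcal{N}_i} a_{ij}\,\bigl(x_i(t) - x_j(t)\bigr),
\]
where \(a_{ij}\) are the entries of the adjacency matrix of the directed interaction topology, \(\mathcal{N}_i\) is the neighbor set of agent \(i\), and \(\alpha > 0\) is a coupling gain. In the fractional-order setting, the integer-order derivative \(\dot{x}_i\) is replaced by a Caputo fractional derivative of order \(0 < \beta < 1\).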
Sharper and Deeper Views with MACAO-VLTI
NASA Astrophysics Data System (ADS)
2003-05-01
"First Light" with Powerful Adaptive Optics System for the VLT Interferometer Summary On April 18, 2003, a team of engineers from ESO celebrated the successful accomplishment of "First Light" for the MACAO-VLTI Adaptive Optics facility on the Very Large Telescope (VLT) at the Paranal Observatory (Chile). This is the second Adaptive Optics (AO) system put into operation at this observatory, following the NACO facility ( ESO PR 25/01 ). The achievable image sharpness of a ground-based telescope is normally limited by the effect of atmospheric turbulence. However, with Adaptive Optics (AO) techniques, this major drawback can be overcome so that the telescope produces images that are as sharp as theoretically possible, i.e., as if they were taken from space. The acronym "MACAO" stands for "Multi Application Curvature Adaptive Optics" which refers to the particular way optical corrections are made which "eliminate" the blurring effect of atmospheric turbulence. The MACAO-VLTI facility was developed at ESO. It is a highly complex system of which four, one for each 8.2-m VLT Unit Telescope, will be installed below the telescopes (in the Coudé rooms). These systems correct the distortions of the light beams from the large telescopes (induced by the atmospheric turbulence) before they are directed towards the common focus at the VLT Interferometer (VLTI). The installation of the four MACAO-VLTI units of which the first one is now in place, will amount to nothing less than a revolution in VLT interferometry . An enormous gain in efficiency will result, because of the associated 100-fold gain in sensitivity of the VLTI. Put in simple words, with MACAO-VLTI it will become possible to observe celestial objects 100 times fainter than now . Soon the astronomers will be thus able to obtain interference fringes with the VLTI ( ESO PR 23/01 ) of a large number of objects hitherto out of reach with this powerful observing technique, e.g. external galaxies. The ensuing high-resolution images and spectra will open entirely new perspectives in extragalactic research and also in the studies of many faint objects in our own galaxy, the Milky Way. During the present period, the first of the four MACAO-VLTI facilties was installed, integrated and tested by means of a series of observations. For these tests, an infrared camera was specially developed which allowed a detailed evaluation of the performance. It also provided some first, spectacular views of various celestial objects, some of which are shown here. PR Photo 12a/03 : View of the first MACAO-VLTI facility at Paranal PR Photo 12b/03 : The star HIC 59206 (uncorrected image). PR Photo 12c/03 : HIC 59206 (AO corrected image) PR Photo 12e/03 : HIC 69495 (AO corrected image) PR Photo 12f/03 : 3-D plot of HIC 69495 images (without and with AO correction) PR Photo 12g/03 : 3-D plot of the artificially dimmed star HIC 74324 (without and with AO correction) PR Photo 12d/03 : The MACAO-VLTI commissioning team at "First Light" PR Photo 12h/03 : K-band image of the Galactic Center PR Photo 12i/03 : K-band image of the unstable star Eta Carinae PR Photo 12j/03 : K-band image of the peculiar star Frosty Leo MACAO - the Multi Application Curvature Adaptive Optics facility ESO PR Photo 12a/03 ESO PR Photo 12a/03 [Preview - JPEG: 408 x 400 pix - 56k [Normal - JPEG: 815 x 800 pix - 720k] Captions : PR Photo 12a/03 is a front view of the first MACAO-VLTI unit, now installed at the 8.2-m VLT KUEYEN telescope. 
Adaptive Optics (AO) systems work by means of a computer-controlled deformable mirror (DM) that counteracts the image distortion induced by atmospheric turbulence. It is based on real-time optical corrections computed from image data obtained by a "wavefront sensor" (a special camera) at very high speed, many hundreds of times each second. The ESO Multi Application Curvature Adaptive Optics (MACAO) system uses a 60-element bimorph deformable mirror (DM) and a 60-element curvature wavefront sensor, with a "heartbeat" of 350 Hz (350 times per second). With this high spatial and temporal correcting power, MACAO is able to nearly restore the theoretically possible ("diffraction-limited") image quality of an 8.2-m VLT Unit Telescope in the near-infrared region of the spectrum, at a wavelength of about 2 µm. The resulting image resolution (sharpness) of the order of 60 milli-arcsec is an improvement by more than a factor of 10 as compared to standard seeing-limited observations. Without the benefit of the AO technique, such image sharpness could only be obtained if the telescope were placed above the Earth's atmosphere. The technical development of MACAO-VLTI in its present form was begun in 1999 and with project reviews at 6 months' intervals, the project quickly reached cruising speed. The effective design is the result of a very fruitful collaboration between the AO department at ESO and European industry, which contributed the diligent fabrication of numerous high-tech components, including the bimorph DM with 60 actuators, a fast-reaction tip-tilt mount and many others. The assembly, tests and performance-tuning of this complex real-time system were undertaken by ESO-Garching staff. Installation at Paranal The first crates of the 60+ cubic-meter shipment with MACAO components arrived at the Paranal Observatory on March 12, 2003. Shortly thereafter, ESO engineers and technicians began the painstaking assembly of this complex instrument, below the VLT 8.2-m KUEYEN telescope (formerly UT2). They followed a carefully planned scheme, involving installation of the electronics, water cooling systems, mechanical and optical components. At the end, they performed the demanding optical alignment, delivering a fully assembled instrument one week before the planned first test observations. This extra week provided a very welcome and useful opportunity to perform a multitude of tests and calibrations in preparation for the actual observations. AO to the service of Interferometry The VLT Interferometer (VLTI) combines starlight captured by two or more 8.2-m VLT Unit Telescopes (later also from four moveable 1.8-m Auxiliary Telescopes) and allows the image resolution to be vastly increased. The light beams from the telescopes are brought together "in phase" (coherently). Starting out at the primary mirrors, they undergo numerous reflections along their different paths over total distances of several hundred meters before they reach the interferometric Laboratory where they are combined to within a fraction of a wavelength, i.e., within nanometers! The gain by the interferometric technique is enormous - combining the light beams from two telescopes separated by 100 metres allows observation of details which could otherwise only be resolved by a single telescope with a diameter of 100 metres. Sophisticated data reduction is necessary to interpret interferometric measurements and to deduce important physical parameters of the observed objects like the diameters of stars, etc., cf. ESO PR 22/02. 
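The resolution gain quoted above follows from the standard diffraction relations. Taking representative numbers from this text (2.2 µm wavelength, 100 m baseline) and the commonly quoted fringe resolution λ/(2B) (conventions differ by factors of order unity):
\[
\theta_{\mathrm{single}} \simeq 1.22\,\frac{\lambda}{D},
\qquad
\theta_{\mathrm{interf}} \simeq \frac{\lambda}{2B}
\approx \frac{2.2\times10^{-6}\ \mathrm{m}}{2\times 100\ \mathrm{m}}
= 1.1\times10^{-8}\ \mathrm{rad}
\approx 2.3\ \text{milli-arcsec},
\]
compared with roughly 60-70 milli-arcsec for a diffraction-limited 8.2-m telescope at the same wavelength, consistent with the MACAO-corrected figure quoted above.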
The VLTI measures the degree of coherence of the combined beams as expressed by the contrast of the observed interferometric fringe pattern. The higher the degree of coherence between the individual beams, the stronger is the measured signal. By removing wavefront aberrations introduced by atmospheric turbulence, the MACAO-VLTI systems enormously increase the efficiency of combining the individual telescope beams. In the interferometric measurement process, the starlight must be injected into optical fibers which are extremely small in order to accomplish their function; only 6 µm (0.006 mm) in diameter. Without the "refocussing" action of MACAO, only a tiny fraction of the starlight captured by the telescopes can be injected into the fibers and the VLTI would not be working at the peak of efficiency for which it has been designed. MACAO-VLTI will now allow a gain of a factor 100 in the injected light flux - this will be tested in detail when two VLT Unit Telescopes, both equipped with MACAO-VLTI systems, work together. However, the very good performance actually achieved with the first system makes the engineers very confident that a gain of this order will indeed be reached. This ultimate test will be performed as soon as the second MACAO-VLTI system has been installed later this year. MACAO-VLTI First Light After one month of installation work and following tests by means of an artificial light source installed in the Nasmyth focus of KUEYEN, MACAO-VLTI had "First Light" on April 18 when it received "real" light from several astronomical objects. During the preceding performance tests to measure the image improvement (sharpness, light energy concentration) in near-infrared spectral bands at 1.2, 1.6 and 2.2 µm, MACAO-VLTI was checked by means of a custom-made Infrared Test Camera developed for this purpose by ESO. This intermediate test was required to ensure the proper functioning of MACAO before it is used to feed a corrected beam of light into the VLTI. After only a few nights of testing and optimizing of the various functions and operational parameters, MACAO-VLTI was ready to be used for astronomical observations. The images below were taken under average seeing conditions and illustrate the improvement of the image quality when using MACAO-VLTI. MACAO-VLTI - First Images Here are some of the first images obtained with the test camera at the first MACAO-VLTI system, now installed at the 8.2-m VLT KUEYEN telescope. ESO PR Photo 12b/03 ESO PR Photo 12b/03 [Preview - JPEG: 400 x 468 pix - 25k] [Normal - JPEG: 800 x 938 pix - 291k] ESO PR Photo 12c/03 ESO PR Photo 12c/03 [Preview - JPEG: 400 x 469 pix - 14k] [Normal - JPEG: 800 x 938 pix - 135k] Captions : PR Photos 12b-c/03 show the first image, obtained by the first MACAO-VLTI system at the 8.2-m VLT KUEYEN telescope in the infrared K-band (wavelength 2.2 µm). It displays images of the star HIC 59206 (visual magnitude 10) obtained before (left; Photo 12b/03) and after (right; Photo 12c/03) the adaptive optics system was switched on. The binary is separated by 0.120 arcsec and the image was taken under medium seeing conditions (0.75 arcsec). The dramatic improvement in image quality is obvious. ESO PR Photo 12d/03 ESO PR Photo 12d/03 [Preview - JPEG: 400 x 427 pix - 18k] [Normal - JPEG: 800 x 854 pix - 205k] ESO PR Photo 12e/03 ESO PR Photo 12e/03 [Preview - JPEG: 483 x 400 pix - 17k] [Normal - JPEG: 966 x 800 pix - 169k] Captions : PR Photo 12d/03 shows one of the best images obtained with MACAO-VLTI (logarithmic intensity scale). 
The seeing was 0.8 arcsec at the time of the observations and three diffraction rings can clearly be seen around the star HIC 69495 of visual magnitude 9.9. This pattern is only well visible when the image resolution is very close to the theoretical limit. The exposure of the point-like source lasted 100 seconds through a narrow K-band filter. It has a Strehl ratio (a measure of light concentration) of about 55% and a Full-Width- Half-Maximum (FWHM) of 0.060 arcsec. The 3-D plot ( PRPhoto 12e/03 ) demonstrates the tremendous gain in peak intensity of the AO image (right) in peak intensity as compared to "open-loop" image (the "noise" to the left) obtained without the benefit of AO. ESO PR Photo 12f/03 ESO PR Photo 12f/03 [Preview - JPEG: 494 x 400 pix - 20k [Normal - JPEG: 988 x 800 pix - 204k] Caption : PR Photo 12f/03 demonstrates the correction performance of MACAO-VLTI when using a faint guide star. The observed star ( HIC 74324 (stellar spectral type G0 and visual magnitude 9.4) was artificially dimmed by a neutral optical filter to visual magnitude 16.5. The observation was carried out in 0.55 arcsec seeing and with a rather short atmospheric correlation time of 3 milliseconds at visible wavelengths. The Strehl ratio in the 25-second K-band exposure is about 10% and the FWHM is 0.14 arcseconds. The uncorrected image is shown to the left for comparison. The improvement is again impressive, even for a star as faint as this, indicating that guide stars of this magnitude are feasible during future observations. ESO PR Photo 12g/03 ESO PR Photo 12g/03 [Preview - JPEG: 528 x 400 pix - 48k [Normal - JPEG: 1055 x 800 pix - 542k] Captions : PR Photo 12g/03 shows some of the MACAO-VLTI commissioning team members in the VLT Control Room at the moment of "First Light" during the night between April 18-19, 2003. Sitting: Markus Kasper, Enrico Fedrigo - Standing: Robin Arsenault, Sebastien Tordo, Christophe Dupuy, Toomas Erm, Jason Spyromilio, Rob Donaldson (all from ESO). PR Photos 12b-c/03 show the first image in the infrared K-band (wavelength 2.2 µm) of a star (visual magnitude 10) obtained without and with image corrections by means of adaptive optics. PR Photo 12d/03 displays one of the best images obtained with MACAO-VLTI during the early tests. It shows a Strehl ratio (measure of light concentration) that fulfills the specifications according to which MACAO-VLTI was built. This enormous improvement when using AO techniques is clearly demonstrated in PR Photo 12e/03 , with the uncorrected image profile (left) hardly visible when compared to the corrected profile (right). PR Photo 11f/03 demonstrates the correction capabilities of MACAO-VLTI when using a faint guide star. Tests using different spectral types showed that the limiting visual magnitude varies between 16 for early-type B-stars and about 18 for late-type M-stars. Astronomical Objects seen at the Diffraction Limit The following examples of MACAO-VLTI observations of two well-known astronomical objects were obtained in order to provisionally evaluate the research opportunities now opening with MACAO-VLTI. They may well be compared with space-based images. The Galactic Center ESO PR Photo 12h/03 ESO PR Photo 12h/03 [Preview - JPEG: 693 x 400 pix - 46k [Normal - JPEG: 1386 x 800 pix - 403k] Caption : PR Photo 12h/03 shows a 90-second K-band exposure of the central 6 x 13 arcsec 2 around the Galactic Center obtained by MACAO-VLTI under average atmospheric conditions (0.8 arcsec seeing). 
Although the 14.6 magnitude guide star is located roughly 20 arcsec from the field center - leading to isoplanatic degradation of image sharpness - the present image is nearly diffraction limited and has a point-source FWHM of about 0.115 arcsec. The center of our own galaxy is located in the Sagittarius constellation at a distance of approximately 30,000 light-years. PR Photo 12h/03 shows a short-exposure infrared view of this region, obtained by MACAO-VLTI during the early test phase. Recent AO observations using the NACO facility at the VLT provide compelling evidence that a supermassive black hole with 2.6 million solar masses is located at the very center, cf. ESO PR 17/02. This result, based on astrometric observations of a star orbiting the black hole and approaching it to within a distance of only 17 light-hours, would not have been possible without images of diffraction-limited resolution. Eta Carinae ESO PR Photo 12i/03 ESO PR Photo 12i/03 [Preview - JPEG: 400 x 482 pix - 25k] [Normal - JPEG: 800 x 963 pix - 313k] Caption : PR Photo 12i/03 displays an infrared narrow K-band image of the massive star Eta Carinae. The image quality is difficult to estimate because the central star saturated the detector, but the clear structure of the diffraction spikes and the size of the smallest features visible in the photo indicate a near-diffraction-limited performance. The field measures about 6.5 x 6.5 arcsec 2. Eta Carinae is one of the heaviest stars known, with a mass that probably exceeds 100 solar masses. It is about 4 million times brighter than the Sun, making it one of the most luminous stars known. Such a massive star has a comparatively short lifetime of about 1 million years only and - measured on the cosmic timescale - Eta Carinae must have formed quite recently. This star is highly unstable and prone to violent outbursts. They are caused by the very high radiation pressure at the star's upper layers, which blows significant portions of the matter at the "surface" into space during violent eruptions that may last several years. The last of these outbursts occurred between 1835 and 1855 and peaked in 1843. Despite its comparatively large distance - some 7,500 to 10,000 light-years - Eta Carinae briefly became the second brightest star in the sky at that time (with an apparent magnitude -1), only surpassed by Sirius. Frosty Leo ESO PR Photo 12j/03 ESO PR Photo 12j/03 [Preview - JPEG: 411 x 400 pix - 22k] [Normal - JPEG: 821 x 800 pix - 344k] Caption : PR Photo 12j/03 shows a 5 x 5 arcsec 2 K-band image of the peculiar star known as "Frosty Leo" obtained in 0.7 arcsec seeing. Although the object is comparatively bright (visual magnitude 11), it is a difficult AO target because of its extension of about 3 arcsec at visible wavelengths. The corrected image quality is about FWHM 0.1 arcsec. Frosty Leo is a magnitude 11 (post-AGB) star surrounded by an envelope of gas, dust, and large amounts of ice (hence the name). The associated nebula is of "butterfly" shape (bipolar morphology) and it is one of the best known examples of the brief transitional phase between two late evolutionary stages, the asymptotic giant branch (AGB) and the subsequent planetary nebulae (PNe). For a three-solar-mass object like this one, this phase is believed to last only a few thousand years, the wink of an eye in the life of the star. Hence, objects like this one are very rare and Frosty Leo is one of the nearest and brightest among them.
The Most Remote Gamma-Ray Burst
NASA Astrophysics Data System (ADS)
2000-10-01
ESO Telescopes Observe "Lightning" in the Young Universe Summary Observations with telescopes at the ESO La Silla and Paranal observatories (Chile) have enabled an international team of astronomers [1] to measure the distance of a "gamma-ray burst", an extremely violent, cosmic explosion of still unknown physical origin. It turns out to be the most remote gamma-ray burst ever observed . The exceedingly powerful flash of light from this event was emitted when the Universe was very young, less than about 1,500 million years old, or only 10% of its present age. Travelling with the speed of light (300,000 km/sec) during 11,000 million years or more, the signal finally reached the Earth on January 31, 2000. The brightness of the exploding object was enormous, at least 1,000,000,000,000 times that of our Sun, or thousands of times that of the explosion of a single, heavy star (a "supernova"). The ESO Very Large Telescope (VLT) was also involved in trail-blazing observations of another gamma-ray burst in May 1999, cf. ESO PR 08/99. PR Photo 28a/00 : Sky field near GRB 000131 . PR Photo 28b/00 : The fading optical counterpart of GRB 000131 . PR Photo 28c/00 : VLT spectrum of GRB 000131 . What are Gamma-Ray Bursts? One of the currently most active fields of astrophysics is the study of the mysterious events known as "gamma-ray bursts" . They were first detected in the late 1960's by instruments on orbiting satellites. These short flashes of energetic gamma-rays last from less than a second to several minutes. Despite much effort, it is only within the last few years that it has become possible to locate the sites of some of these events (e.g. with the Beppo-Sax satellite ). Since the beginning of 1997, astronomers have identified about twenty optical sources in the sky that are associated with gamma-ray bursts. They have been found to be situated at extremely large (i.e., "cosmological") distances. This implies that the energy release during a gamma-ray burst within a few seconds is larger than that of the Sun during its entire life time (about 10,000 million years). "Gamma-ray bursts" are in fact by far the most powerful events since the Big Bang that are known in the Universe. While there are indications that gamma-ray bursts originate in star-forming regions within distant galaxies, the nature of such explosions remains a puzzle. Recent observations with large telescopes, e.g. the measurement of the degree of polarization of light from a gamma-ray burst in May 1999 with the VLT ( ESO PR 08/99), are now beginning to cast some light on this long-standing mystery. The afterglow of GRB 000131 ESO PR Photo 28a/00 ESO PR Photo 28a/00 [Preview - JPEG: 400 x 475 pix - 41k] [Normal - JPEG: 800 x 949 pix - 232k] [Full-Res - JPEG: 1200 x 1424 pix - 1.2Mb] ESO PR Photo 28b/00 ESO PR Photo 28b/00 [Preview - JPEG: 400 x 480 pix - 67k] [Normal - JPEG: 800 x 959 pix - 288k] [Full-Res - JPEG: 1200 x 1439 pix - 856k] Caption : PR Photo 28a/00 is a colour composite image of the sky field around the position of the gamma-ray burst GRB 000131 that was detected on January 31, 2000. It is based on images obtained with the ESO Very Large Telescope at Paranal. The object is indicated with an arrow, near a rather bright star (magnitude 9, i.e., over 1 million times brighter than the faintest objects visible on this photo). This and other bright objects in the field are responsible for various unavoidable imaging effects, caused by optical reflections (ring-shaped "ghost images", e.g. 
to the left of the brightest star) and detector saturation effects (horizontal and vertical straight lines and coloured "coronae" at the bright objects, and areas of "bleeding", e.g. below the bright star). PR Photo 28b/00 shows the rapid fading of the optical counterpart of GRB 000131 (slightly left of the centre), by means of exposures with the VLT on February 4 (upper left), 6 (upper right), 8 (lower left) and March 5 (lower right). It is no longer visible on the last photo. Technical information about these photos is available below. A gamma-ray burst was detected on January 31, 2000, by an international network of satellites ( Ulysses , NEAR and Konus ) via the InterPlanetary Network (IPN) [2]. It was designated GRB 000131 according to the date of the event. From geometric triangulation by means of the measured, exact arrival times of the signal at the individual satellites, it was possible to determine the direction from which the burst came. It was found to be from a point within a comparatively small sky area (about 50 arcmin 2 or 1/10 of the apparent size of the Moon), just inside the border of the southern constellation Carina (The Keel). Follow-up observations were undertaken by a group of European astronomers [1] with the ESO Very Large Telescope at the Paranal Observatory. A comparison of several exposures with the FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope during the nights of February 3-4 and 5-6 revealed a faint, point-like object that was fading rapidly - this was identified as the optical counterpart of the gamma-ray burst (the "afterglow"). On the second night, the R-magnitude (brightness) was found to be only 24.4, or 30 million times fainter than visible with the unaided eye in a dark sky. It was also possible to observe it with a camera at the 1.54-m Danish Telescope at the La Silla Observatory , albeit only in a near-infrared band and with a 1-hour exposure. Additional observations were made on February 8 with the SOFI multi-mode instrument at the ESO 3.58-m New Technology Telescope (NTT) at La Silla. The observations were performed partly by the astronomers from the group, partly in "service mode" by ESO staff at La Silla and Paranal. The observations showed that the light from the afterglow was very red, without blue and green light. This indicated a comparatively large distance and, assuming that the light from the explosion would originally have had the same colour (spectral distribution) as that of optical counterparts of other observed gamma-ray bursts, a photometric redshift of 4.35 to 4.70 was deduced [3]. A spectrum of GRB 000131 ESO PR Photo 28c/00 ESO PR Photo 28c/00 [Preview - JPEG: 400 x 332 pix - 22k] [Normal - JPEG: 800 x 663 pix - 62k] Caption : PR Photo 28c/00 shows the spectrum of the afterglow of GRB 000131 , obtained during a 3-hr exposure with the FORS1 multi-mode instrument at VLT ANTU on February 8, 2000. The "Lyman-alpha break" at wavelength 670.1 nm is indicated. Technical information about this photo is available below. An accurate measurement of the redshift - hence the distance - requires spectroscopic observations. A spectrum of GRB 000131 was therefore obtained on February 8, 2000, cf. PR Photo 28c/00 . At this time, the brightness had decreased further and the object had become so faint (R-magnitude 25.3) that a total of 3 hours of exposure time was necessary with VLT ANTU + FORS1 [4]. Still, this spectrum is quite "noisy". 
The deduced photometric redshift of GRB 000131 predicts that a "break" will be seen in the red region of the spectrum, at a wavelength somewhere between 650 and 700 nm. This break is caused by the strong absorption of light in intergalactic hydrogen clouds along the line of sight. The effect is known as the "Lyman-alpha forest" and is observed in all remote objects [5]. As PR Photo 28c/00 shows, such a break was indeed found at wavelength 670.1 nm. Virtually all light at shorter wavelengths from the optical counterpart of GRB 000131 is absorbed by intervening hydrogen clouds. From the rest wavelength of the Lyman-alpha break (121.6 nm), the redshift of GRB 000131 is then determined as 4.50, corresponding to a travel time of more than 90% of the age of the Universe . The most distant gamma-ray burst so far The measured redshift of 4.50 makes GRB 000131 the most distant gamma-ray burst known (the previous, spectroscopically confirmed record was 3.42). Assuming an age of the Universe of the order of 12 - 14,000 million years, the look-back time indicates that the explosion took place around the time our own galaxy, the Milky Way, was formed and at least 6,000 million years before the solar system was born. GRB 000131 and other gamma-ray bursts are believed to have taken place in remote galaxies. However, due to the huge distance, it has not yet been possible to see the galaxy in which the GRB 000131 event took place (the "host" galaxy). From the observed fading of the afterglow it is possible to estimate that the maximum brightness of this explosion was at least 10,000 times brighter than the host galaxy. Future studies of gamma-ray bursts The present team of astronomers has now embarked upon a detailed study of the surroundings of GRB 000131 with the VLT. A main goal is to observe the properties of the host galaxy. From the observations of about twenty optical counterparts of gamma-ray bursts identified until now, it is becoming increasingly clear that these very rare events are somehow related to the death of massive, short-lived stars . But despite the accumulating amount of excellent data, the details of the mechanism that leads to such dramatic explosions still remain a puzzle to astrophysicists. The detection and present follow-up observations of GRB 000131 highlight the new possibilities for studies of the extremely distant (and very early) Universe, now possible by means of gamma-ray bursts. When observed with the powerful instruments at a large ground-based telescope like the VLT, this incredibly bright class of cosmological objects may throw light on the fundamental processes of star formation in the infant universe. Of no less interest is the opportunity to analyse the chemical composition of the gas clouds at the epoch galaxies formed, by means of the imprints of the corresponding absorption lines on the afterglow spectrum. Waiting for the opportunity In this context, it would be extremely desirable to obtain very detailed (high-dispersion) spectra of the afterglow of a future gamma-ray burst, soon after the detection and while it is still sufficiently bright. It would for instance be possible to observe a gamma-ray burst like GRB 000131 with the UVES spectrograph at VLT KUEYEN at the moment of maximum brightness (that may have been about magnitude 16). An example of chemical studies of clouds at intermediate distance by means of a more nearby quasar is shown in ESO PR Photo 09h/00. 
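The redshift quoted here follows directly from the observed and rest wavelengths of the Lyman-alpha break given in the text:
\[
z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{rest}}} - 1
= \frac{670.1\ \mathrm{nm}}{121.6\ \mathrm{nm}} - 1
\approx 4.51,
\]
consistent with the value of 4.50 adopted by the team once measurement uncertainties are taken into account.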
Attempts are therefore now made to shorten considerably the various steps needed to perform such observations. This concerns especially the time needed to identify the counterpart of a gamma-ray burst and - to a lesser extent - the necessary reaction time at the VLT to point UVES towards the object (in theory, a matter of minutes only). The launch of the HETE-2 (High Energy Transient Explorer 2) gamma-ray burst satellite on October 9, 2000, is a major step in this direction. Under optimal conditions, a relative accurate sky position of a gamma-ray burst may henceforth reach the astronomy community within only 10-20 seconds of the first detection by this satellite. More information The research described in this press release is the subject of a scientific article by the team, entitled "VLT Identification of the optical afterglow of the gamma-ray burst GRB 000131 at z = 4.50" ; it will appear in a special VLT-issue (Letters to the Editor) of the European journal Astronomy & Astrophysics (December 1, 2000). The results are being presented today (October 18) at the joint CNR/ESO meeting on "Gamma-Ray Burst in the Afterglow Era" in Rome, Italy. Note also the related article in the ESO Messenger (No. 100, p. 32, June 2000). Notes [1]: The team consists of Michael Andersen (University of Oulu, Finland), Holger Pedersen, Jens Hjorth, Brian Lindgren Jensen, Lisbeth Fogh Olsen, Lise Christensen (University of Copenhagen, Denmark), Leslie Hunt (Centro per l'Astronomia Infrarossa e lo Studio del Mezzo, Florence, Italy), Javier Gorosabel (Danish Space Research Institute, Denmark), Johan Fynbo, Palle Møller (European Southern Observatory), Richard Marc Kippen (University of Alabama in Huntsville and NASA/Marshall Space Flight Center, USA), Bjarne Thomsen (University of Århus, Denmark), Marianne Vestergaard (Ohio State University, USA), Nicola Masetti, Eliana Palazzi (Instituto Tecnologie e Studio Radiazoni Extraterresti, Bologna, Italy) Kevin Hurley (University of California, Berkeley, USA), Thomas Cline (NASA Goddard Space Flight Center, Greenbelt, USA), Lex Kaper (Sterrenkundig Instituut ``Anton Pannekoek", the Netherlands) and Andreas O. Jaunsen (formerly University of Oslo, Norway; now ESO-Paranal). [2]: Detailed reports about the early observations of this gamma-ray burst are available at the dedicated webpage within the GRB Coordinates Network website. [3]: The photometric redshift method makes it possible to judge the distance to a remote celestial object (a galaxy, a quasar, a gamma-ray burst afterglow) from its measured colours. It is based on the proportionality between the distance and the velocity along the line of sight (Hubble's law) that reflects the expansion of the Universe. The larger the distance of an object is, the larger is its velocity and, due to the Doppler effect, the spectral shift of its emission towards longer (redder) wavelengths. Thus, the measured colour provides a rough indication of the distance. Examples of this method are shown in ESO PR 20/98 (Photos 48a/00 and 48e/00). [4]: In fact, the object was so faint that the positioning of the spectrograph slit had to be done in "blind" offset, i.e. without actually seeing the object on the slit during the observation. This very difficult observational feat was possible because of excellent preparations by the team of astronomers and the very good precision of the telescope and instrument. 
[5]: The " Lyman-alpha forest" refers to the crowding of absorption lines from intervening hydrogen clouds, shortward of the strong Lyman-alpha spectral line at rest wavelength 121.6 nm. Good examples in the VLT ANTU + FORS1 spectra of distant quasars are shown in ESO PR Photos 14a-c/99 and, at much higher dispersion, in a spectrum obtained with VLT KUEYEN + UVES, cf. ESO PR 08/00 (Photo 09f/00). Technical information about the photos PR Photo 28a/00 : The photo is based on three 8-min exposures obtained with VLT ANTU and the multi-mode FORS1 instrument. The optical filters were B (seeing 0.9 arcsec; here rendered as blue), V (0.8 arcsec; green) and R (0.7 arcsec; red). The field measures 6.8 x 6.8 arcmin 2. North is up and East is left. PR Photo 28b/00 : The four R-exposures were obtained with VLT ANTU + FORS1 on February 4 (magnitude R = 23.3), 6 (24.4), 8 (25.1) and March 5 (no longer visible). The field measures 48 x 48 arcsec 2. North is up and East is left. PR Photo 28c/00 : The spectrum was obtained during a 3-hr exposure with the FORS1 multi-mode instrument at VLT ANTU on February 8, 2000, when the object's magnitude was only R = 25.3. The mean levels of the spectral continua on either side of the redshifted "Lyman-alpha break" at wavelength 670.1 nm are indicated.
Movement towards transdiagnostic psychotherapeutic practices for the affective disorders.
Gros, Daniel F; Allan, Nicholas P; Szafranski, Derek D
2016-08-01
Evidence-based cognitive behavioural therapy (CBT) practices were first developed in the 1960s. Over the decades, refinements and alternative symptom foci resulted in the development of several CBT protocols/manuals for each of the many disorders, especially the affective disorders. Although shown to be effective when delivered by highly trained providers, the proliferation of CBT protocols has also created challenges for dissemination and implementation efforts, owing to the sheer number of CBT protocols, their training requirements (eg, 6 months per protocol) and their related costs (eg, over US$2000 each; lost days/hours at work). To address these concerns, newer transdiagnostic CBT protocols have been developed to reduce the number of disorder-specific CBT protocols needed to treat patients with affective disorders. Transdiagnostic treatments are based on the notion that various disorder-specific CBT protocols contain important but overlapping treatment components that can be distilled into a single treatment and therefore address the symptoms and comorbidities across all of the disorders at once. Three examples of transdiagnostic treatments are group CBT for anxiety, the unified protocol for transdiagnostic treatment of emotional disorders and transdiagnostic behaviour therapy. Each transdiagnostic protocol is designed for a different set of disorders, contains a different mix of CBT treatment components and has been tested in different types of samples. However, together, these 3 transdiagnostic psychotherapies represent the future of CBT practice. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
The SETI episode in the 1967 discovery of pulsars
NASA Astrophysics Data System (ADS)
Penny, Alan John
2013-09-01
In the winter of 1967, Cambridge radio astronomers discovered a new type of radio source of such an artificial-seeming nature that for a few weeks some members of the group had to seriously consider whether they had discovered an extraterrestrial intelligence. Although their investigations led them to a natural explanation (they had discovered pulsars), they had discussed the implications if it was indeed an artificial source: how to verify such a conclusion and how to announce it, and whether such a discovery might be dangerous. In this they presaged many of the components of the SETI Detection Protocols and the proposed Reply Protocols which have been used to guide the responses of groups dealing with the detection of an extraterrestrial intelligence. These Protocols were only established some twenty-five years later, in the 1990s and 2000s. Using contemporary and near-contemporary documentation and later recollections, this paper discusses in detail what happened that winter.
Allison, Scott A; Sweet, Clifford F; Beall, Douglas P; Lewis, Thomas E; Monroe, Thomas
2005-09-01
The PACS implementation process is complicated, requiring a tremendous amount of time, resources, and planning. The Department of Defense (DOD) has significant experience in developing and refining PACS acceptance testing (AT) protocols that assure contract compliance, clinical safety, and functionality. The DOD's AT experience under the initial Medical Diagnostic Imaging Support System contract led to the current Digital Imaging Network-Picture Archiving and Communications Systems (DIN-PACS) contract AT protocol. To identify the most common system and component deficiencies under the current DIN-PACS AT protocol, 14 tri-service sites were evaluated during 1998-2000. Sixteen system deficiency citations with 154 separate types of limitations were noted, with problems involving the workstation, interfaces, and the Radiology Information System comprising more than 50% of the citations. Larger PACS deployments were associated with a higher number of deficiencies. The most commonly cited system deficiencies involved some of the most expensive components of the PACS.
Bueno, Andressa; Gutierres, Jessié M.; Lhamas, Cibele; Andrade, Cinthia M.
2017-01-01
The aim of this study was to assess if the dose and exposure duration of the anabolic androgenic steroids (AAS) boldenone (BOL) and stanazolol (ST) affected memory, anxiety, and social interaction, as well as acetylcholinesterase (AChE) activity and oxidative stress in the cerebral cortex (CC) and hippocampus (HC). Male Wistar rats (90 animals) were randomly assigned to three treatment protocols: (I) 5 mg/kg BOL or ST, once a week for 4 weeks; (II) 2.5 mg/kg BOL or ST, once a week for 8 weeks; and (III) 1.25 mg/kg BOL or ST, once a week for 12 weeks. Each treatment protocol included a control group that received an olive oil injection (vehicle control) and AAS were administered intramuscularly (a total volume of 0.2 ml) once a week in all three treatment protocols. In the BOL and ST groups, a higher anxiety level was observed only for Protocol I. BOL and ST significantly affected social interaction in all protocols. Memory deficits and increased AChE activity in the CC and HC were found in the BOL groups treated according to Protocol III only. In addition, BOL and ST significantly increased oxidative stress in both the CC and HC in the groups treated according to Protocol I and III. In conclusion, our findings show that the impact of BOL and ST on memory, anxiety, and social interaction depends on the dose and exposure duration of these AAS. PMID:28594925
Bueno, Andressa; Carvalho, Fabiano B; Gutierres, Jessié M; Lhamas, Cibele; Andrade, Cinthia M
2017-01-01
The aim of this study was to assess if the dose and exposure duration of the anabolic androgenic steroids (AAS) boldenone (BOL) and stanazolol (ST) affected memory, anxiety, and social interaction, as well as acetylcholinesterase (AChE) activity and oxidative stress in the cerebral cortex (CC) and hippocampus (HC). Male Wistar rats (90 animals) were randomly assigned to three treatment protocols: (I) 5 mg/kg BOL or ST, once a week for 4 weeks; (II) 2.5 mg/kg BOL or ST, once a week for 8 weeks; and (III) 1.25 mg/kg BOL or ST, once a week for 12 weeks. Each treatment protocol included a control group that received an olive oil injection (vehicle control) and AAS were administered intramuscularly (a total volume of 0.2 ml) once a week in all three treatment protocols. In the BOL and ST groups, a higher anxiety level was observed only for Protocol I. BOL and ST significantly affected social interaction in all protocols. Memory deficits and increased AChE activity in the CC and HC were found in the BOL groups treated according to Protocol III only. In addition, BOL and ST significantly increased oxidative stress in both the CC and HC in the groups treated according to Protocol I and III. In conclusion, our findings show that the impact of BOL and ST on memory, anxiety, and social interaction depends on the dose and exposure duration of these AAS.
Similarity-based modeling in large-scale prediction of drug-drug interactions.
Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P
2014-09-01
Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients' quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5-7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented.
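The core of the protocol, scoring a candidate drug pair by the similarity of its members to drugs already involved in established interactions, can be sketched compactly. The matrices below are tiny synthetic stand-ins for the reference DDI database and for the 2D/3D, target and side-effect similarity measures described above; this is an illustration of the idea, not the published pipeline.

# Sketch of similarity-based DDI prediction: a candidate pair (a, b) is scored
# by how similar a is to known interaction partners of b, and vice versa.
# Data here are synthetic; real use integrates several similarity sources
# against a curated reference DDI database.
import numpy as np

known_ddi = np.array([          # known_ddi[i, j] = 1 if drugs i and j are known to interact
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
], dtype=float)

similarity = np.array([         # drug-drug similarity, e.g. Tanimoto on fingerprints
    [1.0, 0.2, 0.7, 0.1],
    [0.2, 1.0, 0.3, 0.6],
    [0.7, 0.3, 1.0, 0.2],
    [0.1, 0.6, 0.2, 1.0],
])

def ddi_score(a: int, b: int) -> float:
    """Mean of: max similarity of a to partners of b, and of b to partners of a."""
    partners_of_b = known_ddi[b] > 0
    partners_of_a = known_ddi[a] > 0
    score_a = similarity[a, partners_of_b].max() if partners_of_b.any() else 0.0
    score_b = similarity[b, partners_of_a].max() if partners_of_a.any() else 0.0
    return 0.5 * (score_a + score_b)

print(f"candidate interaction score for drugs 3 and 0: {ddi_score(3, 0):.2f}")

Because the score is built from named similarity sources, each prediction remains traceable to the drugs and features that drove it, which is the interpretability property emphasized in the abstract.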
Roguev, Assen; Ryan, Colm J; Xu, Jiewei; Colson, Isabelle; Hartsuiker, Edgar; Krogan, Nevan
2018-02-01
This protocol describes computational analysis of genetic interaction screens, ranging from data capture (plate imaging) to downstream analyses. Plate imaging approaches using both digital camera and office flatbed scanners are included, along with a protocol for the extraction of colony size measurements from the resulting images. A commonly used genetic interaction scoring method, calculation of the S-score, is discussed. These methods require minimal computer skills, but some familiarity with MATLAB and Linux/Unix is a plus. Finally, an outline for using clustering and visualization software for analysis of resulting data sets is provided. © 2018 Cold Spring Harbor Laboratory Press.
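A minimal sketch can illustrate the idea behind the scoring step: comparing observed double-mutant colony sizes against control colony sizes with a t-statistic-like quantity. The formula below is a simplified stand-in for illustration only; the published S-score includes variance moderation and plate normalization steps not reproduced here.

# Simplified illustration of an S-score-like genetic interaction statistic:
# compare colony sizes of a double mutant against control colony sizes.
import numpy as np

def s_score(experimental: np.ndarray, control: np.ndarray) -> float:
    mu_e, mu_c = experimental.mean(), control.mean()
    var_e, var_c = experimental.var(ddof=1), control.var(ddof=1)
    n_e, n_c = len(experimental), len(control)
    # t-like statistic: difference of means over the pooled standard error
    return float((mu_e - mu_c) / np.sqrt(var_e / n_e + var_c / n_c))

# Synthetic colony-size measurements (arbitrary units, e.g. pixel areas from plate images)
control_sizes = np.array([980.0, 1010.0, 995.0, 1005.0])
double_mutant_sizes = np.array([620.0, 640.0, 610.0, 655.0])   # smaller than expected
print(f"S-score-like statistic: {s_score(double_mutant_sizes, control_sizes):.2f}")
# A strongly negative value suggests a synthetic sick/lethal (negative) interaction.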
Kasenda, Benjamin; von Elm, Erik; You, John J.; Tomonaga, Yuki; Saccilotto, Ramon; Amstutz, Alain; Bengough, Theresa; Meerpohl, Joerg J.; Stegert, Mihaela; Olu, Kelechi K.; Tikkinen, Kari A. O.; Neumann, Ignacio; Carrasco-Labra, Alonso; Faulhaber, Markus; Mulla, Sohail M.; Mertz, Dominik; Akl, Elie A.; Bassler, Dirk; Busse, Jason W.; Nordmann, Alain; Gloy, Viktoria; Ebrahim, Shanil; Schandelmaier, Stefan; Sun, Xin; Vandvik, Per O.; Johnston, Bradley C.; Walter, Martin A.; Burnand, Bernard; Hemkens, Lars G.; Bucher, Heiner C.; Guyatt, Gordon H.; Briel, Matthias
2016-01-01
Background Little is known about publication agreements between industry and academic investigators in trial protocols and the consistency of these agreements with corresponding statements in publications. We aimed to investigate (i) the existence and types of publication agreements in trial protocols, (ii) the completeness and consistency of the reporting of these agreements in subsequent publications, and (iii) the frequency of co-authorship by industry employees. Methods and Findings We used a retrospective cohort of randomized clinical trials (RCTs) based on archived protocols approved by six research ethics committees between 13 January 2000 and 25 November 2003. Only RCTs with industry involvement were eligible. We investigated the documentation of publication agreements in RCT protocols and statements in corresponding journal publications. Of 647 eligible RCT protocols, 456 (70.5%) mentioned an agreement regarding publication of results. Of these 456, 393 (86.2%) documented an industry partner’s right to disapprove or at least review proposed manuscripts; 39 (8.6%) agreements were without constraints of publication. The remaining 24 (5.3%) protocols referred to separate agreement documents not accessible to us. Of those 432 protocols with an accessible publication agreement, 268 (62.0%) trials were published. Most agreements documented in the protocol were not reported in the subsequent publication (197/268 [73.5%]). Of 71 agreements reported in publications, 52 (73.2%) were concordant with those documented in the protocol. In 14 of 37 (37.8%) publications in which statements suggested unrestricted publication rights, at least one co-author was an industry employee. In 25 protocol-publication pairs, author statements in publications suggested no constraints, but 18 corresponding protocols documented restricting agreements. Conclusions Publication agreements constraining academic authors’ independence are common. Journal articles seldom report on publication agreements, and, if they do, statements can be discrepant with the trial protocol. PMID:27352244
Refinement of Protocols for Measuring the Apparent Optical Properties of Seawater. Chapter 8
NASA Technical Reports Server (NTRS)
Hooker, Stanford B.; Zibordi, Giuseppe; Berthon, Jean-Francois; Nirek, Andre; Antoine, David
2003-01-01
Ocean color satellite missions, like the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) or the Moderate Resolution Imaging Spectroradiometer (MODIS) projects, are tasked with acquiring a global ocean color data set, validating and monitoring the accuracy and quality of the data, processing the radiometric data into geophysical units using a set of atmospheric and bio-optical algorithms, and distributing the final products to the scientific community. The long-standing requirement of the SeaWiFS Project, for example, is to produce spectral water-leaving radiances, LW(lambda), to within 5% absolute (lambda denotes wavelength) and chlorophyll a concentrations to within 35% (Hooker and Esaias 1993), and most ocean color sensors have the same or similar requirements. Although a diverse set of activities is required to ensure the accuracy requirements are met (Hooker and McClain 2000), the perspective here is that of field observations. The accurate determination of upper ocean apparent optical properties (AOPs) is essential for the vicarious calibration of ocean color data and the validation of the derived data products, because the sea-truth measurements are used to evaluate the satellite observations (Hooker and McClain 2000). The uncertainties in in situ AOP measurements have various sources: a) the sampling procedures used in the field, including the environmental conditions encountered; b) the absolute characterization of the radiometers in the laboratory; c) the conversion of the light signals to geophysical units in a processing scheme; and d) the stability of the radiometers in the harsh environment they are subjected to during transport and use. Assuming ideal environmental conditions, so this aspect can be neglected, the SeaWiFS ground-truth uncertainty budget can only be satisfied if each uncertainty is on the order of 1-2%, or what is generally referred to as 1% radiometry. In recent years, progress has been made in estimating the magnitude of some of these uncertainties and in defining procedures for minimizing them. For the SeaWiFS Project, the first step was to convene a workshop to draft the SeaWiFS Ocean Optics Protocols (hereafter referred to as the Protocols). The Protocols initially adhered to the Joint Global Ocean Flux Study (JGOFS) sampling procedures (JGOFS 1991) and defined the standards for optical measurements to be used in SeaWiFS calibration and validation activities (Mueller and Austin 1992). Over time, the Protocols were revised (Mueller and Austin 1995), and then updated on essentially an annual basis (Mueller 2000, 2002, and 2003) as part of the Sensor Inter-comparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) project.
A data seamless interaction scheme between electric power secondary business systems
NASA Astrophysics Data System (ADS)
Ai, Wenkai; Qian, Feng
2018-03-01
At present, the volume of data interaction among electric power secondary business systems is very high, yet the programs developed for this interaction are not universal across systems from different manufacturers. Each manufacturer's electric power secondary business system uses a different interaction scheme, which leads to high development cost, low reusability and high maintenance difficulty. This paper introduces a new scheme for seamless data interaction between electric power secondary business systems. The scheme adopts the internationally common Java Message Service protocol as the transmission protocol and the common JavaScript Object Notation (JSON) format as the data interchange format. This unifies the way electric power secondary business systems exchange data, improves reusability, reduces complexity, and lays a solid foundation for monitoring the operation of electric power secondary business systems.
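As a rough illustration of the data-interchange side of such a scheme, the sketch below encodes a measurement record from one secondary business system as JSON and decodes it on the receiving side. The field names and the in-process queue are assumptions made for demonstration; the transport described in the paper is the Java Message Service, which is represented here only by a stand-in queue.

    import json
    import queue
    from datetime import datetime, timezone

    bus = queue.Queue()   # stand-in for a JMS topic/queue provided by a message broker

    def publish_measurement(system_id, point, value, unit):
        """Encode a measurement from a secondary business system as JSON and publish it."""
        message = {
            "source_system": system_id,
            "point": point,
            "value": value,
            "unit": unit,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        bus.put(json.dumps(message))

    def consume_measurement():
        """Receive one JSON message and decode it back into a Python dict."""
        return json.loads(bus.get())

    publish_measurement("SCADA-A", "busbar_voltage_110kV", 113.2, "kV")
    print(consume_measurement())

Because every participating system reads and writes the same self-describing JSON records, a new manufacturer only needs to implement this one format rather than a bespoke interface for each peer system.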
2011-04-11
the Naval Health Research Center (protocol NHRC.2000.0007). Data Sources In addition to our longitudinal survey instrument, other data sources...megavitamin therapy, homeopathic remedies, hypnosis, massage therapy, relaxation, and spiritual healing. For the purposes of these analyses...acupuncture, biofeedback, chiropractic care, energy healing, folk medicine, hypnosis, and massage therapy were grouped together as practitioner-assisted
SURVIVAL OF CAPTIVE-REARED PUERTO RICAN PARROTS RELEASED IN THE CARIBBEAN NATIONAL FOREST
THOMAS H. WHITE; JAIME A. COLLAZO; FRANCISCO J. VILELLA
2005-01-01
We report first-year survival for 34 captive-reared Puerto Rican Parrots (Amazona vittata) released in the Caribbean National Forest, Puerto Rico, between 2000 and 2002. The purpose of the releases was to increase population size and the potential number of breeding individuals of the sole extant wild population, and to refine release protocols for eventual...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-22
... preparation of an Environmental Assessment (EA) or Environmental Impact Statement (EIS), absent extraordinary..., 972-73 (S.D. Ill. 1999), aff'd, 230 F.3d 947, 954-55 (7th Cir. 2000). For the reasons set out in the... of actions are categorically excluded and do not require the preparation of an EA or EIS because they...
2015-12-24
Signal to Noise Ratio SPICE Simulation Program with Integrated Circuit Emphasis TIFF Tagged Image File Format USC University of Southern California...sources can create errors in digital circuits. These effects can be simulated using Simulation Program with Integrated Circuit Emphasis (SPICE) or...compute summary statistics. 4.1 Circuit Simulations Noisy analog circuits can be simulated in SPICE or Cadence Spectre software via noisy voltage
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
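A minimal sketch of the first stage described above, energy compaction of a texture by a 2-D wavelet packet transform followed by keeping only the largest coefficients, is given below using the PyWavelets package. The wavelet choice, decomposition depth and retention fraction are assumptions, and no embedded (bit-plane) coder is implemented here.

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    texture = rng.standard_normal((64, 64))   # stand-in for a texture image

    # Full 2-D wavelet packet decomposition to a fixed depth.
    wp = pywt.WaveletPacket2D(data=texture, wavelet="db2", mode="symmetric", maxlevel=2)
    nodes = wp.get_level(2)

    # Keep only the largest 10% of coefficients (a crude stand-in for embedded coding).
    all_coeffs = np.concatenate([node.data.ravel() for node in nodes])
    threshold = np.quantile(np.abs(all_coeffs), 0.90)
    for node in nodes:
        node.data = np.where(np.abs(node.data) >= threshold, node.data, 0.0)

    approx = wp.reconstruct(update=False)[: texture.shape[0], : texture.shape[1]]
    print("kept coefficients:", int(np.sum(np.abs(all_coeffs) >= threshold)))
    print("RMS error:", float(np.sqrt(np.mean((texture - approx) ** 2))))

An embedded coder would instead transmit the retained coefficients bit-plane by bit-plane so that the bitstream can be truncated at any rate, which is the property the proposed scheme exploits.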
ImageJ: Image processing and analysis in Java
NASA Astrophysics Data System (ADS)
Rasband, W. S.
2012-06-01
ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.
NASA Technical Reports Server (NTRS)
Linares, Irving; Mersereau, Russell M.; Smith, Mark J. T.
1994-01-01
Two representative sample images of Band 4 of the Landsat Thematic Mapper are compressed with the JPEG algorithm at 8:1, 16:1 and 24:1 compression ratios for experimental browsing purposes. We then apply the Optimal PSNR Estimated Spectra Adaptive Postfiltering (ESAP) algorithm to reduce the DCT blocking distortion. ESAP reduces the blocking distortion while preserving most of the image's edge information by adaptively postfiltering the decoded image using the block's spectral information, which is already obtainable from each block's DCT coefficients. The algorithm iteratively applies a one-dimensional log-sigmoid weighting function to the separable interpolated local block estimated spectra of the decoded image until it converges to the optimal PSNR with respect to the original, using a 2-D steepest ascent search. Convergence is obtained in a few iterations for integer parameters. The optimal log-sigmoid parameters are transmitted to the decoder as a negligible amount of overhead data. A unique maximum is guaranteed due to the 2-D asymptotic exponential overshoot shape of the surface generated by the algorithm. ESAP is based on a DFT analysis of the DCT basis functions. It is implemented with pixel-by-pixel spatially adaptive separable FIR postfilters. PSNR objective improvements between 0.4 and 0.8 dB are shown together with their corresponding optimal PSNR adaptive postfiltered images.
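To make the central ingredient concrete, the sketch below applies a separable one-dimensional log-sigmoid (logistic) weighting to each block's DCT spectrum and searches a small grid of integer parameters for the best PSNR. This is only a schematic of the weighting idea; it is not the ESAP algorithm, which works on interpolated local spectrum estimates, separable FIR postfilters and a steepest-ascent search. The block size, parameter grid and weighting form are assumptions.

    import numpy as np
    from scipy.fft import dctn, idctn

    def logsig_weights(a, b, size=8):
        """Separable 2-D mask built from a 1-D logistic roll-off over DCT frequency index."""
        k = np.arange(size)
        w = 1.0 / (1.0 + np.exp((k - a) / b))
        return np.outer(w, w)

    def postfilter(decoded, a, b, block=8):
        """Weight each block's DCT coefficients by the logistic mask and invert."""
        out = np.empty(decoded.shape, dtype=float)
        mask = logsig_weights(a, b, block)
        for i in range(0, decoded.shape[0], block):
            for j in range(0, decoded.shape[1], block):
                blk = decoded[i:i + block, j:j + block].astype(float)
                m = mask[: blk.shape[0], : blk.shape[1]]
                out[i:i + block, j:j + block] = idctn(dctn(blk, norm="ortho") * m, norm="ortho")
        return out

    def psnr(ref, img):
        mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
        return 10.0 * np.log10(255.0 ** 2 / mse)

    def best_params(original, decoded, a_range=range(2, 8), b_range=range(1, 5)):
        """Exhaustive integer-parameter search standing in for the paper's steepest ascent."""
        return max((psnr(original, postfilter(decoded, a, b)), a, b)
                   for a in a_range for b in b_range)

Only the two chosen integers would need to be sent to the decoder, which mirrors the negligible overhead reported for the optimal log-sigmoid parameters.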
Digital image modification detection using color information and its histograms.
Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na
2016-09-01
The rapid development of many open source and commercial image editing software packages makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve robustness against the use of JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention to conceal tampering and reduce tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.
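A much-simplified sketch of the block-based pipeline is shown below: overlapping fixed-size blocks, per-block color moments, k-means grouping by color distribution, and flagging of near-identical feature vectors within a cluster. The five additional descriptors and the ensemble of compositional pattern-producing networks used in the paper are not reproduced, and the block size, stride, cluster count and matching threshold are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def block_color_moments(image, block=16, stride=8):
        """Mean and standard deviation per RGB channel for each overlapping block."""
        feats, coords = [], []
        h, w, _ = image.shape
        for y in range(0, h - block + 1, stride):
            for x in range(0, w - block + 1, stride):
                patch = image[y:y + block, x:x + block].reshape(-1, 3).astype(float)
                feats.append(np.concatenate([patch.mean(axis=0), patch.std(axis=0)]))
                coords.append((y, x))
        return np.array(feats), coords

    def candidate_duplicates(image, n_clusters=8, threshold=1.0, min_offset=16):
        """Flag pairs of spatially separated blocks with near-identical color moments."""
        feats, coords = block_color_moments(image)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
        pairs = []
        for c in range(n_clusters):
            idx = np.flatnonzero(labels == c)
            for a in range(len(idx)):
                for b in range(a + 1, len(idx)):
                    i, j = idx[a], idx[b]
                    shift = np.hypot(coords[i][0] - coords[j][0], coords[i][1] - coords[j][1])
                    if shift >= min_offset and np.linalg.norm(feats[i] - feats[j]) < threshold:
                        pairs.append((coords[i], coords[j]))
        return pairs

Restricting the pairwise comparison to blocks inside the same color cluster keeps the search tractable, since copied and moved regions share a color distribution by construction.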
Observer performance assessment of JPEG-compressed high-resolution chest images
NASA Astrophysics Data System (ADS)
Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David
1999-05-01
The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.
About a method for compressing x-ray computed microtomography data
NASA Astrophysics Data System (ADS)
Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš
2018-04-01
The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. Such a technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy); the study uses images acquired from various types of samples. This study covers parallel beam geometry, but it could be easily extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears as a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
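A small sketch of the kind of workflow discussed, compressing a single projection with JPEG XR and storing the byte stream alongside its metadata in HDF5, is given below. It assumes that the imagecodecs package exposes jpegxr_encode and jpegxr_decode; the file and dataset names and the synthetic 16-bit projection are placeholders.

    import numpy as np
    import h5py
    import imagecodecs   # assumed to provide jpegxr_encode / jpegxr_decode

    def store_projection_jpegxr(h5_path, name, projection):
        """Compress one projection with JPEG XR and store the byte stream in HDF5."""
        encoded = np.frombuffer(imagecodecs.jpegxr_encode(projection), dtype=np.uint8)
        with h5py.File(h5_path, "a") as f:
            dset = f.create_dataset(name, data=encoded)
            dset.attrs["codec"] = "jpegxr"
            dset.attrs["shape"] = projection.shape
            dset.attrs["dtype"] = str(projection.dtype)
        return len(encoded) / projection.nbytes   # compressed/raw size ratio

    def load_projection_jpegxr(h5_path, name):
        with h5py.File(h5_path, "r") as f:
            return imagecodecs.jpegxr_decode(bytes(f[name][...]))

    proj = (np.random.default_rng(0).random((256, 256)) * 65535).astype(np.uint16)   # synthetic projection
    print("compressed/raw size:", store_projection_jpegxr("scan.h5", "proj_0000", proj))

In a systematic evaluation of the sort the paper proposes, this size ratio would be tabulated against reconstruction quality for each candidate codec and container format.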
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner. Characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines quantization steps for the default JPEG table is adaptively set (selected). Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image under quite general assumptions concerning the characteristic parameters of the transformations an image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode. However, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large number of real-life color images acquired by digital cameras and are shown to provide a more than two-fold increase in average CR compared to SHQ mode without introducing visible distortions with respect to SHQ-compressed images.
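The scaling-factor idea can be illustrated by scaling the default JPEG luminance quantization table (Annex K of the JPEG standard) according to a blind noise estimate. The mapping from estimated noise to scale factor below is an invented placeholder rather than the paper's rule, and the use of Pillow's qtables argument assumes the installed Pillow build honours custom tables supplied in row-major order.

    import numpy as np
    from PIL import Image

    # Default JPEG luminance quantization table (JPEG standard, Annex K), row-major order.
    BASE_QTABLE = np.array([
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103, 99],
    ])

    def estimate_noise_sigma(gray):
        """Very rough blind noise estimate from horizontal first differences (MAD-based)."""
        diffs = np.diff(gray.astype(float), axis=1)
        return 1.4826 * np.median(np.abs(diffs - np.median(diffs))) / np.sqrt(2)

    def save_noise_adaptive_jpeg(in_path, out_path):
        img = Image.open(in_path).convert("L")
        sigma = estimate_noise_sigma(np.asarray(img))
        scale = 1.0 + sigma / 4.0          # placeholder rule: noisier images get coarser steps
        table = np.clip(np.round(BASE_QTABLE * scale), 1, 255).astype(int)
        img.save(out_path, format="JPEG", qtables=[table.flatten().tolist()], subsampling=0)
        return sigma, scale

    print(save_noise_adaptive_jpeg("input.png", "adaptive.jpg"))   # placeholder file names

The rationale matches the abstract: quantization noise that stays below the sensor noise already present in the image is unlikely to add visible distortion, so noisier images can be compressed harder.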
Digital storage and analysis of color Doppler echocardiograms
NASA Technical Reports Server (NTRS)
Chandra, S.; Thomas, J. D.
1997-01-01
Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as drawbacks of trying to retrieve velocities from video tape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of information (mostly redundant); this is critical for digital storage because of the enormous amount of data generated (a 10-minute study could require 18 gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing the proximal convergence zones, mitral inflow, and some areas of future development are also discussed in the article.
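For readers handling such studies programmatically, the snippet below shows one generic way to inspect whether a DICOM file uses a lossy transfer syntax and to read its frames with pydicom; the file name is a placeholder, and the lossy-compression attributes are optional in real files, so they are read defensively.

    import pydicom

    def inspect_doppler_file(path):
        """Report transfer syntax and lossy-compression metadata for a DICOM file."""
        ds = pydicom.dcmread(path)
        info = {
            "transfer_syntax": ds.file_meta.TransferSyntaxUID.name,
            "lossy": ds.get("LossyImageCompression", "unknown"),               # (0028,2110)
            "compression_ratio": ds.get("LossyImageCompressionRatio", "n/a"),  # (0028,2112)
            "frames": int(ds.get("NumberOfFrames", 1)),
        }
        pixels = ds.pixel_array   # decoding compressed frames needs an installed pixel-data handler
        info["frame_shape"] = pixels.shape
        return info

    print(inspect_doppler_file("doppler_study.dcm"))   # placeholder path

Checking these attributes before any quantitative velocity analysis makes it explicit whether the data have already passed through a lossy stage such as the 27:1 JPEG compression evaluated here.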
Suraniti, Emmanuel; Studer, Vincent; Sojic, Neso; Mano, Nicolas
2011-04-01
Immobilization and electrical wiring of enzymes is of particular importance for the elaboration of efficient biosensors and can be cumbersome. Here, we report a fast and easy protocol for enzyme immobilization, and as a proof of concept, we applied it to the immobilization of bilirubin oxidase, a labile enzyme. In the first step, bilirubin oxidase is mixed with a redox hydrogel "wiring" the enzyme reaction centers to electrodes. Then, this adduct is covered by an outer layer of PEGDA made by photoinitiated polymerization of poly(ethylene glycol) diacrylate (PEGDA) and a photocleavable precursor, DAROCUR. This two-step protocol is 18 times faster than the current state-of-the-art protocol and leads to currents 25% higher. In addition, the outer layer of PEGDA acts as a protective layer, increasing the lifetime of the electrode by 100% when operating continuously for 2000 s and by 60% when kept in the dry state for 24 h. This new protocol is particularly appropriate for labile enzymes that quickly denature. In addition, by tuning the PEGDA/DAROCUR ratio, it is possible to make the enzyme electrodes even more active or more stable.
Handa, Rajash K; Bailey, Michael R; Paun, Marla; Gao, Sujuan; Connors, Bret A; Willis, Lynn R; Evan, Andrew P
2009-05-01
To test the hypothesis that the pretreatment of the kidney with low-energy shock waves (SWs) will induce renal vasoconstriction sooner than a standard clinical dose of high-energy SWs, thus providing a potential mechanism by which the pretreatment SW lithotripsy (SWL) protocol reduces tissue injury. Female farm pigs (6-weeks-old) were anaesthetized with isoflurane and the lower pole of the right kidney treated with SWs using a conventional electrohydraulic lithotripter (HM3, Dornier GmbH, Germany). Pulsed Doppler ultrasonography was used to measure renal resistive index (RI) in blood vessels as a measure of resistance/impedance to blood flow. RI was recorded from one intralobar artery located in the targeted pole of the kidney, and measurements taken from pigs given sham SW treatment (Group 1; no SWs, four pigs), a standard clinical dose of high-energy SWs (Group 2; 2000 SWs, 24 kV, 120 SWs/min, seven pigs), low-energy SW pretreatment followed by high-energy SWL (Group 3; 500 SWs, 12 kV, 120 SWs/min + 2000 SWs, 24 kV, 120 SWs/min, eight pigs) and low-energy SW pretreatment alone (Group 4; 500 SWs, 12 kV, 120 SWs/min, six pigs). Baseline RI (approximately 0.61) was similar for all groups. Pigs receiving sham SW treatment (Group 1) had no significant change in RI. A standard clinical dose of high-energy SWs (Group 2) did not significantly alter RI during treatment, but did increase RI at 45 min after SWL. Low-energy SWs did not alter RI in Group 3 pigs, but subsequent treatment with a standard clinical dose of high-energy SWs resulted in a significantly earlier (at 1000 SWs) and greater (two-fold) rise in RI than that in Group 2 pigs. This rise in RI during the low/high-energy SWL protocol was not due to a delayed vasoconstrictor response of pretreatment, as low-energy SW treatment alone (Group 4) did not increase RI until 65 min after SWL. The pretreatment protocol induces renal vasoconstriction during the period of SW application whereas the standard protocol shows vasoconstriction occurring after SWL. Thus, the earlier and greater rise in RI during the pretreatment protocol may be causally associated with a reduction in tissue injury.
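The resistive index used in this and the companion report is conventionally computed from the Doppler waveform as (peak systolic velocity - end-diastolic velocity) / peak systolic velocity. The small helper below applies that standard definition to made-up velocities.

    def resistive_index(peak_systolic, end_diastolic):
        """Standard Doppler resistive index: (PSV - EDV) / PSV."""
        if peak_systolic <= 0:
            raise ValueError("peak systolic velocity must be positive")
        return (peak_systolic - end_diastolic) / peak_systolic

    # Made-up intralobar artery velocities (cm/s)
    print(round(resistive_index(40.0, 15.6), 2))   # 0.61, close to the reported baseline
    print(round(resistive_index(40.0, 8.0), 2))    # 0.80, i.e., higher resistance to flow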
“Counterfactual” quantum protocols
NASA Astrophysics Data System (ADS)
Vaidman, L.
2016-05-01
The counterfactuality of recently proposed protocols is analyzed. A definition of “counterfactuality” is offered and it is argued that an interaction-free measurement (IFM) of the presence of an opaque object can be named “counterfactual”, while proposed “counterfactual” measurements of the absence of such objects are not counterfactual. The quantum key distribution protocols which rely only on measurements of the presence of the object are counterfactual, but quantum direct communication protocols are not. Therefore, the name “counterfactual” is not appropriate for recent “counterfactual” protocols which transfer quantum states by quantum direct communication.
Practical quantum appointment scheduling
NASA Astrophysics Data System (ADS)
Touchette, Dave; Lovitz, Benjamin; Lütkenhaus, Norbert
2018-04-01
We propose a protocol based on coherent states and linear optics operations for solving the appointment-scheduling problem. Our main protocol leaks strictly less information about each party's input than the optimal classical protocol, even when considering experimental errors. Along with the ability to generate constant-amplitude coherent states over two modes, this protocol requires the ability to transfer these modes back-and-forth between the two parties multiple times with very low losses. The implementation requirements are thus still challenging. Along the way, we develop tools to study quantum information cost of interactive protocols in the finite regime.
Cyclooxygenase inhibition does not alter methacholine-induced sweating
Fujii, Naoto; McGinn, Ryan; Paull, Gabrielle; Stapleton, Jill M.; Meade, Robert D.
2014-01-01
Cholinergic agents (e.g., methacholine) induce cutaneous vasodilation and sweating. Reports indicate that either nitric oxide (NO), cyclooxygenase (COX), or both can contribute to cholinergic cutaneous vasodilation. Also, NO is reportedly involved in cholinergic sweating; however, whether COX contributes to cholinergic sweating is unclear. Forearm sweat rate (ventilated capsule) and cutaneous vascular conductance (CVC, laser-Doppler perfusion units/mean arterial pressure) were evaluated in 10 healthy young (24 ± 4 yr) adults (7 men, 3 women) at four skin sites that were continuously perfused via intradermal microdialysis with 1) lactated Ringer (control), 2) 10 mM ketorolac (a nonselective COX inhibitor), 3) 10 mM NG-nitro-l-arginine methyl ester (l-NAME, a nonselective NO synthase inhibitor), or 4) a combination of 10 mM ketorolac + 10 mM l-NAME. At the four skin sites, methacholine was simultaneously infused in a dose-dependent manner (1, 10, 100, 1,000, 2,000 mM). Relative to the control site, forearm CVC was not influenced by ketorolac throughout the protocol (all P > 0.05), whereas l-NAME and ketorolac + l-NAME reduced forearm CVC at and above 10 mM methacholine (all P < 0.05). Conversely, there was no main effect of treatment site (P = 0.488) and no interaction of methacholine dose and treatment site (P = 0.711) on forearm sweating. Thus forearm sweating (in mg·min−1·cm−2) from baseline up to the maximal dose of methacholine was not different between the four sites (at 2,000 mM, control 0.50 ± 0.23, ketorolac 0.44 ± 0.23, l-NAME 0.51 ± 0.22, and ketorolac + l-NAME 0.51 ± 0.23). We show that both NO synthase and COX inhibition do not influence cholinergic sweating induced by 1–2,000 mM methacholine. PMID:25213633
Gender stratification in management. The World Health Organization 2000.
Brännström, Inger A
2004-01-01
The World Health Organization (WHO) is a global organization that has now integrated gender issues into its policy, programmes and budget. What, then, is the state of affairs regarding gender equity in the ultimate governing bodies of the modern WHO? This study aims to assess the representation of women and men and their promotion within the supreme decision-making bodies of the WHO during the year 2000. The information sources used are the official and confirmed protocols of the 53rd World Health Assembly (WHA) in 2000 and of the two Executive Board (EB) meetings of the corresponding year. A descriptive quantitative content analysis approach is used exclusively. The present study demonstrates a strikingly skewed gender distribution, with men substantially at a numerical advantage in the prominent positions at the WHA 2000. Additionally, men also hold an advantage in terms of being promoted to leading positions within the bodies examined, notably all upgraded chairs of the EB during 2000. However, the formerly male-dominated supervisory positions of the WHO are, these days, challenged by women having been elected at the very top of the WHO. The present study stresses the need to elaborate a qualitative research design to advance the understanding of the social construction of gender in supreme governing positions of the modern WHO.
Motwani, Manoj
2017-01-01
Purpose To demonstrate how higher-order corneal aberrations can cancel out, modify, or induce lower-order corneal astigmatism. Patients and methods Six representative eyes are presented that show different scenarios in which higher-order aberrations interacting with corneal astigmatism can affect the manifest refraction. WaveLight® Contoura ablation maps showing the higher-order aberrations are shown, as are results of correction with full measured correction using the LYRA (Layer Yolked Reduction of Astigmatism) Protocol. Results Higher-order corneal aberrations such as trefoil, quadrafoil, and coma can create ovalization of the central cornea, which can interact with the ovalization caused by lower-order astigmatism to either induce, cancel out, or modify the manifest refraction. Contoura processing successfully determines the linkage of these interactions resulting in full astigmatism removal. Purely lenticular astigmatism appears to be rare, but a case is also demonstrated. The author theorizes that all aberrations require cerebral compensatory processing and can be removed, supported by the facts that full removal of aberrations and its linkage with lower-order astigmatism with the LYRA Protocol has not resulted in worse or unacceptable vision for any patients. Conclusion Higher-order aberrations interacting with lower-order astigmatism is the main reason for the differences between manifest refraction and Contoura measured astigmatism, and the linkage between these interactions can be successfully treated using Contoura and the LYRA Protocol. Lenticular astigmatism is relatively rare. PMID:28553069
Motwani, Manoj
2017-01-01
To demonstrate how higher-order corneal aberrations can cancel out, modify, or induce lower-order corneal astigmatism. Six representative eyes are presented that show different scenarios in which higher-order aberrations interacting with corneal astigmatism can affect the manifest refraction. WaveLight ® Contoura ablation maps showing the higher-order aberrations are shown, as are results of correction with full measured correction using the LYRA (Layer Yolked Reduction of Astigmatism) Protocol. Higher-order corneal aberrations such as trefoil, quadrafoil, and coma can create ovalization of the central cornea, which can interact with the ovalization caused by lower-order astigmatism to either induce, cancel out, or modify the manifest refraction. Contoura processing successfully determines the linkage of these interactions resulting in full astigmatism removal. Purely lenticular astigmatism appears to be rare, but a case is also demonstrated. The author theorizes that all aberrations require cerebral compensatory processing and can be removed, supported by the facts that full removal of aberrations and its linkage with lower-order astigmatism with the LYRA Protocol has not resulted in worse or unacceptable vision for any patients. Higher-order aberrations interacting with lower-order astigmatism is the main reason for the differences between manifest refraction and Contoura measured astigmatism, and the linkage between these interactions can be successfully treated using Contoura and the LYRA Protocol. Lenticular astigmatism is relatively rare.
Journal of Chemical Education on CD-ROM, 1999
NASA Astrophysics Data System (ADS)
1999-12-01
The Journal of Chemical Education on CD-ROM contains the text and graphics for all the articles, features, and reviews published in the Journal of Chemical Education. This 1999 issue of the JCE CD series includes all twelve issues of 1999, as well as all twelve issues from 1998 and from 1997, and the September-December issues from 1996. Journal of Chemical Education on CD-ROM is formatted so that all articles on the CD retain as much as possible of their original appearance. Each article file begins with an abstract/keyword page followed by the article pages. All pages of the Journal that contain editorial content, including the front covers, table of contents, letters, and reviews, are included. Also included are abstracts (when available), keywords for all articles, and supplementary materials. The Journal of Chemical Education on CD-ROM has proven to be a useful tool for chemical educators. Like the Computerized Index to the Journal of Chemical Education (1) it will help you to locate articles on a particular topic or written by a particular author. In addition, having the complete article on the CD-ROM provides added convenience. It is no longer necessary to go to the library, locate the Journal issue, and read it while sitting in an uncomfortable chair. With a few clicks of the mouse, you can scan an article on your computer monitor, print it if it proves interesting, and read it in any setting you choose. Searching and Linking JCE CD is fully searchable for any word, partial word, or phrase. Successful searches produce a listing of articles that contain the requested text. Individual articles can be quickly accessed from this list. The Table of Contents of each issue is linked to individual articles listed. There are also links from the articles to any supplementary materials. References in the Chemical Education Today section (found in the front of each issue) to articles elsewhere in the issue are also linked to the article, as are WWW addresses and email addresses. If you have Internet access and a WWW browser and email utility, you can go directly to the Web site or prepare to send a message with a single mouse click.
Full-text searching of the entire CD enables you to find the articles you want. Price and Ordering An order form is inserted in this issue that provides prices and other ordering information. If this insert is not available or if you need additional information, contact: JCE Software, University of Wisconsin-Madison, 1101 University Avenue, Madison, WI 53706-1396; phone: 608/262-5153 or 800/991-5534; fax: 608/265-8094; email: jcesoft@chem.wisc.edu. Information about all our publications (including abstracts, descriptions, updates) is available from our World Wide Web site at: http://jchemed.chem.wisc.edu/JCESoft/. Hardware and Software Requirements Hardware and software requirements for JCE CD 1999 are listed in the table below:
Literature Cited 1. Schatz, P. F. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-M. Schatz, P. F.; Jacobsen, J. J. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-W.
Expansion of U.S. emergency medical service routing for stroke care: 2000-2010.
Hanks, Natalie; Wen, Ge; He, Shuhan; Song, Sarah; Saver, Jeffrey L; Cen, Steven; Kim-Tenser, May; Mack, William; Sanossian, Nerses
2014-07-01
Organized stroke systems of care include preferential emergency medical services (EMS) routing to deliver suspected stroke patients to designated hospitals. To characterize the growth and implementation of EMS routing of stroke nationwide, we describe the proportion of stroke hospitalizations in the United States (U.S.) occurring within regions having adopted these protocols. We collected data on ischemic stroke using International Classification of Diseases-9 (ICD-9) coding from the Healthcare Cost and Utilization Project Nationwide Inpatient Sample (NIS) database from the years 2000-2010. The NIS contains all discharge data from 1,051 hospitals located in 45 states, approximating a 20% stratified sample. We obtained data on EMS systems of care from a review of archives, reports, and interviews with state emergency medical services (EMS) officials. A county or state was considered to be in transition if the protocol was adopted in the calendar year, with establishment in the year following transition. Nationwide, stroke hospitalizations remained constant over the course of the study period: 583,000 in 2000 and 573,000 in 2010. From 2000-2003 there were no states or counties participating in the NIS with EMS systems of care. The proportion of U.S. stroke hospitalizations occurring in jurisdictions with established EMS regional systems of acute stroke care increased steadily from 2004 to 2010 (1%, 13%, 28%, 30%, 30%, 34%, 49%). In 2010, 278,538 stroke hospitalizations, 49% of all U.S. stroke hospitalizations, occurred in areas with established EMS routing, with an additional 18,979 (3%) patients in regions undergoing a transition to EMS routing. In 2010, a majority of stroke patients in the U.S. were hospitalized in states with established or transitioning to organized stroke systems of care. This milestone coverage of half the U.S. population is a major advance in systematic stroke care and emphasizes the need for novel approaches to further extend access to stroke center care to all patients.
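The yearly tally described above can be reproduced schematically with a table of hospitalization counts per routing status; the column names and the 2009 figures in the sketch below are illustrative assumptions, while the 2010 established and transition counts echo the numbers reported in the abstract.

    import pandas as pd

    df = pd.DataFrame({
        "year":                    [2009, 2009, 2010, 2010, 2010],
        "ems_routing_status":      ["established", "none", "established", "transition", "none"],
        "stroke_hospitalizations": [194_000, 379_000, 278_538, 18_979, 275_483],
    })

    totals = df.groupby("year")["stroke_hospitalizations"].sum()
    by_status = df.pivot_table(index="year", columns="ems_routing_status",
                               values="stroke_hospitalizations", aggfunc="sum", fill_value=0)
    proportions = by_status.div(totals, axis=0)
    print(proportions.round(2))   # share of hospitalizations by routing status per year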
NASA Astrophysics Data System (ADS)
2002-01-01
First System of Deployable Multi-Integral Field Units Ready Summary The ESO Very Large Telescope (VLT) at the Paranal Observatory is being equipped with many state-of-the-art astronomical instruments that will allow observations in a large number of different modes and wavebands. Soon to come is the Fibre Large Array Multi-Element Spectrograph (FLAMES), a project co-ordinated by ESO. It incorporates several complex components, now being constructed at various research institutions in Europe and Australia. One of these, a true technological feat, is a unique system of 15 deployable fibre bundles, the so-called Integral Field Units (IFUs). They can be accurately positioned within a sky field-of-view measuring no less than 25 arcmin in diameter, i.e., almost as large as the full Moon. Each of the IFUs looks like an insect's eye and images a small sky area (3 x 2 arcsec²) with a multiple microlens. From each IFU, 20 narrow light beams are sent via optical fibres to an advanced spectrograph. All 300 spectra are recorded simultaneously by a sensitive digital camera. A major advantage of this technique is that, contrary to usual spectroscopic observations in which spectral information is obtained along a (one-dimensional) line on the sky, it now allows (two-dimensional) area spectroscopy. This will permit extremely efficient spectral observations of many celestial objects, including faint galaxies, providing detailed information about their internal structure and motions. Such studies will have an important impact on our understanding, e.g., of the early evolution of galaxies, the main building blocks in the Universe. The IFUs have been developed by a team of astronomers and engineers [2] at the Observatoire de Paris-Meudon. All IFU components are now at the ESO Headquarters in Garching (Germany) where they are being checked and integrated into the instrument [3]. PR Photo 03a/02: The GIRAFFE spectrograph in the ESO Assembly Hall (Garching, Germany). PR Photo 03b/02: Example of a future IFU observation in a sky field with galaxies. PR Photo 03c/02: An illustration of how the IFUs function. PR Photo 03d/02: The IFU design. PR Photo 03e/02: Computer simulation of the motions in a galaxy, as deduced from IFU observations. The FLAMES instrument and its many parts ESO PR Photo 03a/02 Caption: PR Photo 03a/02: The GIRAFFE spectrograph, a major component of the VLT Fibre Large Array Multi-Element Spectrograph (FLAMES), during the present assembly at the ESO Headquarters in Garching (Germany). Late last year, the ESO Very Large Telescope (VLT) at the Paranal Observatory received its newest instrument, NAOS-CONICA. The first tests were very successful, cf. PR 25/01. But this is far from the last. Work is now underway at several European and overseas research institutes to complete the many other large astronomical instruments planned for the VLT. Over the next years, these new facilities will enter into operation one by one, further enhancing the capabilities of this true flagship of European science. One of these instruments is the Fibre Large Array Multi-Element Spectrograph (FLAMES), to be installed at the 8.2-m VLT KUEYEN Unit Telescope.
It will be able to observe the spectra of a large number of individual, faint objects (or small sky areas) simultaneously and incorporates several highly complex components, e.g., * a Nasmyth Corrector - an optical system to focus the light that is received from the telescope over a sky field of no less than 25 arcmin in diameter, i.e., almost as large as the full Moon. It was installed on KUEYEN in September 2001 * a Fibre Positioner (known as "OzPoz"). It is now being built by the AUSTRALIS Consortium, led by the Anglo-Australian Observatory (AAO), cf. ESO PR 07/98 * a high- and intermediate-resolution optical spectrograph, GIRAFFE, with its own fibre system, developed by the Observatoire de Paris-Meudon in close collaboration with ESO. It is now in the process of being assembled in the ESO laboratories in Garching, cf. PR Photo 03a/01. Work at the FLAMES facility will be supported by specialized data reduction software developed by Observatoire de Genève-Lausanne in collaboration with Observatoire de Paris-Meudon, and specialized observing software developed at ESO. There will also be a fibre link to the UVES high-dispersion spectrograph and there are plans for incorporating an intermediate-resolution IR spectrograph in the future; the ITAL-FLAMES consortium is now preparing the associated instrument control and data reduction software packages. The Integral Field Units (IFUs) for FLAMES ESO PR Photo 03b/02 ESO PR Photo 03c/02 Caption: PR Photo 03b/02: An example of observations with Integral Field Units (IFUs) at FLAMES (only 4 of the 15 units are shown here). Each IFU is placed so that it records the light from 20 small adjacent sky areas (each measuring about 3 x 2 arcsec²). In this way, it is possible to register simultaneously the spectrum of as many different regions of a (distant) galaxy. PR Photo 03c/02: How the IFUs work: each IFU consists of a microlens that guides the light from a small sky area, normally centred on a celestial object (e.g., a distant galaxy) and sends it on to the entry of the spectrograph (inside the dotted box). When it enters into operation later this year [3], GIRAFFE will become the most efficient instrument of its kind available at the world's large optical/infrared telescopes. It will be especially suited for the study of the dynamical properties of distant galaxies - their motion in space, as well as the internal motions of their stars and gas clouds. Indeed, observations of the velocity fields in a large variety of galaxies in the early Universe (when its age was only one third to one half of its current age) will be essential for a better understanding of those major building blocks of the Universe. This is first of all due to the unique system of 15 deployable fibre bundles, the Integral Field Units (IFUs), that can be accurately positioned within a field-of-view measuring no less than 25 arcmin across, cf. PR Photo 03b/02. Each IFU is a microscopic, state-of-the-art two-dimensional lens array with an aperture of 3 x 2 arcsec² on the sky. It contains twenty micro-lenses coupled with optical fibres leading the light recorded at each point in the field to the entry slit of the spectrograph, cf. PR Photo 03c/02.
A great advantage of this technique is that, contrary to usual spectroscopic observations in which spectral information is obtained along a (one-dimensional) line on the sky, it now allows (two-dimensional) area spectroscopy. It is therefore possible to obtain spectra of larger areas of a celestial object simultaneously, and not just along one particular diameter. With 15 IFUs at their disposal, the astronomers will be able to observe many galaxies at the same time - this will represent a tremendous gain of efficiency with many more astrophysical data collected within the available observation time! The IFU design ESO PR Photo 03d/02 Caption: PR Photo 03d/02: Mechanical design of an IFU "button". Upper right: photo of an "IFU entrance" with the 20 square microlenses, each measuring 1.8 x 1.8 mm². PR Photo 03d/02 shows the mechanical design of the entrance of one IFU. An array of 20 square microlenses, each measuring 1.8 x 1.8 mm², is used to concentrate the light in the corresponding, small sky field onto a prism that passes the light on to 20 fibres. These are inserted and cemented into a mechanical holder and the entire assembly is then mounted in an IFU "button" that will be positioned in the focal plane by the OzPoz Positioner. A magnet is incorporated at the base of the button to ensure a stable position (a firm hold) on the focal plate during the observation. The optical cementing is ensured with UV curing, and the fibre bundle is cemented into the button with an epoxy glue in order to ensure excellent stiffness of the complete assembly. The external diameter of the button is about 6 mm, corresponding to about 11 arcsec on the sky, allowing quite close positioning of the buttons on the focal plate. An example of astronomical observations with IFUs ESO PR Photo 03e/02 Caption: PR Photo 03e/02 is a computer simulation of the velocity field in a galaxy, as deduced on the basis of IFU spectra. The blue area has negative velocities and is thus the approaching side of the galaxy, while the red area is receding. In this way, the direction of rotation can be determined. The velocity unit is km/s. During the astronomical observation with the IFUs, the spectrograph slit receives light from 15 sky areas simultaneously, each with 21 fibres (20 from the IFU and 1 that collects the light from the night sky in an adjacent sky field) or 22 fibres (with the addition of 1 fibre with light from a calibration lamp). Altogether, about 300 spectra are recorded simultaneously. By means of such observations, the astronomers can perform many different studies, e.g., of the dynamics of star clusters and motions of stars and interstellar clouds in galaxies. PR Photo 03e/02 provides an example of a computer simulation of a resulting diagram in which the internal rotation of a distant spiral galaxy is clearly visible. Red and yellow areas have positive velocities and are thus receding, while the blue areas are approaching. Of special interest will be the study of the often violent motions when two or more galaxies interact gravitationally. Notes [1]: This is a joint Press Release of ESO and the Observatoire de Paris (cf. http://www.obspm.fr/actual/nouvelle/jan02/flames.shtml).
[2]: The GIRAFFE team at the Observatoire de Paris that has developed the Integral Field Units (IFUs) discussed in this Press Release includes Jean-Pierre Aoustin, Sebastien Baratchart, Patrice Barroso, Veronique Cayatte, Laurent Chemin, Florence Cornu, Jean Cretenet, Jean-Paul Danton, Hector Flores, Francoise Gex, Fabien Guillon, Isabelle Guinouard, Francois Hammer, Jacques Hammes, David Horville, Jean-Michel Huet, Laurent Jocou, Pierre Kerlirzin, Serge Lebourg, Hugo Lenoir, Claude Lesqueren, Regis Marichal, Michel Marteaud, Thierry Melse, Fabrice Peltier, Francois Rigaud, Frederic Sayede and Pascal Vola. [3]: The various components of the FLAMES instrument are expected to be shipped to the VLT Observatory at Paranal (Chile) during the next month. "First Light" is scheduled to take place some weeks thereafter, following installation at the telescope and extensive system tests. ESO will issue another Press Release with more details on that occasion.
Handa, Rajash K.; Bailey, Michael R.; Paun, Marla; Gao, Sujuan; Connors, Bret A.; Willis, Lynn R.; Evan, Andrew P.
2008-01-01
Introduction and Objective A great deal of effort has been focused on developing new treatment protocols to reduce tissue injury to improve the safety of shock wave lithotripsy. This has led to the discovery that pretreatment of the kidney with a series of low-energy shock waves (SWs) will substantially reduce the hemorrhagic lesion that normally results from a standard clinical dose of high-energy SWs. Because renal blood flow is reduced following low- or high-energy SWL, and may therefore contribute to this effect, this study was designed to test the hypothesis that the pretreatment protocol induces renal vasoconstriction sooner than the standard protocol for SW delivery. Methods Female farm pigs (6-weeks old) were anesthetized with isoflurane and the lower pole of the right kidney treated with SWs using the HM3 lithotripter. Pulsed Doppler sonography was used to measure resistive index (RI) in blood vessels as a reflection of resistance/impedance to blood flow. RI was recorded from a single intralobar artery located in the targeted pole of the kidney, and measurements taken from pigs given sham SW treatment (Group 1; no SWs, n = 4), a standard clinical dose of high-energy SWs (Group 2; 2000 SWs, 24 kV, 120 SWs/min, n = 7), low-energy SW pretreatment followed by high-energy SWL (Group 3; 500 SWs, 12 kV, 120 SWs/min + 2000 SWs, 24 kV, 120 SWs/min, n = 8) and low-energy SW pretreatment alone (Group 4; 500 SWs, 12 kV, 120 SWs/min, n = 6). Results Baseline RI (~ 0.61) was similar for all groups. Pigs receiving sham SW treatment (Group 1) had no significant change in RI. A standard clinical dose of high-energy SWs (Group 2) did not significantly alter RI during treatment, but did increase RI at 45-min into the post-SWL period. Low-energy SWs did not alter RI in Group 3 pigs, but subsequent treatment with a standard clinical dose of high-energy SWs resulted in a significantly earlier (at 1000 SWs) and greater (two-fold) rise in RI than that observed in Group 2 pigs. This rise in RI during the low/high-energy SWL treatment protocol was not due to a delayed vasoconstrictor response of pretreatment, as low-energy SW treatment alone (Group 4) did not increase RI until 65 min into the post-SWL period. Conclusions The pretreatment protocol induces renal vasoconstriction during the period of SW application whereas the standard protocol shows vasoconstriction occurring only during the post-SWL period. Thus the earlier and greater rise in RI during the pretreatment protocol may be causally associated with a reduction in tissue injury. PMID:19154458
Argumentation for coordinating shared activities
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Barrett, Anthony C.; Schaffer, Steven R.
2004-01-01
There is an increasing need for space missions to be able to collaboratively (and competitively) develop plans both within and across missions. In addition, interacting spacecraft that interleave onboard planning and execution must reach consensus on their commitments to each other prior to execution. In domains where missions have varying degrees of interaction and different constraints on communication and computation, the missions will require different coordination protocols in order to efficiently reach consensus within their imposed deadlines. We describe a Shared Activity Coordination (SHAC) framework that provides a decentralized algorithm for negotiating the scheduling of shared activities over the lifetimes of multiple agents and a foundation for customizing protocols for negotiating planner interactions. We investigate variations of a few simple protocols based on argumentation and distributed constraint satisfaction techniques and evaluate their abilities to reach consistent solutions according to computation, time, and communication costs in an abstract domain where spacecraft propose joint measurements.
Information security: where computer science, economics and psychology meet.
Anderson, Ross; Moore, Tyler
2009-07-13
Until ca. 2000, information security was seen as a technological discipline, based on computer science but with mathematics helping in the design of ciphers and protocols. That perspective started to change as researchers and practitioners realized the importance of economics. As distributed systems are increasingly composed of machines that belong to principals with divergent interests, incentives are becoming as important to dependability as technical design. A thriving new field of information security economics provides valuable insights not just into 'security' topics such as privacy, bugs, spam and phishing, but into more general areas of system dependability and policy. This research programme has recently started to interact with psychology. One thread is in response to phishing, the most rapidly growing form of online crime, in which fraudsters trick people into giving their credentials to bogus websites; a second is through the increasing importance of security usability; and a third comes through the psychology-and-economics tradition. The promise of this multidisciplinary research programme is a novel framework for analysing information security problems-one that is both principled and effective.
Grass, Gregor; Ahrens, Bjoern; Schleenbecker, Uwe; Dobrzykowski, Linda; Wagner, Matthias; Krüger, Christian; Wölfel, Roman
2016-02-01
We describe a culture-based method suitable for isolating Bacillus anthracis and other live bacteria from heroin. This protocol was developed as a consequence of the bioforensic need to retrieve bacteria from batches of the drug associated with cases of injectional anthrax among heroin consumers in Europe. This uncommon manifestation of infection with the notorious pathogen B. anthracis has resulted in 26 deaths between the years 2000 and 2013. Thus far, no live disease agent has been isolated from heroin during forensic investigations surrounding these incidents. Because of the conjectured very small number of disease-causing endospores in the contaminated drug, it is likely that too few target sequences are available for molecular genetic analysis. Therefore, a direct culture-based approach was chosen here. Endospores of attenuated B. anthracis artificially spiked into heroin were successfully retrieved at 84-98% recovery rates using a wash solution consisting of 0.5% Tween 20 in water. Using this approach, 82 samples of un-cut heroin originating from the German Federal Criminal Police Office's heroin analysis program, seized during the period between 2000 and 2014, were tested and found to be surprisingly poor in retrievable bacteria. Notably, while no B. anthracis was isolated from the drug batches, other bacteria were successfully cultured. The resulting methodical protocol is therefore suitable for analyzing un-cut heroin, which can be anticipated to comprise the original microbiota from the drug's original source without interference from contaminations introduced by cutting. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DUST DISK AROUND A BLACK HOLE IN GALAXY NGC 4261
NASA Technical Reports Server (NTRS)
2002-01-01
This is a Hubble Space Telescope image of an 800-light-year-wide spiral-shaped disk of dust fueling a massive black hole in the center of the galaxy NGC 4261, located 100 million light-years away in the direction of the constellation Virgo. By measuring the speed of gas swirling around the black hole, astronomers calculate that the object at the center of the disk is 1.2 billion times the mass of our Sun, yet concentrated into a region of space not much larger than our solar system. The strikingly geometric disk -- which contains enough mass to make 100,000 stars like our Sun -- was first identified in Hubble observations made in 1992. These new Hubble images reveal for the first time structure in the disk, which may be produced by waves or instabilities in the disk. Hubble also reveals that the disk and black hole are offset from the center of NGC 4261, implying that some sort of dynamical interaction is taking place which has yet to be fully explained. Credit: L. Ferrarese (Johns Hopkins University) and NASA Image files in GIF and JPEG format, captions, and press release text may be accessed on the Internet via anonymous ftp from oposite.stsci.edu in /pubinfo:
Peterson, David A.; Zumberge, Jeremy R.
2006-01-01
Samples of benthic macroinvertebrates were collected side-by-side from riffles at 12 stream sites in Wyoming, Colorado, and Montana during 2000-2001, following protocols established by the U.S. Geological Survey National Water-Quality Assessment (NAWQA) Program and the U.S. Environmental Protection Agency Environmental Monitoring and Assessment Program (EMAP). Samples from riffles were collected following NAWQA protocols, using a sampler with 425-micron net mesh-opening size from a total area of 1.25 m2 per sample in multiple riffles. Samples also were collected following EMAP protocols, using a sampler with 500-micron net mesh-opening size from a total area of 0.72 m2 per sample in multiple riffles. The taxonomic identification and enumeration of the samples followed procedures established for each program. Benthic macroinvertebrate community structure was compared between the data sets using individual metrics, a multimetric index, and multivariate analysis. Comparisons between the macroinvertebrate community structures were made after sequentially adjusting both data sets for: (1) ambiguous taxa, (2) taxonomic inconsistencies, and (3) differences in laboratory subsampling. After removal of ambiguous taxa, pair-wise differences in total taxa richness and Ephemeroptera taxa richness were statistically significant (p < 0.05). Differences between the data sets generally were not significant for richness of other taxa, tolerant taxa, semi-voltine taxa, functional feeding groups, diversity, and dominance. Sample scores calculated using the Wyoming Stream Integrity Index were not significantly different between the two data sets. After reconciling both data sets for taxonomic inconsistencies, total taxa richness and Ephemeroptera taxa richness remained significantly different between the data sets. After adjusting the data for differences in laboratory subsampling, the differences in taxa richness were no longer significant. Bray-Curtis similarity coefficients and non-metric multi-dimensional scaling were used to examine macroinvertebrate community structure. Similarity in community structure between sites was affected to a greater extent by taxa reconciliation than by adjustment for subsampling.
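The community-structure comparison described above can be illustrated with a Bray-Curtis dissimilarity matrix followed by non-metric multidimensional scaling. The small abundance matrix below is invented, and the choice of two ordination dimensions is an assumption.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import MDS

    # Rows: samples (e.g., NAWQA or EMAP protocol at a site); columns: taxa counts (invented).
    abundance = np.array([
        [12,  0, 5, 30,  2],
        [10,  1, 6, 28,  0],
        [ 0, 15, 2,  3, 20],
        [ 1, 14, 1,  4, 22],
    ])

    dissim = squareform(pdist(abundance, metric="braycurtis"))

    nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
    coords = nmds.fit_transform(dissim)
    print(np.round(dissim, 2))
    print(np.round(coords, 3))   # samples with similar communities plot close together

In the study's setting, paired NAWQA and EMAP samples from the same site plotting close together in such an ordination would indicate that the two protocols recover similar community structure.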
Garrido-Miguel, Miriam; Cavero-Redondo, Iván; Álvarez-Bueno, Celia; Rodriguez-Artalejo, Fernando; Moreno Aznar, Luis; Ruiz, Jonatan R; Martinez-Vizcaino, Vicente
2017-12-21
Increasing prevalence of both thinness and excess weight during childhood and adolescence is a significant public health issue because of short-term health consequences and long-term tracking of weight status. Monitoring weight status in Europe may serve to identify countries and regions where rates of these disorders are either slowing down or increasing to evaluate recent policies aimed at appropriate body weight, and to direct future interventions. This study protocol provides a standardised and transparent methodology to improve estimating trends of thinness, overweight and obesity in children aged 3-18 years and adolescents across the European region between 2000 and 2017. This protocol is guided by the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) and the Cochrane Collaboration Handbook. To identify relevant studies, a search will be conducted in MEDLINE, EMBASE, Cochrane Library, CINAHL and Web of Science databases. From the selected studies, relevant references will be screened as supplemental sources. Finally, open search in websites from health institutions will be conducted to identify weight status data not published in scientific journals. Cross-sectional, follow-up studies and panel surveys reporting weight status (objectively measured height and weight) according to the International Obesity Task Force criteria, and written in English or Spanish will be included. Subgroup analyses will be carried out by gender, age, study year and country or European region. This study will provide a comprehensive description of weight status of children and adolescents across Europe from 2000 to 2017. The results will be disseminated in a peer-reviewed journal. This study will use data exclusively from published research or institutional literature, so institutional ethical approval is not required. CRD42017056917. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Garrido-Miguel, Miriam; Cavero-Redondo, Iván; Álvarez-Bueno, Celia; Rodriguez-Artalejo, Fernando; Moreno Aznar, Luis; Ruiz, Jonatan R; Martinez-Vizcaino, Vicente
2017-01-01
Introduction Increasing prevalence of both thinness and excess weight during childhood and adolescence is a significant public health issue because of short-term health consequences and long-term tracking of weight status. Monitoring weight status in Europe may serve to identify countries and regions where rates of these disorders are either slowing down or increasing to evaluate recent policies aimed at appropriate body weight, and to direct future interventions. This study protocol provides a standardised and transparent methodology to improve estimating trends of thinness, overweight and obesity in children aged 3–18 years and adolescents across the European region between 2000 and 2017. Methods and analysis This protocol is guided by the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) and the Cochrane Collaboration Handbook. To identify relevant studies, a search will be conducted in MEDLINE, EMBASE, Cochrane Library, CINAHL and Web of Science databases. From the selected studies, relevant references will be screened as supplemental sources. Finally, open search in websites from health institutions will be conducted to identify weight status data not published in scientific journals. Cross-sectional, follow-up studies and panel surveys reporting weight status (objectively measured height and weight) according to the International Obesity Task Force criteria, and written in English or Spanish will be included. Subgroup analyses will be carried out by gender, age, study year and country or European region. Discussion This study will provide a comprehensive description of weight status of children and adolescents across Europe from 2000 to 2017. The results will be disseminated in a peer-reviewed journal. This study will use data exclusively from published research or institutional literature, so institutional ethical approval is not required. PROSPERO registration number CRD42017056917. PMID:29273660
Data exchange technology based on handshake protocol for industrial automation system
NASA Astrophysics Data System (ADS)
Astafiev, A. V.; Shardin, T. O.
2018-05-01
In this article, data exchange technology based on the handshake protocol for industrial automation systems is considered. Methods of organizing the technology in client-server applications are analyzed, and the main threats to client-server applications that arise during information exchange between users are identified. A comparative analysis of analogous systems was also carried out, from which the most suitable option was chosen for further use. The basic operating schemes of the handshake protocol are shown, together with the general scheme of the implemented application, which describes the entire process of interaction between the client and the server.
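The abstract does not specify the message format of the implemented application, so the following is only a minimal sketch of a handshake-style client-server exchange over TCP/IP; the HELLO/ACK strings, port number, and payload are placeholders.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9009  # illustrative address and port, not from the paper

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            if conn.recv(1024) == b"HELLO":      # step 1: client greets
                conn.sendall(b"ACK")             # step 2: server acknowledges
            if conn.recv(1024) == b"DATA:42":    # step 3: client sends payload
                conn.sendall(b"DATA-ACK")        # step 4: server confirms receipt

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"HELLO")
        assert cli.recv(1024) == b"ACK"          # handshake established
        cli.sendall(b"DATA:42")                  # data exchanged only after handshake
        print("server replied:", cli.recv(1024).decode())

t = threading.Thread(target=server, daemon=True)
t.start()
time.sleep(0.5)   # give the server a moment to start listening
client()
t.join()
```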
ERIC Educational Resources Information Center
Southard, Sherry G.
Protocol involves the behavior and procedures that are proper in any discourse community, including both what is spoken or written as well as what is not spoken or written. Students need to understand what proper protocol in corporate culture involves, how it is determined by formal and informal structures, and why such protocol is important. They…
Dynamic Spectrum Management for Military Wireless Networks
2010-09-01
auctions, and protocols and etiquettes. Command and control assignments are provided by the regulatory agency by reviewing specific licensing...devices and amateur licensees do not have specific frequency assignments. The Protocols and Etiquettes methods allow these devices to operate within a...with Collision Avoidance (CSMA/CA) protocol. Etiquettes are rules that are followed without explicit interaction between devices. Simple etiquettes
Eggins, Suzanne; Slade, Diana
2012-01-01
Clinical handover -- the transfer between clinicians of responsibility and accountability for patients and their care (AMA 2006) -- is a pivotal and high-risk communicative event in hospital practice. Studies focusing on critical incidents, mortality, risk and patient harm in hospitals have highlighted ineffective communication -- including incomplete and unstructured clinical handovers -- as a major contributing factor (NSW Health 2005; ACSQHC 2010). In Australia, as internationally, Health Departments and hospital management have responded by introducing standardised handover communication protocols. This paper problematises one such protocol - the ISBAR tool - and argues that the narrow understanding of communication on which such protocols are based may seriously constrain their ability to shape effective handovers. Based on analysis of audio-recorded shift-change clinical handovers between medical staff we argue that handover communication must be conceptualised as inherently interactive and that attempts to describe, model and teach handover practice must recognise both informational and interactive communication strategies. By comparing the communicative performance of participants in authentic handover events we identify communication strategies that are more and less likely to lead to an effective handover and demonstrate the importance of focusing close up on communication to improve the quality and safety of healthcare interactions.
Observation sequences and onboard data processing of Planet-C
NASA Astrophysics Data System (ADS)
Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.
Planet-C, or VCO (Venus Climate Orbiter), will carry 5 cameras operating in the UV-IR region to investigate the atmospheric dynamics of Venus: IR1 (1-micrometer IR camera), IR2 (2-micrometer IR camera), UVI (UV Imager), LIR (long-IR camera), and LAC (Lightning and Airglow Camera). During the 30-hr orbit, designed to quasi-synchronize with the super-rotation of the Venus atmosphere, 3 groups of scientific observations will be carried out: (i) image acquisition by the 4 cameras (IR1, IR2, UVI, LIR), 20 min in every 2 hrs; (ii) LAC operation, only when VCO is within the Venus shadow; and (iii) radio occultation. These observation sequences will define the scientific outputs of the VCO program, but the sequences must be balanced against command, telemetry downlink, and thermal and power conditions. To maximize science data downlink, the data must be well compressed, and the compression efficiency and image quality are of significant scientific importance in the VCO program. Images from the 4 cameras (IR1, IR2, and UVI: 1K x 1K; LIR: 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K was selected because of (a) no block noise, (b) efficiency, (c) both reversible and irreversible modes, (d) patent/royalty-free status, and (e) existing implementations as academic and commercial software, ICs, and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 ~ 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the 4 cameras and handles onboard data processing and compression, is at the concept design stage. It is concluded that the J2K data compression logic circuits using space
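As a rough illustration of the reversible-versus-irreversible trade-off quoted above, one can measure JPEG 2000 compression ratios with Pillow (assuming a build with OpenJPEG support); the file names and the 100:1 rate target below are arbitrary and unrelated to the actual DE unit design.

```python
from pathlib import Path
from PIL import Image  # Pillow built with OpenJPEG for JPEG 2000 support

src = Path("uvi_frame.png")          # stand-in for a 1K x 1K camera frame
img = Image.open(src)
# Rough uncompressed size estimate: one byte per band per pixel.
raw_bytes = img.size[0] * img.size[1] * len(img.getbands())

# Reversible (lossless) JPEG 2000
img.save("frame_lossless.jp2", irreversible=False)

# Irreversible (lossy) JPEG 2000 at a ~100:1 rate target
img.save("frame_lossy.jp2", quality_mode="rates", quality_layers=[100], irreversible=True)

for name in ("frame_lossless.jp2", "frame_lossy.jp2"):
    ratio = Path(name).stat().st_size / raw_bytes
    print(f"{name}: compressed/uncompressed = {ratio:.3f}")
```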
Introducing keytagging, a novel technique for the protection of medical image-based tests.
Rubio, Óscar J; Alesanco, Álvaro; García, José
2015-08-01
This paper introduces keytagging, a novel technique to protect medical image-based tests by implementing image authentication, integrity control and location of tampered areas, private captioning with role-based access control, traceability and copyright protection. It relies on the association of tags (binary data strings) to stable, semistable or volatile features of the image, whose access keys (called keytags) depend on both the image and the tag content. Unlike watermarking, this technique can associate information to the most stable features of the image without distortion. Thus, this method preserves the clinical content of the image without the need for assessment, prevents eavesdropping and collusion attacks, and obtains a substantial capacity-robustness tradeoff with simple operations. The evaluation of this technique, involving images of different sizes from various acquisition modalities and image modifications that are typical in the medical context, demonstrates that all the aforementioned security measures can be implemented simultaneously and that the algorithm presents good scalability. In addition to this, keytags can be protected with standard Cryptographic Message Syntax and the keytagging process can be easily combined with JPEG2000 compression since both share the same wavelet transform. This reduces the delays for associating keytags and retrieving the corresponding tags to implement the aforementioned measures to only ≃30 and ≃90ms respectively. As a result, keytags can be seamlessly integrated within DICOM, reducing delays and bandwidth when the image test is updated and shared in secure architectures where different users cooperate, e.g. physicians who interpret the test, clinicians caring for the patient and researchers. Copyright © 2015 Elsevier Inc. All rights reserved.
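The paper's keytagging algorithm is considerably more elaborate, but the core idea of binding a tag to stable wavelet-domain features, so that the access key depends on both image and tag content, can be sketched as a toy example (assuming PyWavelets; the feature choice and hash construction here are illustrative only and are not the authors' scheme).

```python
import hashlib
import numpy as np
import pywt  # PyWavelets; JPEG 2000 uses a related wavelet decomposition

def stable_feature_bytes(image, level=3):
    """Binarize the coarsest approximation subband, a feature that survives
    mild processing better than individual pixel values."""
    coeffs = pywt.wavedec2(image.astype(float), "bior4.4", level=level)
    approx = coeffs[0]
    return np.sign(approx - np.median(approx)).astype(np.int8).tobytes()

def make_keytag(image, tag: bytes) -> bytes:
    """Toy 'keytag': depends on both the image features and the tag content."""
    return hashlib.sha256(stable_feature_bytes(image) + tag).digest()

def verify(image, tag: bytes, keytag: bytes) -> bool:
    return make_keytag(image, tag) == keytag

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256))      # stand-in for a medical image
tag = b"role:radiologist;caption:lesion upper-left"
kt = make_keytag(img, tag)

tampered = img.copy()
tampered[:64, :64] = 0                           # localized tampering

print(verify(img, tag, kt))        # True: image and tag match the keytag
print(verify(tampered, tag, kt))   # False: stable features changed
```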
NASA Tech Briefs, September 2008
NASA Technical Reports Server (NTRS)
2008-01-01
Topics covered include: Nanotip Carpets as Antireflection Surfaces; Nano-Engineered Catalysts for Direct Methanol Fuel Cells; Capillography of Mats of Nanofibers; Directed Growth of Carbon Nanotubes Across Gaps; High-Voltage, Asymmetric-Waveform Generator; Magic-T Junction Using Microstrip/Slotline Transitions; On-Wafer Measurement of a Silicon-Based CMOS VCO at 324 GHz; Group-III Nitride Field Emitters; HEMT Amplifiers and Equipment for their On-Wafer Testing; Thermal Spray Formation of Polymer Coatings; Improved Gas Filling and Sealing of an HC-PCF; Making More-Complex Molecules Using Superthermal Atom/Molecule Collisions; Nematic Cells for Digital Light Deflection; Improved Silica Aerogel Composite Materials; Microgravity, Mesh-Crawling Legged Robots; Advanced Active-Magnetic-Bearing Thrust- Measurement System; Thermally Actuated Hydraulic Pumps; A New, Highly Improved Two-Cycle Engine; Flexible Structural-Health-Monitoring Sheets; Alignment Pins for Assembling and Disassembling Structures; Purifying Nucleic Acids from Samples of Extremely Low Biomass; Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery; UV-Resistant Non-Spore-Forming Bacteria From Spacecraft-Assembly Facilities; Hard-X-Ray/Soft-Gamma-Ray Imaging Sensor Assembly for Astronomy; Simplified Modeling of Oxidation of Hydrocarbons; Near-Field Spectroscopy with Nanoparticles Deposited by AFM; Light Collimator and Monitor for a Spectroradiometer; Hyperspectral Fluorescence and Reflectance Imaging Instrument; Improving the Optical Quality Factor of the WGM Resonator; Ultra-Stable Beacon Source for Laboratory Testing of Optical Tracking; Transmissive Diffractive Optical Element Solar Concentrators; Delaying Trains of Short Light Pulses in WGM Resonators; Toward Better Modeling of Supercritical Turbulent Mixing; JPEG 2000 Encoding with Perceptual Distortion Control; Intelligent Integrated Health Management for a System of Systems; Delay Banking for Managing Air Traffic; and Spline-Based Smoothing of Airfoil Curvatures.
Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.
Congressional Report on Defense Business Operations
2010-03-15
by more than 1,700 users and used to store approximately 250 submissions a month. Each month, more than 2,000 documents are accessed and downloaded. 6...that is stored, managed and maintained centrally. Data includes Geographic Information Systems (GIS) and Computer Aided Design and Drafting (CADD...Office FTP File Transfer Protocol FY Fiscal Year GAO Government Accountability Office GFEBS General Fund Enterprise Business System GIS Geographic
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
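A compact sketch of this kind of experiment: re-encode an image at decreasing JPEG quality settings, apply a minimum-distance-to-class-mean classifier, and count how many pixel labels change; the class means, band layout, and file name are invented for illustration.

```python
import io
import numpy as np
from PIL import Image

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class whose mean vector is nearest (Euclidean)."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

img = Image.open("scene.tif").convert("RGB")        # stand-in multiband image
class_means = np.array([[ 40,  60,  30],             # e.g. water
                        [ 90, 110,  70],             # vegetation
                        [180, 170, 160]], float)     # bare soil -- illustrative values

ref = None
for quality in (95, 75, 50, 25, 10):                 # increasing compression
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    pixels = np.asarray(Image.open(buf).convert("RGB"), float).reshape(-1, 3)
    labels = minimum_distance_classify(pixels, class_means)
    if ref is None:
        ref = labels                                  # q=95 map as reference
    changed = 100.0 * np.mean(labels != ref)
    print(f"quality={quality:3d}  pixels differing from q=95 map: {changed:5.2f}%")
```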
Black Hole in Search of a Home
NASA Astrophysics Data System (ADS)
2005-09-01
Astronomers Discover Bright Quasar Without Massive Host Galaxy An international team of astronomers [1] used two of the most powerful astronomical facilities available, the ESO Very Large Telescope (VLT) at Cerro Paranal and the Hubble Space Telescope (HST), to conduct a detailed study of 20 low redshift quasars. For 19 of them, they found, as expected, that these super massive black holes are surrounded by a host galaxy. But when they studied the bright quasar HE0450-2958, located some 5 billion light-years away, they couldn't find evidence for an encircling galaxy. This, the astronomers suggest, may indicate a rare case of collision between a seemingly normal spiral galaxy and a much more exotic object harbouring a very massive black hole. With masses up to hundreds of millions that of the Sun, "super massive" black holes are the most tantalizing objects known. Hiding in the centre of most large galaxies, including our own Milky Way (see ESO PR 26/03), they sometimes manifest themselves by devouring matter they engulf from their surroundings. Shining up to the largest distances, they are then called "quasars" or "QSOs" (for "quasi-stellar objects"), as they had initially been confused with stars. Decades of observations of quasars have suggested that they are always associated with massive host galaxies. However, observing the host galaxy of a quasar is a challenging work, because the quasar is radiating so energetically that its host galaxy is hard to detect in the flare. ESO PR Photo 28a/05 ESO PR Photo 28a/05 Two Quasars with their Host Galaxy [Preview - JPEG: 400 x 760 pix - 82k] [Normal - JPEG: 800 x 1520 pix - 395k] [Full Res - JPEG: 1722 x 3271 pix - 4.0M] Caption: ESO PR Photo 28a/05 shows two examples of quasars from the sample studied by the astronomers, where the host galaxy is obvious. In each case, the quasar is the bright central spot. The host of HE1239-2426 (left), a z=0.082 quasar, displays large spiral arms, while the host of HE1503+0228 (right), having a redshift of 0.135, is more fuzzy and shows only hints of spiral arms. Although these particular objects are rather close to us and constitute therefore easy targets, their host would still be perfectly visible at much higher redshift, including at distances as large as the one of HE0450-2958 (z=0.285). The observations were done with the ACS camera on the HST. ESO PR Photo 28b/05 ESO PR Photo 28b/05 The Quasar without a Home: HE0450-2958 [Preview - JPEG: 400 x 760 pix - 53k] [Normal - JPEG: 800 x 1520 pix - 197k] [Full Res - JPEG: 1718 x 3265 pix - 1.5M] Caption of ESO PR Photo 28b/05: (Left) HST image of the z=0.285 quasar HE0450-2958. No obvious host galaxy centred on the quasar is seen. Only a strongly disturbed and star forming companion galaxy is seen near the top of the image. (Right) Same image shown after applying an efficient image sharpening method known as MCS-deconvolution. In contrast to the usual cases, as the ones shown in ESO PR Photo 28a/05, the quasar is not situated at the centre of an extended host galaxy, but on the edge of a compact structure, whose spectra (see ESO PR Photo 28c/05) show it to be composed of gas ionised by the quasar radiation. This gas may have been captured through a collision with the star-forming galaxy. The star indicated on the figure is a nearby galactic star seen by chance in the field of view. To overcome this problem, the astronomers devised a new and highly efficient strategy. 
Using ESO's VLT for spectroscopy and HST for imagery, they observed their quasars at the same time as a reference star. Simultaneous observation of a star allowed them to best measure the shape of the quasar point source in spectra and images, and further to separate the quasar light from the other contribution, i.e. from the underlying galaxy itself. This very powerful image and spectra sharpening method ("MCS deconvolution") was applied to these data in order to detect the finest details of the host galaxy (see e.g. ESO PR 19/03). Using this efficient technique, the astronomers could detect a host galaxy for all but one of the quasars they studied. No stellar environment was found for HE0450-2958, suggesting that if any host galaxy exists, it must either have a luminosity at least six times fainter than expected a priori from the quasar observed luminosity, or a radius smaller than about 300 light-years. Typical radii for quasar host galaxies range between 6,000 and 50,000 light-years, i.e. they are at least 20 to 170 times larger. "With the data we managed to secure with the VLT and the HST, we would have been able to detect a normal host galaxy", says Pierre Magain (Université de Liège, Belgium), lead author of the paper reporting the study. "We must therefore conclude that, contrary to our expectations, this bright quasar is not surrounded by a massive galaxy." Instead, the astronomers detected just beside the quasar a bright cloud of about 2,500 light-years in size, which they baptized "the blob". The VLT observations show this cloud to be composed only of gas ionised by the intense radiation coming from the quasar. It is probably the gas of this cloud which is feeding the supermassive black hole, allowing it to become a quasar. ESO PR Photo 28c/05: Spectrum of Quasar HE0450-2958, the Blob and the Companion Galaxy (FORS/VLT). Caption: ESO PR Photo 28c/05 presents the spectra of the three objects indicated in ESO PR Photo 28b/05 as obtained with FORS1 on ESO's Very Large Telescope. The spectrum of the companion galaxy shown on the top panel reveals strong star formation. Thanks to the image sharpening process, it has been possible to separate very well the spectra of the quasar (centre) from that of the blob (bottom). The spectrum of the blob shows exclusively strong narrow emission lines having properties indicative of ionisation by the quasar light. There is no trace of stellar light, down to very faint levels, in the surroundings of the quasar. A strongly perturbed galaxy, showing all signs of a recent collision, is also seen on the HST images 2 arcseconds away (corresponding to about 50,000 light-years), with the VLT spectra showing it to be presently in a state where it forms stars at a frantic rate. "The absence of a massive host galaxy, combined with the existence of the blob and the star-forming galaxy, leads us to believe that we have uncovered a really exotic quasar," says team member Frédéric Courbin (Ecole Polytechnique Fédérale de Lausanne, Switzerland). "There is little doubt that a burst in the formation of stars in the companion galaxy and the quasar itself have been ignited by a collision that must have taken place about 100 million years ago. What happened to the putative quasar host remains unknown." HE0450-2958 constitutes a challenging case for interpretation.
The astronomers propose several possible explanations, that will need to be further investigated and confronted. Has the host galaxy been completely disrupted as a result of the collision? It is hard to imagine how that could happen. Has an isolated black hole captured gas while crossing the disc of a spiral galaxy? This would require very special conditions and would probably not have caused such a tremendous perturbation as is observed in the neighbouring galaxy. Another intriguing hypothesis is that the galaxy harbouring the black hole was almost exclusively made of dark matter. "Whatever the solution of this riddle, the strong observable fact is that the quasar host galaxy, if any, is much too faint", says team member Knud Jahnke (Astrophysikalisches Institut Potsdam, Germany). The report on HE0450-2958 is published in the September 15, 2005 issue of the journal Nature ("Discovery of a bright quasar without a massive host galaxy" by Pierre Magain et al.).
Efficiency and security problems of anonymous key agreement protocol based on chaotic maps
NASA Astrophysics Data System (ADS)
Yoon, Eun-Jun
2012-07-01
In 2011, Niu-Wang proposed an anonymous key agreement protocol based on chaotic maps in [Niu Y, Wang X. An anonymous key agreement protocol based on chaotic maps. Commun Nonlinear Sci Simulat 2011;16(4):1986-92]. Niu-Wang's protocol not only achieves session key agreement between a server and a user, but also allows the user to anonymously interact with the server. Nevertheless, this paper points out that Niu-Wang's protocol has the following efficiency and security problems: (1) The protocol has a computational efficiency problem when a trusted third party decrypts the message sent by the user. (2) The protocol is vulnerable to a Denial of Service (DoS) attack based on illegal message modification by an attacker.
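Chaotic-map key agreement protocols of this family rest on the semigroup (commutativity) property of Chebyshev polynomials, T_r(T_s(x)) = T_s(T_r(x)) = T_rs(x), used in a Diffie-Hellman-like exchange. The sketch below shows only that property with small exponents in floating point; actual protocols use extended Chebyshev polynomials over a finite field together with the encryption and anonymity mechanisms analyzed in the paper.

```python
import math

def chebyshev(n: int, x: float) -> float:
    """Chebyshev polynomial T_n(x) for |x| <= 1, via T_n(cos t) = cos(n t)."""
    return math.cos(n * math.acos(x))

x = 0.53          # public seed value in [-1, 1]
r, s = 123, 77    # private integers chosen by the two parties (illustrative)

# Each party publishes T_own(x) and applies its own exponent to the other's value.
alice_public = chebyshev(r, x)
bob_public = chebyshev(s, x)

k_alice = chebyshev(r, bob_public)   # T_r(T_s(x))
k_bob = chebyshev(s, alice_public)   # T_s(T_r(x))

# Both equal T_{rs}(x), giving a shared secret; real protocols work over Z_p
# to avoid the floating-point precision issues a large exponent would cause.
print(abs(k_alice - k_bob) < 1e-6)
```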
A Unified Fault-Tolerance Protocol
NASA Technical Reports Server (NTRS)
Miner, Paul; Geser, Alfons; Pike, Lee; Maddalon, Jeffrey
2004-01-01
Davies and Wakerly show that Byzantine fault tolerance can be achieved by a cascade of broadcasts and middle value select functions. We present an extension of the Davies and Wakerly protocol, the unified protocol, and its proof of correctness. We prove that it satisfies validity and agreement properties for communication of exact values. We then introduce bounded communication error into the model. Inexact communication is inherent for clock synchronization protocols. We prove that validity and agreement properties hold for inexact communication, and that exact communication is a special case. As a running example, we illustrate the unified protocol using the SPIDER family of fault-tolerant architectures. In particular we demonstrate that the SPIDER interactive consistency, distributed diagnosis, and clock synchronization protocols are instances of the unified protocol.
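A minimal sketch of the middle-value-select primitive referred to above (the function itself, not the full unified or SPIDER protocols): with n inputs of which at most f are faulty and n ≥ 2f + 1, the middle element of the sorted inputs always lies between values received from non-faulty sources.

```python
def middle_value_select(values):
    """Return the middle element of the sorted inputs.

    With n inputs, at most f of them faulty, and n >= 2f + 1, the middle
    value is always bracketed by values from non-faulty sources, so a single
    Byzantine input cannot drag the result outside the good range.
    """
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

# Three good clocks report ~100 ticks; one Byzantine source reports garbage.
readings = [99.8, 100.1, 100.3, 1.0e6]
print(middle_value_select(readings))   # 100.3, within the range of good readings
```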
Zakaria, A; Schuette, W; Younan, C
2011-01-01
The preceding DIN 6800-2 (1997) protocol has been revised by a German task group, and its latest version was published in March 2008 as the national standard dosimetry protocol DIN 6800-2 (2008 March). Since then, in Germany the determination of absorbed dose to water for high-energy photon and electron beams has to be performed according to this new German dosimetry protocol. The IAEA Code of Practice TRS-398 (2000) and the AAPM TG-51 are the two main protocols applied internationally. The new German version has largely adopted the methodology and dosimetric data of TRS-398. This paper systematically investigates the DIN 6800-2 protocol and compares it with the procedures and results obtained using the international protocols. The investigation was performed with 6 MV and 18 MV photon beams as well as with electron beams from 5 MeV to 21 MeV. While only cylindrical chambers were used for photon beams, the measurements of electron beams were performed using both cylindrical and plane-parallel chambers. It was found that the discrepancies in the determination of absorbed dose to water among the three protocols were 0.23% for photon beams and 1.2% for electron beams. The determination of absorbed dose to water was also checked by a national audit procedure using TLDs. The comparison between the measurements following the DIN 6800-2 protocol and the TLD audit procedure confirmed a difference of less than 2%. The advantage of the new German protocol DIN 6800-2 lies in its abandonment of the cross-calibration procedure as well as its clear presentation of formulas and parameters. In the past, the different protocols evolved along different lines; today, fortunately, good convergence in concepts and methods has been achieved. PMID:22287987
Zakaria, A; Schuette, W; Younan, C
2011-04-01
The preceding DIN 6800-2 (1997) protocol has been revised by a German task group, and its latest version was published in March 2008 as the national standard dosimetry protocol DIN 6800-2 (2008 March). Since then, in Germany the determination of absorbed dose to water for high-energy photon and electron beams has to be performed according to this new German dosimetry protocol. The IAEA Code of Practice TRS-398 (2000) and the AAPM TG-51 are the two main protocols applied internationally. The new German version has largely adopted the methodology and dosimetric data of TRS-398. This paper systematically investigates the DIN 6800-2 protocol and compares it with the procedures and results obtained using the international protocols. The investigation was performed with 6 MV and 18 MV photon beams as well as with electron beams from 5 MeV to 21 MeV. While only cylindrical chambers were used for photon beams, the measurements of electron beams were performed using both cylindrical and plane-parallel chambers. It was found that the discrepancies in the determination of absorbed dose to water among the three protocols were 0.23% for photon beams and 1.2% for electron beams. The determination of absorbed dose to water was also checked by a national audit procedure using TLDs. The comparison between the measurements following the DIN 6800-2 protocol and the TLD audit procedure confirmed a difference of less than 2%. The advantage of the new German protocol DIN 6800-2 lies in its abandonment of the cross-calibration procedure as well as its clear presentation of formulas and parameters. In the past, the different protocols evolved along different lines; today, fortunately, good convergence in concepts and methods has been achieved.
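For context, both DIN 6800-2 (2008) and TRS-398 determine absorbed dose to water from an ionization chamber reading with essentially the same formalism; a generic statement (notation loosely follows TRS-398; the individual correction factors and their numerical data are where the protocols differ) is:

```latex
% Absorbed dose to water at the reference depth for a beam of quality Q
% (TRS-398-style formalism; DIN 6800-2 uses an equivalent expression)
\[
  D_{w,Q} \;=\; M_Q \, N_{D,w,Q_0} \, k_{Q,Q_0},
\]
% where M_Q is the chamber reading corrected for influence quantities
% (air pressure/temperature, polarity, ion recombination, ...),
% N_{D,w,Q_0} is the absorbed-dose-to-water calibration coefficient in the
% reference beam quality Q_0, and k_{Q,Q_0} corrects for the difference
% between the user beam quality Q and Q_0.
```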
Analytical approach to cross-layer protocol optimization in wireless sensor networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in terms of the concatenated protocol parameters. New source-to-destination routes are sought that optimize cross-layer interdependencies to achieve the "best available" performance in the WSN. The protocol design, modified from a known reactive protocol, adapts the achievable performance to the transient network conditions and resource levels. Control of network behavior is realized through the conditional rates of the MVPPs. Optimal cross-layer protocol parameters are determined by stochastic dynamic programming conditions derived from models of transient packetized sensor data flows. Moreover, the defining conditions for WSN configurations, grouping sensor nodes into clusters and establishing data aggregation at processing nodes within those clusters, lead to computationally tractable solutions to the stochastic differential equations that describe network dynamics. Closed-form solution characteristics provide an alternative to the "directed diffusion" methods for resource-efficient WSN protocols published previously by other researchers. Performance verification of the resulting cross-layer designs is found by embedding the optimality conditions for the protocols in actual WSN scenarios replicated in a wireless network simulation environment. Performance tradeoffs among protocol parameters remain for a sequel to the paper.
Genetic characterization of measles viruses isolated in Turkey during 2000 and 2001
Korukluoglu, Gulay; Liffick, Stephanie; Guris, Dalya; Kobune, Fumio; Rota, Paul A; Bellini, William J; Ceylan, Ali; Ertem, Meliksah
2005-01-01
Background Molecular epidemiologic studies have made significant contributions to measles surveillance activities by helping to identify source and transmission pathways of the virus. This report describes the genetic characterization of wild-type measles viruses isolated in Turkey in 2000 and 2001. Results Wild-type measles viruses were isolated from 24 cases from five provinces in Turkey during 2001. The viruses were analyzed using the standard genotyping protocols. All isolates were classified as genotype D6, the same genotype that was identified in Turkey in previous outbreaks during 1998. Conclusion Turkey has begun implementation of a national program to eliminate measles by 2010. Therefore, this baseline genotype data will provide a means to monitor the success of the elimination program. PMID:16029506
Savas, Jeffrey N.; De Wit, Joris; Comoletti, Davide; Zemla, Roland; Ghosh, Anirvan
2015-01-01
Ligand-receptor interactions represent essential biological triggers which regulate many diverse and important cellular processes. We have developed a discovery-based proteomic biochemical protocol which couples affinity purification with multidimensional liquid chromatographic tandem mass spectrometry (LCLC-MS/MS) and bioinformatic analysis. Compared to previous approaches, our analysis increases sensitivity, shortens analysis duration, and boosts comprehensiveness. In this protocol, receptor extracellular domains are fused with the Fc region of IgG to generate fusion proteins that are purified from transfected HEK293T cells. These “ecto-Fcs” are coupled to protein A beads and serve as baits for binding assays with prey proteins extracted from rodent brain. After capture, the affinity purified proteins are digested into peptides and comprehensively analyzed by LCLC-MS/MS with ion trap mass spectrometers. In four working days, this protocol can generate shortlists of candidate ligand-receptor protein-protein interactions. Our “Ecto-Fc MS” approach outperforms antibody-based approaches and provides a reproducible and robust framework to identify extracellular ligand – receptor interactions. PMID:25101821
Functional genomics platform for pooled screening and mammalian genetic interaction maps
Kampmann, Martin; Bassik, Michael C.; Weissman, Jonathan S.
2014-01-01
Systematic genetic interaction maps in microorganisms are powerful tools for identifying functional relationships between genes and defining the function of uncharacterized genes. We have recently implemented this strategy in mammalian cells as a two-stage approach. First, genes of interest are robustly identified in a pooled genome-wide screen using complex shRNA libraries. Second, phenotypes for all pairwise combinations of hit genes are measured in a double-shRNA screen and used to construct a genetic interaction map. Our protocol allows for rapid pooled screening under various conditions without a requirement for robotics, in contrast to arrayed approaches. Each stage of the protocol can be implemented in ~2 weeks, with additional time for analysis and generation of reagents. We discuss considerations for screen design, and present complete experimental procedures as well as a full computational analysis suite for identification of hits in pooled screens and generation of genetic interaction maps. While the protocols outlined here were developed for our original shRNA-based approach, they can be applied more generally, including to CRISPR-based approaches. PMID:24992097
Fakir, Hatim; Hlatky, Lynn; Li, Huamin; Sachs, Rainer
2013-12-01
Optimal treatment planning for fractionated external beam radiation therapy requires inputs from radiobiology based on recent thinking about the "five Rs" (repopulation, radiosensitivity, reoxygenation, redistribution, and repair). The need is especially acute for the newer, often individualized, protocols made feasible by progress in image guided radiation therapy and dose conformity. Current stochastic tumor control probability (TCP) models incorporating tumor repopulation effects consider "stem-like cancer cells" (SLCC) to be independent, but the authors here propose that SLCC-SLCC interactions may be significant. The authors present a new stochastic TCP model for repopulating SLCC interacting within microenvironmental niches. Our approach is meant mainly for comparing similar protocols. It aims at practical generalizations of previous mathematical models. The authors consider protocols with complete sublethal damage repair between fractions. The authors use customized open-source software and recent mathematical approaches from stochastic process theory for calculating the time-dependent SLCC number and thereby estimating SLCC eradication probabilities. As specific numerical examples, the authors consider predicted TCP results for a 2 Gy per fraction, 60 Gy protocol compared to 64 Gy protocols involving early or late boosts in a limited volume to some fractions. In sample calculations with linear quadratic parameters α = 0.3 per Gy, α∕β = 10 Gy, boosting is predicted to raise TCP from a dismal 14.5% observed in some older protocols for advanced NSCLC to above 70%. This prediction is robust as regards: (a) the assumed values of parameters other than α and (b) the choice of models for intraniche SLCC-SLCC interactions. However, α = 0.03 per Gy leads to a prediction of almost no improvement when boosting. The predicted efficacy of moderate boosts depends sensitively on α. Presumably, the larger values of α are the ones appropriate for individualized treatment protocols, with the smaller values relevant only to protocols for a heterogeneous patient population. On that assumption, boosting is predicted to be highly effective. Front boosting, apart from practical advantages and a possible advantage as regards iatrogenic second cancers, also probably gives a slightly higher TCP than back boosting. If the total number of SLCC at the start of treatment can be measured even roughly, it will provide a highly sensitive way of discriminating between various models and parameter choices. Updated mathematical methods for calculating repopulation allow credible generalizations of earlier results.
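The authors' stochastic niche model is far richer than this, but a minimal deterministic version of the quoted ingredients (linear-quadratic cell kill with 2 Gy fractions and a Poisson TCP) already shows why the predicted benefit of boosting is so sensitive to α:

```latex
% Surviving fraction per 2 Gy fraction under the linear-quadratic model
\[
  S_1 = \exp\!\bigl[-\alpha d - \beta d^{2}\bigr]
      = \exp\!\bigl[-\alpha d \,\bigl(1 + d/(\alpha/\beta)\bigr)\bigr],
  \qquad d = 2\ \mathrm{Gy}.
\]
% After n fractions, a Poisson estimate of tumor control with N_0 initial
% stem-like cancer cells (repopulation neglected in this toy version):
\[
  \mathrm{TCP} \;\approx\; \exp\!\bigl[-N_0\, S_1^{\,n}\bigr].
\]
% With alpha = 0.3 per Gy and alpha/beta = 10 Gy, S_1 = exp(-0.72) ~ 0.49 per
% fraction, so two extra 2 Gy fractions (60 Gy -> 64 Gy) multiply the expected
% number of surviving cells by ~0.24; with alpha = 0.03 per Gy the same boost
% only multiplies it by ~0.87, which is why the predicted benefit of boosting
% depends so strongly on alpha.
```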
Zimmerman, Christian E.; Nielsen, Roger L.
2003-01-01
The use of strontium-to-calcium (Sr/Ca) ratios in otoliths is becoming a standard method to describe life history type and the chronology of migrations between freshwater and seawater habitats in teleosts (e.g. Kalish, 1990; Radtke et al., 1990; Secor, 1992; Rieman et al., 1994; Radtke, 1995; Limburg, 1995; Tzeng et al. 1997; Volk et al., 2000; Zimmerman, 2000; Zimmerman and Reeves, 2000, 2002). This method provides critical information concerning the relationship and ecology of species exhibiting phenotypic variation in migratory behavior (Kalish, 1990; Secor, 1999). Methods and procedures, however, vary among laboratories because a standard method or protocol for measurement of Sr in otoliths does not exist. In this note, we examine the variations in analytical conditions in an effort to increase precision of Sr/Ca measurements. From these findings we argue that precision can be maximized with higher beam current (although there is specimen damage) than previously recommended by Gunn et al. (1992).
Epidemiology of Dengue Disease in Malaysia (2000–2012): A Systematic Literature Review
Mohd-Zaki, Abdul Hamid; Brett, Jeremy; Ismail, Ellyana; L'Azou, Maïna
2014-01-01
A literature survey and analysis was conducted to describe the epidemiology of dengue disease in Malaysia between 2000 and 2012. Published literature was searched for epidemiological studies of dengue disease, using specific search strategies for each electronic database; 237 relevant data sources were identified, 28 of which fulfilled the inclusion criteria. The epidemiology of dengue disease in Malaysia was characterized by a non-linear increase in the number of reported cases from 7,103 in 2000 to 46,171 in 2010, and a shift in the age range predominance from children toward adults. The overall increase in dengue disease was accompanied by a rise in the number, but not the proportion, of severe cases. The dominant circulating dengue virus serotypes changed continually over the decade and differed between states. Several gaps in epidemiological knowledge were identified; in particular, studies of regional differences, age-stratified seroprevalence, and hospital admissions. Protocol registration PROSPERO #CRD42012002293 PMID:25375211
Investigating the Effects of the Interaction Intensity in a Weak Measurement.
Piacentini, Fabrizio; Avella, Alessio; Gramegna, Marco; Lussana, Rudi; Villa, Federica; Tosi, Alberto; Brida, Giorgio; Degiovanni, Ivo Pietro; Genovese, Marco
2018-05-03
Measurements are crucial in quantum mechanics, for fundamental research as well as for applied fields like quantum metrology, quantum-enhanced measurements and other quantum technologies. In recent years, weak-interaction-based protocols like Weak Measurements and Protective Measurements have been experimentally realized, showing peculiar features that lead to surprising advantages in several different applications. In this work we analyze the validity range of such measurement protocols, that is, how the interaction strength affects the weak value extraction, by measuring different polarization weak values on heralded single photons. We show that, even in the weak-interaction regime, the coupling intensity limits the range of achievable weak values, setting a threshold on the signal amplification effect exploited in many weak-measurement-based experiments.
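For reference, the quantity being extracted is the weak value, defined in the standard pre- and post-selected formalism (a textbook relation, not a result of this paper):

```latex
% Weak value of an observable A for pre-selection |psi> and post-selection |phi>
\[
  A_w \;=\; \frac{\langle \varphi \,|\, \hat{A} \,|\, \psi \rangle}
                 {\langle \varphi \,|\, \psi \rangle}.
\]
% In the weak-interaction limit the mean pointer shift is proportional to
% Re(A_w) (and, for a momentum readout, to Im(A_w)); at finite coupling
% strength higher-order terms limit the range of weak values that can be
% faithfully extracted, which is the regime studied in the paper.
```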
Lu, Hao; Papathomas, Thomas G; van Zessen, David; Palli, Ivo; de Krijger, Ronald R; van der Spek, Peter J; Dinjens, Winand N M; Stubbs, Andrew P
2014-11-25
In the prognosis and treatment of adrenal cortical carcinoma (ACC), the selection of the most proliferatively active areas (hotspots) within a slide and objective quantification of the immunohistochemical Ki67 labelling index (LI) are of critical importance. In addition to intratumoral heterogeneity in proliferative rate, i.e. levels of Ki67 expression within a given ACC, a lack of uniformity and reproducibility in the method of quantification may confound an accurate assessment of the Ki67 LI. We have implemented an open source toolset, Automated Selection of Hotspots (ASH), for automated hotspot detection and quantification of the Ki67 LI. ASH uses an NDPI (NanoZoomer Digital Pathology Image) splitter to convert the NDPI-format digital slide scanned from the Hamamatsu instrument into a conventional TIFF or JPEG image for automated segmentation and an adaptive step-finding hotspot detection algorithm. Quantitative hotspot ranking is provided by functionality from the open source application ImmunoRatio as part of the ASH protocol. The output is a ranked set of hotspots with accompanying quantitative values based on whole-slide ranking. This open source tool for automated detection and quantitative ranking of hotspots supports histopathologists in selecting the 'hottest' hotspot areas in adrenocortical carcinoma. To give the wider community easy access to ASH, we implemented a Galaxy virtual machine (VM) of ASH, which is available from http://bioinformatics.erasmusmc.nl/wiki/Automated_Selection_of_Hotspots . The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_216.
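ASH itself is distributed as a Galaxy tool built around the NDPI splitter and ImmunoRatio; purely to illustrate the idea of ranking hotspots by labelling index, here is a tiny sliding-window sketch over a synthetic map of Ki67-positive and total nuclei counts (the window size, counts, and injected hotspot are made up).

```python
import numpy as np

def rank_hotspots(positive, total, window=50, top_k=3):
    """Rank windowed fields of view by Ki67 labelling index (positive/total)."""
    hotspots = []
    for r in range(0, positive.shape[0] - window + 1, window):
        for c in range(0, positive.shape[1] - window + 1, window):
            pos = positive[r:r + window, c:c + window].sum()
            tot = total[r:r + window, c:c + window].sum()
            if tot > 0:
                hotspots.append(((r, c), 100.0 * pos / tot))
    return sorted(hotspots, key=lambda item: item[1], reverse=True)[:top_k]

rng = np.random.default_rng(1)
total_nuclei = rng.poisson(4, size=(500, 500))                     # nuclei per tile
ki67_positive = rng.binomial(total_nuclei, 0.15)                   # baseline ~15% LI
ki67_positive[100:150, 300:350] = total_nuclei[100:150, 300:350]   # synthetic hotspot

for (row, col), li in rank_hotspots(ki67_positive, total_nuclei):
    print(f"window at ({row},{col}): Ki67 LI = {li:.1f}%")
```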
NASA Astrophysics Data System (ADS)
Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won
2005-12-01
Embedded systems have been applied in many fields, including households and industrial sites, and user interfaces with simple on-screen displays have become increasingly common. User demands are growing and the systems have ever more fields of application because of the high penetration rate of the Internet, so the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on an embedded Linux system, real-time broadcasting of video images over the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and each frame from the web camera is compared with the previous one to measure a displacement vector, using a block matching algorithm and an edge detection algorithm for fast operation. The displacement vector then drives the pan/tilt motors through an RS-232 serial cable. The embedded board uses the Samsung S3C2410 MPU, which is built around the ARM920T core. An embedded Linux kernel was ported to the board and a root file system mounted. The stored images are sent to the client PC through the web browser, using the Linux networking functions and the TCP/IP protocol.
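The displacement-vector step can be illustrated with a small sum-of-absolute-differences block search; the block and search-range sizes below are placeholders, not the values used on the S3C2410 board, and the resulting vector is what would be translated into pan/tilt commands over RS-232.

```python
import numpy as np

def block_match(prev, curr, top, left, block=16, search=8):
    """Find the displacement of a block between two frames by exhaustive
    sum-of-absolute-differences (SAD) search within +/- `search` pixels."""
    ref = prev[top:top + block, left:left + block].astype(int)
    best, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + block > curr.shape[0] or c + block > curr.shape[1]:
                continue
            sad = np.abs(curr[r:r + block, c:c + block].astype(int) - ref).sum()
            if best is None or sad < best:
                best, best_vec = sad, (dy, dx)
    return best_vec

rng = np.random.default_rng(2)
frame0 = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
frame1 = np.roll(frame0, shift=(3, -5), axis=(0, 1))   # simulate camera motion

print(block_match(frame0, frame1, top=100, left=160))    # expected: (3, -5)
```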
Glowing Hot Transiting Exoplanet Discovered
NASA Astrophysics Data System (ADS)
2003-04-01
VLT Spectra Indicate Shortest-Known-Period Planet Orbiting OGLE-TR-3 Summary More than 100 exoplanets in orbit around stars other than the Sun have been found so far. But while their orbital periods and distances from their central stars are well known, their true masses cannot be determined with certainty, only lower limits. This fundamental limitation is inherent in the common observational method to discover exoplanets - the measurements of small and regular changes in the central star's velocity, caused by the planet's gravitational pull as it orbits the star. However, in two cases so far, it has been found that the exoplanet's orbit happens to be positioned in such a way that the planet moves in front of the stellar disk, as seen from the Earth. This "transit" event causes a small and temporary dip in the star's brightness, as the planet covers a small part of its surface, which can be observed. The additional knowledge of the spatial orientation of the planetary orbit then permits a direct determination of the planet's true mass. Now, a group of German astronomers [1] have found a third star in which a planet, somewhat larger than Jupiter, but only half as massive, moves in front of the central star every 28.5 hours . The crucial observation of this solar-type star, designated OGLE-TR-3 [2] was made with the high-dispersion UVES spectrograph on the Very Large Telescope (VLT) at the ESO Paranal Observatory (Chile). It is the exoplanet with the shortest period found so far and it is very close to the star, only 3.5 million km away. The hemisphere that faces the star must be extremely hot, about 2000 °C and the planet is obviously losing its atmosphere at high rate . PR Photo 10a/03 : The star OGLE-TR-3 . PR Photo 10b/03 : VLT UVES spectrum of OGLE-TR-3. PR Photo 10c/03 : Relation between stellar brightness and velocity (diagram). PR Photo 10d/03 : Observed velocity variation of OGLE-TR-3. PR Photo 10e/03 : Observed brightness variation of OGLE-TR-3. The search for exoplanets More than 100 planets in orbit around stars other than the Sun have been found so far. These "exoplanets" come in many different sizes and they move in a great variety of orbits at different distances from their central star, some nearly round and others quite elongated. Some planets are five to ten times more massive than the largest one in the solar system, Jupiter - the lightest exoplanets known at this moment are about half as massive as Saturn, i.e. about 50 times more massive than the Earth. Astronomers are hunting exoplanets not just to discover more such objects, but also to learn more about the apparent diversity of planetary systems. The current main research goal is to eventually discover an Earth-like exoplanet, but the available telescopes and instrumentation are still not "sensitive" enough for this daunting task. However, also in this context, it is highly desirable to know not only the orbits of the observable exoplanets, but also their true masses . But this is not an easy task. Masses of exoplanets Virtually all exoplanets detected so far have been found by an indirect method - the measurement of stellar velocity variations . It is based on the gravitational pull of the orbiting planet that causes the central star to move a little back and forth; the heavier the planet, the greater is the associated change in the star's velocity. 
This technique is rapidly improving: the new HARPS spectrograph (High Accuracy Radial Velocity Planet Searcher) , now being tested on the 3.6-m telescope at the ESO La Silla Observatory , can measure such stellar motions with an unrivalled accuracy of about 1 metre per second (m/s), cf. ESO PR 06/03 . It will shortly be able to search for exoplanets only a few times more massive than the Earth. However, velocity measurements alone do not allow to determine the true mass of the orbiting planet. Because of the unknown inclination of the planetary orbit (to the line-of-sight), they only provide a lower limit to this mass . Additional information about this orbital inclination is therefore needed to derive the true mass of an exoplanet. The transit method Fortunately, this information becomes available if the exoplanet is known to move across ("transit") the star's disk, as seen from the Earth; the orbital plane must then necessarily be very near the line-of-sight. This phenomenon is exactly the same that happens in our own solar system, when the inner planets Mercury and Venus pass in front of the solar disk, as seen from the Earth [3]. A solar eclipse (caused by the Moon moving in front of the Sun) is a more extreme case of the same type of event. During such an exoplanet transit, the observed brightness of the star will decrease slightly because the planet blocks a part of the stellar light. The larger the planet, the more of the light is blocked and the more the brightness of the star will decrease. A study of the way this brightness changes with time (astronomers refer to the "light curve"), when combined with radial velocity measurements, allows a complete determination of the planetary orbit, including the exact inclination. It also provides accurate information about the planet's size, true mass and hence, density. The chances that a particular exoplanet passes in front of the disk of its central star as seen from the Earth are small. However, because of the crucial importance of such events in order to characterize exoplanets fully, astronomers have for some time been actively searching for stars that experience small regularly occurring "brightness dips" that might possibly be caused by exoplanetary transits. The OGLE list Last year, a first list of 59 such possible cases of stars with transiting planets was announced by the Optical Gravitational Lensing Experiment (OGLE) [2]. These stars were found - within a sample of about 5 million stars observed during a 32-day period - to exhibit small and regular brightness dips that might possibly be caused by transits of an exoplanet. For one of these stars, OGLE-TR-56 , a team of American astronomers soon thereafter observed slight variations of the velocity , strongly indicating the presence of an exoplanet around that star. UVES spectra of OGLE-TR-3 ESO PR Photo 10a/03 ESO PR Photo 10a/03 [Preview - JPEG: 400 x 466 pix - 41k [Normal - JPEG: 800 x 931 pix - 280k] ESO PR Photo 10b/03 ESO PR Photo 10b/03 [Preview - JPEG: 492 x 400 pix - 52k [Normal - JPEG: 984 x 800 pix - 224k] Captions : PR Photo 10a/03 shows the 16.5-mag star OGLE-TR-3 , a solar-like star in the direction of the Galactic Center, discovered during an extensive photometric search for planetary and low-luminosity object transits [2]. The image is reproduced from an I-band CCD frame of a 1 x 1 arcmin 2 sky field. North is up and East is left. 
PR Photo 10b/03 displays a small portion of a high-dispersion spectrum of OGLE-TR-3 , obtained with the UVES spectrograph at the 8.2-m VLT KUEYEN telescope at the Paranal Observatory (Chile). It is divided into five adjacent wavelength intervals and represents the mean of ten 1-hour spectral exposures. The fully drawn curve shows the spectrum of the "best fitting" stellar model from which the composition, temperature, mass, age of OGLE-TR-3 were deduced. Now, a team of German and ESO astronomers [1] have used the UVES High-Dispersion Spectrograph on the 8.2-m VLT KUEYEN telescope at the Paranal Observatory (Chile) to obtain very detailed spectra of another star on that list, OGLE-TR-3 , cf. PR Photos 10a-b/03 . Over a period of one month, a total of ten high-resolution spectra - each with an exposure time of about one hour - were obtained of the 16.5-mag object, i.e. its brightness is about 16,000 fainter that what can be perceived with the unaided eye. A careful evaluation shows that OGLE-TR-3 is very similar to the Sun, with a temperature of about 5800 °C (6100 K). And most interestingly, it undergoes velocity variations of the order of 120 m/s . The exoplanet at OGLE-TR-3 ESO PR Photo 10c/03 ESO PR Photo 10c/03 [Preview - JPEG: 400 x 507 pix - 24k [Normal - JPEG: 800 x 1014 pix - 95k] ESO PR Photo 10d/03 ESO PR Photo 10d/03 [Preview - JPEG: 466 x 400 pix - 20k [Normal - JPEG: 932 x 800 pix - 120k] ESO PR Photo 10e/03 ESO PR Photo 10e/03 [Preview - JPEG: 510 x 200 pix - 21k [Normal - JPEG: 1024 x 400 pix - 120k] Captions : PR Photo 10c/03 illustrates the relationship between the variations in stellar brightness and velocity, caused by an orbiting exoplanet that transits the disk of its central star. Consecutive positions of the planet in its (circular) orbit are marked by black dots, with the motion from left to right. The figure has been drawn to scale, i.e. the dots actually represent the size of the planet itself. At the top is the view of the planetary orbit from above - below a view from the Earth with the planetary transit. Further down, the lightcurve with a brightness (intensity) dip when the planet blocks a small part of the star's light is shown, and at the bottom the corresponding change in the star's velocity. Before the transit, when the planet moves towards us, the star moves in the opposite direction, i.e. away from us and the velocity is positive; during the transit, the relative velocity is zero and later is becomes negative as the star moves towards us. PR Photo 10d/03 displays the velocity variation of the star OGLE-TR-3 , as measured from ten VLT-UVES spectra (each with 1-hour exposure time) and plotted according to the "photometric phase". This means that the planetary transit occurs at phase 0 (left) and again at phase 1 (right). The observed variation is in agreement with the expected one, cf. PR Photo 10c/03 . The fully drawn curve represents the best fit to the observations (velocity variation about 120 m/s) - the mass of the planet is derived from this. PR Photo 10e/03 shows the brightness variation ("light-curve") of the star OGLE-TR-3 obtained during the OGLE observations [2]. The crosses correspond to the observations and the fully drawn curve represents a model fit, with the stellar parameters from the analysis of the UVES spectra (1 solar radius and 1 solar mass) and the planetary parameters from the velocity analysis (0.6 Jupiter mass). The best fit allows determination of the planet's size as about 200,000 km (1.4 times the size of Jupiter). 
The 2 per cent dip in the brightness of OGLE-TR-3 , as observed during the OGLE programme, occurs every 28 hours 33 minutes (1.1899 days), cf. PR Photo 10e/03 . The UVES velocity measurements ( PR Photo 10d/03 ) fit this period well and reveal, with high probability, the presence of an exoplanet orbiting OGLE-TR-3 with this period. In any case, the observations firmly exclude that the well observed brightness variations could be due to a small stellar companion. A red dwarf star would have caused velocity variations of 15 km/s and a brown dwarf star 2.5 km/s; both would have been easy to observe with UVES, and it is clear that such variations can be excluded. Although the available observations are still insufficient to allow an accurate determination of the planetary properties, the astronomers provisionally deduce a true mass of the planet of the order of one half of that of Jupiter . The density is found to be about 250 kg/m 3 , only one-quarter of that of water or one-fifth of that of Jupiter, so the planet is quite big for this mass - a bit "blown up". It is obviously a planet of the gaseous type . A very hot planet The orbital period, 28 hours 33 minutes (1.1899 days), is the shortest known for any exoplanet and the distance between the star and the planet is correspondingly small, only 3.5 million kilometres . The temperature of the side of the planet facing the star must therefore be very high, of the order of 2000 °C . Clearly, the planet must be losing its atmosphere by evaporation. The astronomers also conclude that it might in fact be possible to observe this exoplanet directly because of its comparatively strong infrared radiation. An attempt to do so will soon be made. As only the third exoplanet found this way (after those at the stars HD209458 and OGLE-TR-56 ), the new object confirms the current impression that a considerable number of stars may possess giant planets in close orbits. Since such planets cannot form so close to their parent star, they must have migrated inwards to the current orbit from a much larger, initial distance. It is not known at this time with certainty how this might happen. Future prospects It is expected that more observational campaigns will be made to search for transiting planets around other stars. There is good hope that OGLE-TR-3 and OGLE-TR-56 are just the first two of a substantial number of exoplanets to be discovered this way. Some years from now, searches will also begin from dedicated space observatories, e.g. ESA's Eddington and Darwin , and NASA's Kepler .
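A quick order-of-magnitude check of the quoted density, using the caption's values of roughly 0.6 Jupiter masses and a radius of about 100,000 km (half the ~200,000 km size quoted); the published analysis is more careful, so this is only a consistency check:

```latex
% Mean density from the transit radius and the velocity-derived mass
\[
  \rho \;=\; \frac{M}{\tfrac{4}{3}\pi R^{3}}
        \;\approx\; \frac{0.6 \times 1.9\times10^{27}\ \mathrm{kg}}
                         {\tfrac{4}{3}\pi\,\bigl(1.0\times10^{8}\ \mathrm{m}\bigr)^{3}}
        \;\approx\; 2.7\times10^{2}\ \mathrm{kg\,m^{-3}},
\]
% i.e. roughly the "about 250 kg/m^3" quoted in the text -- about a quarter of
% the density of water, consistent with an inflated gaseous planet.
```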
NASA Astrophysics Data System (ADS)
Steiner, S. M.; Wood, J. H.
2015-12-01
As decomposition rates are affected by climate change, understanding the soil interactions that govern plant growth and decomposition becomes a vital part of students' knowledge base. The Global Decomposition Project (GDP) is designed to introduce and educate students about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. The Interactive Model of Leaf Decomposition (IMOLD) uses animations and modeling to teach students about the carbon cycle, leaf anatomy, and the role of microbes in decomposition. Paired together, IMOLD teaches the background information and allows simulation of numerous scenarios, and the GDP is a data collection protocol that allows students to gather usable measurements of decomposition in the field. Our presentation will detail how the GDP protocol works, how to obtain or make the materials needed, and how results will be shared. We will also highlight learning objectives from the three animations of IMOLD, and demonstrate how students can experiment with different climates and litter types using the interactive model to explore a variety of decomposition scenarios. The GDP demonstrates how scientific methods can be extended to educate broader audiences, and data collected by students can provide new insight into global patterns of soil decomposition. Using IMOLD, students will gain a better understanding of carbon cycling in the context of litter decomposition, as well as learn to pose questions they can answer with an authentic computer model. Together, the GDP protocols and IMOLD provide a pathway for scientists and educators to interact and reach meaningful education and research goals.
Cannibal Stars Cause Giant Explosions in Fornax Cluster Galaxy
NASA Astrophysics Data System (ADS)
2000-07-01
The VLT Observes Most Remote Novae Ever Seen About 70 million years ago, when dinosaurs were still walking on the Earth, a series of violent thermo-nuclear explosions took place in a distant galaxy. After a very long travel across vast reaches of virtually empty space (70 million light-years, or ~7 x 10²⁰ km), dim light carrying the message about these events has finally reached us. It was recorded by the ESO Very Large Telescope (VLT) at the Paranal Observatory (Chile) during an observing programme by a group of Italian astronomers [1]. The subsequent analysis has shown that the observers witnessed the most distant nova outbursts ever seen. They were caused by "stellar cannibalism" in binary systems in which one relatively cool star loses matter to its smaller and hotter companion. An instability results that leads to the ignition of a "hydrogen bomb" on the surface of the receiving star. The "Stella Nova" Phenomenon A stellar outburst of the type now observed with the VLT is referred to as a "Stella Nova" ("new star" in Latin), or just "Nova". Novae caused by explosions in binary stars in our home galaxy, the Milky Way system, are relatively frequent, and about every second or third year one of them is bright enough to be easily visible with the naked eye. For our ancestors, who had no means to see the faint binary star before the explosion, it looked as if a new star had been born in the sky, hence the name. The most common nova explosion occurs in a binary stellar system in which a white dwarf (a very dense and hot, compact star with a mass comparable to that of the Sun and a size like the Earth) accretes hydrogen from a cooler and larger red dwarf star [2]. As the hydrogen collects on the surface of the white dwarf star, it becomes progressively hotter until a thermonuclear explosion is ignited at the bottom of the collected gas. A huge amount of energy is released and causes a million-fold increase in the brightness of the binary system within a few hours. After reaching maximum light within some days or weeks, it begins to fade as the hydrogen supply is exhausted and blown into space. The processed material is ejected at high speeds, up to ~1000 km/sec, and may later be visible as an expanding shell of emitting gas. Altogether, the tremendous flash of light involves the release of about 10⁴⁵ ergs in a few weeks, or about as much energy as our Sun produces in 10,000 years. Supernova explosions that completely destroy heavier stars at the end of their lives are even more powerful. However, in contrast to supernovae and despite the colossal energy production, the progenitor of a nova is not destroyed during the explosion. Some time after an outburst, transfer of hydrogen from the companion star begins anew, and the process repeats itself with explosions taking place about once every 100,000 years. The nova star will finally die of "old age" when the cool companion has been completely cannibalized. Novae as Distance Indicators Due to their exceptional luminosity, novae can be used as powerful beacons that allow relative distances to different types of galaxies to be measured. The measurement is based on the assumption that novae of the same type are intrinsically equally bright, together with the physical law that states that an object's observed brightness decreases with the square of the distance to the observer. Thus, if we observe that a nova in a certain galaxy is one million times fainter than a nearby one, we know that it must be one thousand times more distant.
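The inverse-square argument quoted above can be stated compactly (a standard relation, added here for clarity): for two novae of equal intrinsic luminosity L observed with fluxes f_1 and f_2 at distances d_1 and d_2,

f = \frac{L}{4\pi d^{2}} \quad\Longrightarrow\quad \frac{d_{2}}{d_{1}} = \sqrt{\frac{f_{1}}{f_{2}}},

so a nova observed to be one million times fainter than an otherwise identical nearby one lies \sqrt{10^{6}} = 1000 times farther away, exactly as stated.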
In addition, observations of novae in other galaxies shed light on the history of formation of their stars. Despite their scientific importance, surveys of novae in distant, rich clusters of galaxies have not been very popular among astronomers. Major reasons are probably the inherent observational difficulties and the comparatively low rates of discovery. In the past, with 4-m class telescopes, tens of hours of monitoring of several galaxies have indeed been necessary to detect a few distant novae [3]. VLT observations of NGC 1316 in the Fornax Cluster ESO PR Photo 18a/00. Caption: Colour composite photo of the central area of NGC 1316, a giant elliptical galaxy in the Fornax cluster of galaxies. Many dark dust clouds and lanes are visible. Some of the star-like objects in the field are globular clusters of stars that belong to the galaxy. It is based on CCD exposures, obtained with the 8.2-m VLT/ANTU telescope and the FORS-1 multi-mode instrument through B (blue), V (green-yellow) and I (here rendered as red) filters, respectively. The "pyramids" above and below the bright centre of the galaxy and the vertical lines at some of the brighter stars are caused by overexposure ("CCD bleeding"). The field measures 6.8 x 6.8 arcmin², with 0.2 arcsec/pixel. The image quality of this composite is about 0.9 arcsec. North is up and East is left. NGC 1316 is a giant "dusty" galaxy (PR Photo 18a/00), located in the Fornax cluster, seen in the southern constellation of that name ("The Oven"). This galaxy is of special interest in connection with current attempts to establish an accurate distance scale in the Universe. In 1980 and 1981, NGC 1316 was the host of two supernovae of type Ia, a class of object that is widely used as a "cosmological standard candle" to determine the distance to very distant galaxies, cf. ESO PR 21/98. A precise measurement of the distance to NGC 1316 may therefore provide an independent calibration of the intrinsic brightness of these supernovae. The new observations were performed during 8 nights distributed over the period from January 9 to 19, 2000. They were made in service mode at the 8.2-m VLT/ANTU telescope with the FORS-1 multi-mode instrument, using a 2k x 2k CCD camera with 0.2 arcsec pixels and a field of 6.8 x 6.8 arcmin². The exposures lasted 20 min and were carried out with three optical filters (B, V and I). The most distant Novae observed so far ESO PR Photos 18b-c/00. Caption: Images of two of the novae in NGC 1316 that were discovered during the observational programme described in this Press Release. Both composites show the blue images (B-filter) obtained on January 9 (upper left), 12 (upper right), 15 (lower left) and 19 (lower right), 2000, respectively. The decline of the brightness of the objects is obvious. An analysis of the images that were obtained in blue light (B-filter) resulted in the detection of four novae. They were identified because of the typical change of brightness over the observation period, cf. PR Photos 18b-c/00, as well as their measured colours.
Although the time-consuming reduction of the data and the subsequent astrophysical interpretation are still in progress, the astronomers are already very satisfied with the outcome. In particular, no less than four novae were detected in a single giant galaxy within only 11 days. This implies a rate of approximately 100 novae/year in NGC 1316, about 3 times larger than the rate estimated for the Milky Way galaxy. This may (at least partly) be due to the fact that NGC 1316 is of a different type and contains more stars than our own galaxy. The novae in NGC 1316 are quite faint, of about magnitude 24 and decreasing towards 25-26 during the period of observation. This corresponds to nearly 100 million times fainter than what can be seen with the naked eye. The corresponding distance to NGC 1316 is found to be about 70 million light-years. Moreover, the discovery of four novae in one galaxy in the Fornax cluster was possible with only 3 hours of observing time per filter. This clearly shows that the new generation of 8-m class telescopes like the VLT, equipped with new and large detectors, is able to greatly improve the efficiency of this type of astronomical investigation (by a factor of 10 or more), as compared to previous searches with 4-m telescopes. The road is now open for exhaustive searches for novae in remote galaxies, with all the resulting benefits, also for the accurate determination of the extragalactic distance scale. Notes [1]: The group consists of Massimo Della Valle (Osservatorio Astrofisico di Arcetri, Firenze, Italy), Roberto Gilmozzi and Rodolfo Viezzer (both ESO). [2]: A graphical illustration of the nova phenomenon can be found at this website. [3]: For example, in 1987, Canadian astronomers Christopher Pritchet and Sidney van den Bergh, in a heroic tour de force with the 4-m Canada-France-Hawaii telescope, found 9 novae after 56 hours of monitoring of 3 giant elliptical galaxies in the Virgo cluster of galaxies.
A model-based executive for commanding robot teams
NASA Technical Reports Server (NTRS)
Barrett, Anthony
2005-01-01
The paper presents a way to robustly command a system of systems as a single entity. Instead of modeling each component system in isolation and then manually crafting interaction protocols, this approach starts with a model of the collective population as a single system. By compiling the model into separate elements for each component system and utilizing a teamwork model for coordination, it circumvents the complexities of manually crafting robust interaction protocols. The resulting systems are both globally responsive by virtue of a team-oriented interaction model and locally responsive by virtue of a distributed approach to model-based fault detection, isolation, and recovery.
ERIC Educational Resources Information Center
2000
Three presentations are provided from Symposium 18, Instructional Technology, of the Academy of Human Resource Development (HRD) 2000 Conference Proceedings. "Strategies for Facilitating Interaction When Using Technology-Mediated Training Methods [TMTM]" (Jeffrey S. Lewis, Gary D. Geroy, Orlando Griego) focuses on differences between…
Efficient Group Coordination in Multicast Trees
2001-01-01
describe a novel protocol to coordinate multipoint groupwork within the IP-multicast framework. The protocol supports Internet-wide coordination for large...and highly-interactive groupwork, relying on the dissemination of coordination directives among group members across a shared end-to-end multicast
ISS and STS Commercial Off-the-Shelf Router Testing
NASA Technical Reports Server (NTRS)
Ivancie, William D.; Bell, Terry L.; Shell, Dan
2002-01-01
This report documents the results of testing performed with commercial off-the-shelf (COTS) routers and Internet Protocols (IPs) to determine if COTS equipment and IP could be utilized to upgrade NASA's current Space Transportation System (STS), the Shuttle, and the International Space Station communication infrastructure. Testing was performed by NASA Glenn Research Center (GRC) personnel within the Electronic Systems Test Laboratory (ESTL) with cooperation from the Mission Operations Directorate (MOD) Qualification and Utilization of Electronic System Technology (QUEST) personnel. The ESTL testing occurred between November 1 and 9, 2000. Additional testing was performed at NASA Glenn Research Center in a laboratory environment with equipment configured to emulate the STS. This report documents those tests and includes detailed test procedures, equipment interface requirements, test configurations and test results. The tests showed that a COTS router and the standard Transmission Control Protocol/Internet Protocol (TCP/IP) suite could be used for both the Shuttle and the Space Station if near-error-free radio links are provided.
NASA Astrophysics Data System (ADS)
Derwent, R. G.; Simmonds, P. G.; Greally, B. R.; O'doherty, S.; McCulloch, A.; Manning, A.; Reimann, S.; Folini, D.; Vollmer, M. K.
The mixing ratios of HCFC-141b (1,1-dichlorofluoroethane) and HCFC-142b (1-chloro-1,1-difluoroethane) have been rising steadily in baseline air at Mace Head, Ireland over the 10-year period from 1994 to 2004. These HCFCs are widely used replacements for the chlorofluorocarbons phased out under the Montreal Protocol and its subsequent amendments. Analysis of the HCFC content of regionally-polluted air arriving at Mace Head from the European continent shows that European emissions reached a peak during 2000-2001 and have declined subsequently, following the phase-out in their usage. European emissions of HCFC-141b have been further constrained by observations at the High-Alpine Jungfraujoch site. The reductions are consistent with the phase-out of HCFC production and use from the year 2001 onwards mandated by European regulations designed to exceed the requirements of the Montreal Protocol.
Simulation Modeling and Performance Evaluation of Space Networks
NASA Technical Reports Server (NTRS)
Jennings, Esther H.; Segui, John
2006-01-01
In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely: the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol and discuss statistics gathered concerning the total time needed to simulate numerous bundle transmissions.
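As a very small illustration of the store-and-forward behaviour that a Bundle Protocol model has to capture (and nothing more than that; MACHETE itself models this inside QualNet with the CCSDS protocol stacks listed above), the following Python sketch runs a toy discrete-event simulation in which a relay node holds custody of each bundle until its next contact window opens. The node names, contact windows, and the 10-second custody-transfer delay are all invented for the example.

import heapq

# Toy discrete-event, store-and-forward simulation of delay-tolerant bundles.
# A "lander" hands bundles to an "orbiter", which holds custody until its
# next contact window with "earth" opens. All parameters are illustrative.

CONTACT_WINDOWS = {            # (start, end) times in seconds when a link is up
    ("lander", "orbiter"): [(0, 300), (3600, 3900)],
    ("orbiter", "earth"):  [(1200, 1500), (5000, 5300)],
}

def next_contact(link, t):
    """Earliest time >= t at which the given link is open, or None."""
    for start, end in CONTACT_WINDOWS[link]:
        if t <= end:
            return max(t, start)
    return None

def simulate(bundles):
    """bundles: list of (creation_time, bundle_id). Returns delivery times."""
    events = []                                  # (time, bundle_id, node)
    for t, bid in bundles:
        heapq.heappush(events, (t, bid, "lander"))
    delivered = {}
    while events:
        t, bid, node = heapq.heappop(events)
        if node == "earth":
            delivered[bid] = t
            continue
        link = ("lander", "orbiter") if node == "lander" else ("orbiter", "earth")
        start = next_contact(link, t)
        if start is None:
            continue                             # no remaining contact: bundle stranded
        # Custody transfer completes 10 s after the contact opens (illustrative).
        heapq.heappush(events, (start + 10, bid, link[1]))
    return delivered

if __name__ == "__main__":
    arrivals = [(0, "b1"), (100, "b2"), (2000, "b3")]
    for bid, t in sorted(simulate(arrivals).items()):
        print(f"bundle {bid} delivered at t = {t} s")

In a real DTN benchmark the delivery times gathered this way would be the statistic of interest; here they simply show bundles waiting out the gaps between contact windows.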
Bharatham, Nagakumar; Finch, Kristin E; Min, Jaeki; Mayasundari, Anand; Dyer, Michael A; Guy, R Kiplin; Bashford, Donald
2017-06-01
A virtual screening protocol involving docking and molecular dynamics has been tested against the results of fluorescence polarization assays testing the potency of a series of compounds of the nutlin class for inhibition of the interaction between p53 and Mdmx, an interaction identified as a driver of certain cancers. The protocol uses a standard docking method (AutoDock) with a cutoff based on the AutoDock score (ADscore), followed by molecular dynamics simulation with a cutoff based on root-mean-square-deviation (RMSD) from the docked pose. An analysis of the experimental and computational results shows modest performance of ADscore alone, but dramatically improved performance when RMSD is also used. Published by Elsevier Inc.
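The two-stage selection described in the abstract (a docking-score cutoff followed by an RMSD cutoff on the pose after molecular dynamics) reduces to a simple sequential filter. The sketch below shows only that logic; the threshold values, field names and example compounds are placeholders and are not taken from the study.

from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    name: str
    ad_score: float      # AutoDock score (more negative = better), from docking
    md_rmsd: float       # RMSD (Angstrom) of the docked pose after a short MD run

def two_stage_filter(candidates: List[Candidate],
                     score_cutoff: float = -7.0,    # placeholder threshold
                     rmsd_cutoff: float = 2.0) -> List[Candidate]:
    """Stage 1: keep poses docking better than score_cutoff.
    Stage 2: of those, keep poses whose MD drift stays below rmsd_cutoff."""
    stage1 = [c for c in candidates if c.ad_score <= score_cutoff]
    stage2 = [c for c in stage1 if c.md_rmsd <= rmsd_cutoff]
    return stage2

if __name__ == "__main__":
    pool = [Candidate("nutlin_A", -8.1, 1.2),
            Candidate("nutlin_B", -6.5, 0.9),   # fails the docking-score cutoff
            Candidate("nutlin_C", -7.8, 3.4)]   # fails the RMSD cutoff
    print([c.name for c in two_stage_filter(pool)])   # -> ['nutlin_A']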
NASA Astrophysics Data System (ADS)
Yang, YuGuang; Zhang, YuChen; Xu, Gang; Chen, XiuBo; Zhou, Yi-Hua; Shi, WeiMin
2018-03-01
Li et al. first proposed a quantum hash function (QHF) in a quantum-walk architecture. In their scheme, two two-particle interactions, i.e., I interaction and π-phase interaction are introduced and the choice of I or π-phase interactions at each iteration depends on a message bit. In this paper, we propose an efficient QHF by dense coding of coin operators in discrete-time quantum walk. Compared with existing QHFs, our protocol has the following advantages: the efficiency of the QHF can be doubled and even more; only one particle is enough and two-particle interactions are unnecessary so that quantum resources are saved. It is a clue to apply the dense coding technique to quantum cryptographic protocols, especially to the applications with restricted quantum resources.
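For readers unfamiliar with the underlying construction, the sketch below shows a toy discrete-time quantum walk on a cycle in which the coin operator applied at each step is selected by one message bit, and the final position distribution is quantised into a digest. It mirrors only the general message-dependent-evolution idea; the specific coins, the quantisation rule and all parameters are our own illustrative choices, not the scheme of Li et al. nor the dense-coding variant proposed in the paper.

import numpy as np

# Toy quantum-walk-based hash: a walker on a cycle of N sites with a
# 2-dimensional coin; each message bit selects which coin operator is applied.
N = 64

def coin(theta):
    """2x2 rotation used as a coin operator."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

C0 = coin(np.pi / 4)   # coin applied when the message bit is 0 (illustrative)
C1 = coin(np.pi / 3)   # coin applied when the message bit is 1 (illustrative)

def step(state, C):
    """One walk step: coin toss, then coin-conditioned shift on the cycle."""
    state = state @ C.T                       # apply the coin at every position
    shifted = np.empty_like(state)
    shifted[:, 0] = np.roll(state[:, 0], 1)   # coin component 0 moves right
    shifted[:, 1] = np.roll(state[:, 1], -1)  # coin component 1 moves left
    return shifted

def qw_hash(message_bits, digest_levels=16):
    state = np.zeros((N, 2), dtype=complex)
    state[0, 0] = 1.0                         # walker starts at site 0
    for b in message_bits:
        state = step(state, C1 if b else C0)
    probs = np.sum(np.abs(state) ** 2, axis=1)
    # Quantise the position probabilities into small integers -> digest.
    return tuple(np.floor(probs / probs.max() * (digest_levels - 1)).astype(int))

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    print(qw_hash(msg)[:8])                   # first few digest symbols
    print(qw_hash(msg[:-1] + [1])[:8])        # same message with the last bit flipped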
Bacterial molecular networks: bridging the gap between functional genomics and dynamical modelling.
van Helden, Jacques; Toussaint, Ariane; Thieffry, Denis
2012-01-01
This introductory review synthesizes the contents of the volume Bacterial Molecular Networks of the series Methods in Molecular Biology. This volume gathers 9 reviews and 16 method chapters describing computational protocols for the analysis of metabolic pathways, protein interaction networks, and regulatory networks. Each protocol is documented by concrete case studies dedicated to model bacteria or interacting populations. Altogether, the chapters provide a representative overview of state-of-the-art methods for data integration and retrieval, network visualization, graph analysis, and dynamical modelling.
First-Ever Census of Variable Mira-Type Stars in Galaxy Outside the Local Group
NASA Astrophysics Data System (ADS)
2003-05-01
First-Ever Census of Variable Mira-Type Stars in Galaxy Outside the Local Group Summary An international team led by ESO astronomer Marina Rejkuba [1] has discovered more than 1000 luminous red variable stars in the nearby elliptical galaxy Centaurus A (NGC 5128). Brightness changes and periods of these stars were measured accurately and reveal that they are mostly cool long-period variable stars of the so-called "Mira-type". The observed variability is caused by stellar pulsation. This is the first time a detailed census of variable stars has been accomplished for a galaxy outside the Local Group of Galaxies (of which the Milky Way galaxy in which we live is a member). It also opens an entirely new window towards the detailed study of stellar content and evolution of giant elliptical galaxies. These massive objects are presumed to play a major role in the gravitational assembly of galaxy clusters in the Universe (especially during the early phases). This unprecedented research project is based on near-infrared observations obtained over more than three years with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal Observatory. PR Photo 14a/03: Colour image of the peculiar galaxy Centaurus A. PR Photo 14b/03: Location of the fields in Centaurus A, now studied. PR Photo 14c/03: "Field 1" in Centaurus A (visual light; FORS1). PR Photo 14d/03: "Field 2" in Centaurus A (visual light; FORS1). PR Photo 14e/03: "Field 1" in Centaurus A (near-infrared; ISAAC). PR Photo 14f/03: "Field 2" in Centaurus A (near-infrared; ISAAC). PR Photo 14g/03: Light variation of six variable stars in Centaurus A. PR Photo 14h/03: Light variation of stars in Centaurus A (Animated GIF). PR Photo 14i/03: Light curves of four variable stars in Centaurus A. Mira-type variable stars Among the stars that are visible in the sky to the unaided eye, roughly one out of three hundred (0.3%) displays brightness variations and is referred to by astronomers as a "variable star". The percentage is much higher among large, cool stars ("red giants") - in fact, almost all luminous stars of that type are variable. Such stars are known as Mira-variables; the name comes from the most prominent member of this class, Omicron Ceti in the constellation Cetus (The Whale), also known as "Stella Mira" (The Wonderful Star). Its brightness changes with a period of 332 days and it is about 1500 times brighter at maximum (visible magnitude 2 and one of the fifty brightest stars in the sky) than at minimum (magnitude 10 and only visible in small telescopes) [2]. Stars like Omicron Ceti are nearing the end of their life. They are very large and have sizes from a few hundred to about a thousand times that of the Sun. The brightness variation is due to pulsations during which the star's temperature and size change dramatically. In the following evolutionary phase, Mira-variables will shed their outer layers into surrounding space and become visible as planetary nebulae with a hot and compact star (a "white dwarf") at the middle of a nebula of gas and dust (cf. the "Dumbbell Nebula" - ESO PR Photo 38a-b/98). Several thousand Mira-type stars are currently known in the Milky Way galaxy and a few hundred have been found in other nearby galaxies, including the Magellanic Clouds.
The peculiar galaxy Centaurus A ESO PR Photos 14a-d/03. Captions: PR Photo 14a/03 is a colour composite photo of the peculiar galaxy Centaurus A (NGC 5128), obtained with the Wide-Field Imager (WFI) camera at the ESO/MPG 2.2-m telescope on La Silla. It is based on a total of nine 3-min exposures made on March 25, 1999, through different broad-band optical filters (B(lue) - total exposure time 9 min - central wavelength 456 nm - here rendered as blue; V(isual) - 540 nm - 9 min - green; I(nfrared) - 784 nm - 9 min - red); it was prepared from files in the ESO Science Data Archive by ESO astronomer Benoît Vandame. The elliptical shape and the central dust band, the imprint of a galaxy collision, are well visible. PR Photo 14b/03 identifies the two regions of Centaurus A (the rectangles in the upper left and lower right inserts) in which a search for variable stars was made during the present research project: "Field 1" is located in an area north-east of the centre in which many young stars are present. This is also the direction in which an outflow ("jet") is seen on deep optical and radio images. "Field 2" is positioned in the galaxy's halo, south of the centre. High-resolution, very deep colour photos of these two fields and their immediate surroundings are shown in PR Photos 14c-d/03. They were produced by means of CCD-frames obtained in July 1999 through U- and V-band optical filters with the VLT FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope on Paranal. Note the great variety of object types and colours, including many background galaxies which are seen through these less dense regions of Centaurus A. The total exposure time was 30 min in each filter and the seeing was excellent, 0.5 arcsec. The original pixel size is 0.196 arcsec and the fields measure 6.7 x 6.7 arcmin² (2048 x 2048 pix²). North is up and East is left on all photos. Centaurus A (NGC 5128) is the nearest giant galaxy, at a distance of about 13 million light-years. It is located outside the Local Group of Galaxies to which our own galaxy, the Milky Way, and its satellite galaxies, the Magellanic Clouds, belong. Centaurus A is seen in the direction of the southern constellation Centaurus. It is of elliptical shape and is currently merging with a companion galaxy, making it one of the most spectacular objects in the sky, cf. PR Photo 14a/03. It possesses a very heavy black hole at its centre (see ESO PR 04/01) and is a source of strong radio and X-ray emission. During the present research programme, two regions in Centaurus A were searched for stars of variable brightness; they are located in the periphery of this peculiar galaxy, cf. PR Photos 14b-d/03. An outer field ("Field 1") coincides with a stellar shell with many blue and luminous stars produced by the on-going galaxy merger; it lies at a distance of 57,000 light-years from the centre. The inner field ("Field 2") is more crowded and is situated at a projected distance of about 30,000 light-years from the centre.
Three years of VLT observations ESO PR Photos 14e-f/03. Caption: PR Photos 14e-f/03 are colour composites of two small fields ("Field 1" and "Field 2") in the peculiar galaxy Centaurus A (NGC 5128), based on exposures through three near-infrared filters (the J-, H- and K-bands at wavelengths 1.2, 1.6 and 2.2 µm, respectively) with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal observatory. The corresponding areas are outlined within the two inserts in PR Photo 14b/03 and may be compared with the visual images from FORS1 (PR Photos 14c-d/03). These ISAAC photos are the deepest near-infrared images ever obtained in this galaxy and show thousands of its stars of different colours. In the present colour-coding, the redder an image, the cooler is the star. The original pixel size is 0.15 arcsec and both fields measure 2.5 x 2.5 arcmin². North is up and East is left. Under normal circumstances, any team of professional astronomers will have access to the largest telescopes in the world for only a very limited number of consecutive nights each year. However, extensive searches for variable stars like the present one require repeated observations lasting minutes-to-hours over periods of months-to-years. It is thus not feasible to perform such observations in the classical way in which the astronomers travel to the telescope each time. Fortunately, the operational system of the VLT at the ESO Paranal Observatory (Chile) is also geared to encompass this kind of long-term programme. Between April 1999 and July 2002, the 8.2-m VLT ANTU telescope on Cerro Paranal in Chile was operated in service mode on many occasions to obtain K-band images of the two fields in Centaurus A by means of the near-infrared ISAAC multi-mode instrument. Each field was observed over 20 times in the course of this three-year period; some of the images were obtained during exceptional seeing conditions of 0.30 arcsec. One set of complementary optical images was obtained with the FORS1 multi-mode instrument (also on VLT ANTU) in July 1999. Each image from the ISAAC instrument covers a sky field measuring 2.5 x 2.5 arcmin². The combined images, encompassing a total exposure of 20 hours, are indeed the deepest infrared images ever made of the halo of any galaxy as distant as Centaurus A, about 13 million light-years. Discovering one thousand Mira variables ESO PR Photos 14g-i/03. Captions: PR Photo 14g/03 shows a zoomed-in area within "Field 2" in Centaurus A, from the ISAAC colour image shown in PR Photo 14e/03. Nearly all red stars in this area are of the variable Mira-type. The brightness variation of some stars (labelled A-D) is demonstrated in the animated-GIF image PR Photo 14h/03. The corresponding light curves (brightness over the pulsation period) are shown in PR Photo 14i/03. Here the abscissa indicates the pulsation phase (one full period corresponds to the interval from 0 to 1) and the ordinate unit is the near-infrared Ks magnitude.
One magnitude corresponds to a difference in brightness of a factor 2.5. Once the lengthy observations were completed, two further steps were needed to identify the variable stars in Centaurus A. First, each ISAAC frame was individually processed to identify the thousands and thousands of faint point-like images (stars) visible in these fields. Next, all images were compared using a special software package ("DAOPHOT") to measure the brightness of all these stars in the different frames, i.e., as a function of time. While most stars in these fields were, as expected, found to have constant brightness, more than 1000 stars displayed variations in brightness with time; this is by far the largest number of variable stars ever discovered in a galaxy outside the Local Group of Galaxies. The detailed analysis of this enormous dataset took more than a year. Most of the variable stars were found to be of the Mira-type and their light curves (brightness over the pulsation period) were measured, cf. PR Photo 14i/03. For each of them, values of the characterising parameters, the period (days) and brightness amplitude (magnitudes), were determined. A catalogue of the newly discovered variable stars in Centaurus A has now been made available to the astronomical community via the European research journal Astronomy & Astrophysics. Marina Rejkuba is pleased and thankful: "We are really very fortunate to have carried out this ambitious project so successfully. It all depended critically on different factors: the repeated granting of crucial observing time by the ESO Observing Programmes Committee over different observing periods in the face of rigorous international competition, the stability and reliability of the telescope and the ISAAC instrument over a period of more than three years and, not least, the excellent quality of the service mode observations, so efficiently performed by the staff at the Paranal Observatory." What have we learned about Centaurus A? The present study of variable stars in this giant elliptical galaxy is the first-ever of its kind. Although the evaluation of the very large body of observational data is not yet finished, it has already led to a number of very useful scientific results. Confirmation of the presence of an intermediate-age population Based on earlier research (optical and near-IR colour-magnitude diagrams of the stars in the fields), the present team of astronomers had previously detected the presence of intermediate-age and young stellar populations in the halo of this galaxy. The youngest stars appear to be aligned with the powerful jet produced by the massive black hole at the centre. Some of the very luminous red variable stars now discovered confirm the presence of a population of intermediate-age stars in the halo of this galaxy. It also contributes to our understanding of how giant elliptical galaxies form. New measurement of the distance to Centaurus A The pulsation of Mira-type variable stars obeys a period-luminosity relation: the longer its period, the more luminous a Mira-type star is. This fact makes it possible to use Mira-type stars as "standard candles" (objects of known intrinsic luminosity) for distance determinations. They have in fact often been used in this way to measure accurate distances to more nearby objects, e.g., to individual clusters of stars and to the centre of our Milky Way galaxy, and also to galaxies in the Local Group, in particular the Magellanic Clouds.
This method works particularly well with infrared measurements and the astronomers were now able to measure the distance to Centaurus A in this new way. They found 13.7 ± 1.9 million light-years, in general agreement with, and thus confirming, the results of other methods. Study of stellar population gradients in the halo of a giant elliptical galaxy The two fields studied here contain different populations of stars. A clear dependence on the location (a "gradient") within the galaxy is observed, which can be due to differences in chemical composition or age, or to a combination of both. Understanding the cause of this gradient will provide additional clues to how Centaurus A - and indeed all giant elliptical galaxies - was formed and has since evolved. Comparison with other well-known nearby galaxies Past searches have discovered Mira-type variable stars throughout the Milky Way, our home galaxy, and in other nearby galaxies in the Local Group. However, there are no giant elliptical galaxies like Centaurus A in the Local Group and this is the first time it has been possible to identify this kind of star in that type of galaxy. The present investigation now opens a new window towards studies of the stellar constituents of such galaxies.
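Stripped of the considerable practical difficulties of crowded-field photometry, the variability search described in this release amounts to repeated photometry of the same stars and flagging those whose epoch-to-epoch scatter exceeds what the measurement errors allow. A minimal sketch of that selection is given below; the threshold, the synthetic data and the function name are illustrative assumptions, not the actual DAOPHOT-based pipeline used by the team.

import numpy as np

def find_variables(mags, mag_errs, sigma_threshold=5.0):
    """mags, mag_errs: arrays of shape (n_stars, n_epochs) giving the measured
    magnitude and its error for every star at every epoch. A star is flagged
    as variable when its epoch-to-epoch scatter exceeds the typical
    measurement error by sigma_threshold (illustrative criterion)."""
    scatter = np.std(mags, axis=1)                 # observed rms variation
    typical_err = np.median(mag_errs, axis=1)      # expected photometric noise
    return np.where(scatter > sigma_threshold * typical_err)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_stars, n_epochs = 1000, 20
    errs = np.full((n_stars, n_epochs), 0.05)          # 0.05 mag photometric errors
    mags = 20.0 + rng.normal(0.0, 0.05, (n_stars, n_epochs))
    # Inject a sinusoidal variation of 0.5 mag amplitude into the first ten stars.
    phase = np.linspace(0, 2 * np.pi, n_epochs)
    mags[:10] += 0.5 * np.sin(phase)
    print(find_variables(mags, errs))                  # should flag stars 0..9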
Data Management Working Group report
NASA Technical Reports Server (NTRS)
Filardo, Edward J.; Smith, David B.
1986-01-01
The current flight qualification program lags technology insertion by 6 to 10 years. The objective is to develop an integrated software engineering and development environment assisted by expert system technology. An operating system needs to be developed which is portable to the on-board computers of the year 2000. The use of Ada versus a high-order language, fault tolerance, fiber-optic networks, communication protocols, and security are also examined and outlined.
Engineering Task Plan for the Ultrasonic Inspection of Hanford Double Shell Tanks (DST) FY2000
DOE Office of Scientific and Technical Information (OSTI.GOV)
JENSEN, C.E.
2000-01-10
This document facilitates the ultrasonic examination of Hanford double-shell tanks. Included are a plan for engineering activities (individual responsibilities), a plan for performance demonstration testing, and a plan for field activities (tank inspection). Also included are a Statement of Work for contractor performance of the work and a protocol to be followed should tank flaws exceeding the acceptance criteria be discovered.
Lindsay, Elizabeth A.; Lawson, Andrew J.; Walker, Rachel A.; Ward, Linda R.; Smith, Henry R.; Scott, Fiona W.; O'Brien, Sarah J.; Fisher, Ian S.T.; Crook, Paul D.; Wilson, Deborah; Brown, Derek J; Hardardottir, Hjordis; Wannet, Wim J.B.; Tschäpe, Helmut
2002-01-01
From July through September 2000, patients in five European countries were infected with a multidrug-resistant strain of Salmonella Typhimurium DT204b. Epidemiologic investigations were facilitated by the transmission of electronic images (Tagged Image Files) of pulsed-field gel electrophoresis profiles. This investigation highlights the importance of standardized protocols for molecular typing in international outbreaks of foodborne disease. PMID:12095445
Thoracic injuries to contained and restrained occupants in single-vehicle pure rollover crashes.
Bambach, M R; Grzebieta, R H; McIntosh, A S
2013-01-01
Around one in three contained and restrained seriously injured occupants in single-vehicle pure rollover crashes receives a serious injury to the thorax. With dynamic rollover test protocols currently under development, there is a need to understand the nature and cause of serious thoracic injuries incurred in rollover events. This will allow decisions to be made with regard to the adoption of a suitable crash test dummy and appropriate thoracic injury criteria in such protocols. Valid rollover occupant protection test protocols will lead to vehicle improvements that will reduce the high trauma burden of vehicle rollover crashes. This paper presents an analysis of contained and restrained occupants involved in single-vehicle pure rollover crashes that occurred in the United States between 2000 and 2009 (inclusive). Serious thoracic injury typology and causality are determined. A logistic regression model is developed to determine associations between the incidence of serious thoracic injury and the human, vehicle and environmental characteristics of the crashes. Recommendations are made with regard to the appropriate assessment of potential thoracic injury in dynamic rollover occupant protection crash test protocols. Copyright © 2012 Elsevier Ltd. All rights reserved.
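The logistic-regression step mentioned in the abstract (associating the incidence of serious thoracic injury with human, vehicle and environmental characteristics) can be sketched with standard tooling; the file name, column names and covariates below are placeholders rather than the variables actually used in the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch of the association analysis: a binary indicator of serious
# thoracic injury regressed on occupant, vehicle and crash covariates.
# The data file and every column name are illustrative placeholders.
crashes = pd.read_csv("rollover_occupants.csv")

model = smf.logit(
    "serious_thorax_injury ~ occupant_age + C(sex) + C(vehicle_type)"
    " + num_quarter_turns + C(roof_contact)",
    data=crashes,
).fit()

print(model.summary())                     # coefficients and p-values
print(np.exp(model.params).round(2))       # odds ratios for each covariate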
Flory, Andrea B; Rassnick, Kenneth M; Erb, Hollis N; Garrett, Laura D; Northrup, Nicole C; Selting, Kim A; Phillips, Brenda S; Locke, Jennifer E; Chretin, John D
2011-02-15
To evaluate factors associated with second remission in dogs with lymphoma retreated with a cyclophosphamide, doxorubicin, vincristine, and prednisone (CHOP) protocol after relapse following initial treatment with a first-line 6-month CHOP protocol. Retrospective case series. 95 dogs with lymphoma. Medical records were reviewed. Remission duration was estimated by use of the Kaplan-Meier method. Factors potentially associated with prognosis were examined. Median remission duration after the first-line CHOP protocol was 289 days (range, 150 to 1,457 days). Overall, 78% (95% confidence interval [CI], 69% to 86%) of dogs achieved a complete remission following retreatment, with a median second remission duration of 159 days (95% CI, 126 to 212 days). Duration of time off chemotherapy was associated with likelihood of response to retreatment; median time off chemotherapy was 140 days for dogs that achieved a complete remission after retreatment and 84 days for dogs that failed to respond to retreatment. Second remission duration was associated with remission duration after initial chemotherapy; median second remission duration for dogs with initial remission duration ≥ 289 days was 214 days (95% CI, 168 to 491 days), compared with 98 days (95% CI, 70 to 144 days) for dogs with initial remission duration < 289 days. Findings suggested that retreatment with the CHOP protocol can be effective in dogs with lymphoma that successfully complete an initial 6-month CHOP protocol.
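The remission-duration estimates above rely on the Kaplan-Meier method; a minimal sketch of such an analysis with the lifelines package is shown below, where the data file and column names are placeholders rather than the study's records.

import pandas as pd
from lifelines import KaplanMeierFitter

# Hedged sketch of the Kaplan-Meier estimate of second-remission duration.
# 'remission_days' is each dog's observed second-remission length and
# 'relapsed' is 1 if relapse was observed or 0 if the dog was censored.
dogs = pd.read_csv("chop_retreatment.csv")

kmf = KaplanMeierFitter()
kmf.fit(durations=dogs["remission_days"], event_observed=dogs["relapsed"])

print(kmf.median_survival_time_)           # median second-remission duration
kmf.plot_survival_function()               # Kaplan-Meier curve (needs matplotlib)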