Sample records for open source jpeg

  1. Block selective redaction for minimizing loss during de-identification of burned in text in irreversibly compressed JPEG medical images.

    PubMed

    Clunie, David A; Gebow, Dan

    2015-01-01

    Deidentification of medical images requires attention both to the header information and to the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified are stored in a compressed form, traditionally they are decompressed, the identifying text is redacted, and, if necessary, the pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently; hence, redaction can be selectively confined to those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images are described. The process can be applied either to standalone JPEG images or to JPEG bit streams encapsulated in other formats, which in the case of medical images is usually DICOM.
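
    The key property exploited here is that baseline JPEG codes each 8x8 sample block independently, so only the blocks overlapping burned-in text need to be touched. The sketch below is a minimal pixel-domain approximation in Python (NumPy/Pillow) that blanks exactly those blocks; the paper's tool instead rewrites the affected blocks directly in the compressed bit stream so that all other blocks are copied without recompression. File names and rectangle coordinates are illustrative.

        import numpy as np
        from PIL import Image

        BLOCK = 8  # baseline JPEG codes 8x8 sample blocks independently

        def redact_blocks(pixels: np.ndarray, rect):
            """Blank every 8x8 block overlapping rect = (x0, y0, x1, y1)."""
            x0, y0, x1, y1 = rect
            bx0, by0 = (x0 // BLOCK) * BLOCK, (y0 // BLOCK) * BLOCK    # snap outward
            bx1, by1 = -(-x1 // BLOCK) * BLOCK, -(-y1 // BLOCK) * BLOCK
            pixels[by0:by1, bx0:bx1] = 128   # only these blocks are modified
            return pixels

        frame = np.array(Image.open("us_frame.jpg").convert("L"))   # illustrative file
        frame = redact_blocks(frame, (40, 12, 220, 60))             # region with burned-in text
        Image.fromarray(frame).save("us_frame_redacted.jpg", quality=95)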

  2. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
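
    The iterative loop described above can be sketched structurally as follows. The two decoder callables are hypothetical placeholders standing in for the LDPC channel decoder and the JPEG2000 source decoder (which uses its error-resilience markers to localize corrupt codestream segments); this is an outline of the feedback only, not the authors' implementation.

        def joint_decode(channel_llrs, ldpc_decode, find_corrupt_bits, max_iters=10):
            """Iterative joint source-channel decoding skeleton.

            ldpc_decode(llrs, known_bad) -> (bits, converged)   # channel decoder (hypothetical)
            find_corrupt_bits(bits) -> set of bit positions     # source decoder, via JPEG2000
                                                                # error-resilience (ER) markers
            """
            known_bad = set()
            bits = None
            for _ in range(max_iters):
                bits, converged = ldpc_decode(channel_llrs, known_bad)
                corrupt = find_corrupt_bits(bits)
                if converged and not corrupt:
                    break                     # clean codestream: stop early, fewer iterations
                known_bad |= corrupt          # feed source information back to the channel decoder
            return bits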

  3. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2010-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such a staggering data volume, the data are bound to be accessible only from a few repositories, and users will have to deal with data sets that are effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
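
    The resolution scalability described here is a property of the JPEG 2000 codestream itself and can be demonstrated locally with an open-source decoder such as glymur, assuming its slice-based reduced-resolution access and a local .jp2 file; JHelioviewer streams the same kind of data remotely via JPIP instead. The file name is illustrative.

        import glymur

        jp2 = glymur.Jp2k("aia_171_2010_12_01.jp2")   # illustrative file name
        full = jp2[:]              # full-resolution image
        quarter = jp2[::4, ::4]    # decoded from a lower resolution level of the same
                                   # codestream, without reading the full-resolution data
        print(full.shape, quarter.shape)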

  4. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2011-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such a staggering data volume, the data are accessible only from a few repositories, and users have to deal with data sets that are effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.

  5. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to each image, and the compression efficiency is close to that of the conventional JPEG-LS.
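
    The strip/block partitioning that the paper contrasts with its adaptive-RESET scheme can be sketched as follows. The encoder callable is a hypothetical placeholder for a JPEG-LS codec; the point is only that each strip becomes a self-contained codestream, so a channel error cannot propagate beyond one strip.

        import numpy as np

        def encode_in_strips(image: np.ndarray, strip_height: int, jpegls_encode):
            """Encode horizontal strips independently to confine error propagation.

            jpegls_encode is a caller-supplied JPEG-LS encoder (hypothetical here).
            Smaller strips improve resilience but cost compression efficiency,
            which is the trade-off the adaptive RESET parameter addresses.
            """
            strips = []
            for top in range(0, image.shape[0], strip_height):
                strips.append(jpegls_encode(image[top:top + strip_height]))
            return strips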

  6. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    NASA Astrophysics Data System (ADS)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

    As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts it introduces are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and its ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes; this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
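
    The evaluation protocol (recompress the same images at decreasing JPEG quality, rerun the detector, and compare ROC performance) can be sketched with standard tools. The detector here is a caller-supplied placeholder, and the quality levels and inputs are illustrative.

        import io
        from PIL import Image
        from sklearn.metrics import roc_auc_score

        def detector_auc_at_quality(images, labels, score_image, quality):
            """Recompress each retinal image at the given JPEG quality and return detector AUC.

            score_image is a caller-supplied function (the automated microaneurysm
            detector, hypothetical here) returning a per-image score.
            """
            scores = []
            for img in images:                         # PIL images acquired at high quality
                buf = io.BytesIO()
                img.save(buf, format="JPEG", quality=quality)
                scores.append(score_image(Image.open(io.BytesIO(buf.getvalue()))))
            return roc_auc_score(labels, scores)       # ROC area, as in the paper's methodology

        # e.g. [detector_auc_at_quality(imgs, y, detector, q) for q in (100, 90, 75, 50)]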

  7. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that it survives lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme in which the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
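
    A much-simplified illustration of quality-factor-aware embedding: if a watermark bit is quantized into a mid-frequency DCT coefficient using the quantization step that JPEG will later apply at the anticipated quality factor, the bit survives that compression. The sketch below applies plain quantization index modulation to one coefficient of an 8x8 block; the paper's scheme additionally uses an HVS model to allocate the payload adaptively, which is not reproduced here.

        import numpy as np
        from scipy.fft import dctn, idctn

        def embed_bit(block: np.ndarray, bit: int, q_step: float, pos=(3, 4)) -> np.ndarray:
            """QIM-embed one bit into a mid-frequency DCT coefficient of an 8x8 block.

            q_step should match the JPEG quantization step for that coefficient at
            the anticipated quality factor, so the embedded bit survives compression.
            """
            coeffs = dctn(block.astype(float) - 128, norm="ortho")
            k = int(round(coeffs[pos] / q_step))
            if k % 2 != bit:          # force the parity of the quantized index to encode the bit
                k += 1
            coeffs[pos] = k * q_step
            return idctn(coeffs, norm="ortho") + 128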

  8. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.

  9. The JPEG XT suite of standards: status and future plans

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Bruylants, Tim; Schelkens, Peter; Ebrahimi, Touradj

    2015-09-01

    The JPEG standard has seen enormous market adoption. Daily, billions of pictures are created, stored and exchanged in this format. The JPEG committee acknowledges this success and continues its efforts to maintain and expand the standard's specifications. JPEG XT is a standardization effort targeting the extension of JPEG's features by enabling support for high dynamic range imaging, lossless and near-lossless coding, and alpha channel coding, while also guaranteeing backward and forward compatibility with the JPEG legacy format. This paper gives an overview of the current status of the JPEG XT standards suite. It discusses the JPEG legacy specification and details how higher dynamic range support is facilitated both for integer and floating-point color representations. The paper shows how JPEG XT's support for lossless and near-lossless coding of low and high dynamic range images is achieved in combination with backward compatibility to JPEG legacy. In addition, the extensible box-based JPEG XT file format, on which all future extensions of JPEG will be based, is introduced. This paper also details how lossy and lossless representations of alpha channels are supported to allow coding of transparency information and arbitrarily shaped images. Finally, we conclude by giving an outlook on the upcoming JPEG standardization initiative JPEG Privacy & Security, and on a number of other possible extensions in JPEG XT.

  10. Report about the Solar Eclipse on August 11, 1999

    NASA Astrophysics Data System (ADS)

    1999-08-01

    This webpage provides information about the total eclipse on Wednesday, August 11, 1999, as it was seen by ESO staff, mostly at or near the ESO Headquarters in Garching (Bavaria, Germany). The zone of totality was about 108 km wide and the ESO HQ was located only 8 km south of the line of maximum totality. The duration of the phase of totality was about 2 min 17 sec. The weather was quite troublesome in this geographical area. Heavy clouds moved across the sky during the entire event, but there were also some holes in between. Consequently, sites that were only a few kilometres from each other had very different viewing conditions. Some photos and spectra of the eclipsed Sun are displayed below, with short texts about the circumstances under which they were made. Please note that reproduction of pictures on this webpage is only permitted if the author is mentioned as the source. Information made available before the eclipse is available here. Eclipse Impressions at the ESO HQ Photo by Eddy Pomaroli Preparing for the Eclipse Photo: Eddy Pomaroli [JPEG: 400 x 239 pix - 116k] [JPEG: 800 x 477 pix - 481k] [JPEG: 3000 x 1789 pix - 3.9M] Photo by Eddy Pomaroli During the 1st Partial Phase Photo: Eddy Pomaroli [JPEG: 400 x 275 pix - 135k] [JPEG: 800 x 549 pix - 434k] [JPEG: 2908 x 1997 pix - 5.9M] Photo by Hamid Mehrgan Heavy Clouds Above Digital Photo: Hamid Mehrgan [JPEG: 400 x 320 pix - 140k] [JPEG: 800 x 640 pix - 540k] [JPEG: 1280 x 1024 pix - 631k] Photo by Olaf Iwert Totality Approaching Digital Photo: Olaf Iwert [JPEG: 400 x 320 pix - 149k] [JPEG: 800 x 640 pix - 380k] [JPEG: 1280 x 1024 pix - 536k] Photo by Olaf Iwert Beginning of Totality Digital Photo: Olaf Iwert [JPEG: 400 x 236 pix - 86k] [JPEG: 800 x 471 pix - 184k] [JPEG: 1280 x 753 pix - 217k] Photo by Olaf Iwert A Happy Eclipse Watcher Digital Photo: Olaf Iwert [JPEG: 400 x 311 pix - 144k] [JPEG: 800 x 622 pix - 333k] [JPEG: 1280 x 995 pix - 644k] ESO HQ Eclipse Video Clip [MPEG-version] ESO HQ Eclipse Video Clip (2425 frames/01:37 min) [MPEG Video; 160x120 pix; 2.2M] [MPEG Video; 320x240 pix; 4.4Mb] [RealMedia; streaming; 33 kbps] [RealMedia; streaming; 200 kbps] This Video Clip was prepared from a "reportage" of the event at the ESO HQ that was transmitted in real-time to ESO-Chile via ESO's satellite link. It begins with some sequences of the first partial phase and the eclipse watchers. Clouds move over and the landscape darkens as the phase of totality approaches. The Sun is again visible at the very moment this phase ends. Some further sequences from the second partial phase follow. Produced by Herbert Zodet. Dire Forecasts The weather predictions in the days before the eclipse were not good for Munich and surroundings. A heavy front with rain and thick clouds that completely covered the sky moved across Bavaria the day before, and the meteorologists predicted a 20% chance of seeing anything at all. On August 10, it seemed that the chances were best in France and in the western parts of Germany, and much lower close to the Alps. This changed to the opposite during the night before the eclipse. Now the main concern in Munich was a weather front approaching from the west - would it reach this area before the eclipse? The better chances were then further east, nearer the Austrian border. Many people travelled back and forth along the German highways, many of which quickly became heavily congested.
Preparations About 500 persons, mostly ESO staff with their families and friends, were present at the ESO HQ on the morning of August 11. Prior to the eclipse, they received information in the auditorium about the various aspects of solar eclipses and about the specific conditions of this one. Protective glasses were handed out and the idea was that they would then follow the eclipse from outside. In view of the pessimistic weather forecasts, TV sets had been set up in two large rooms, but in the end most chose to watch the eclipse from the terrace in front of the cafeteria and from the area south of the building. Several telescopes were set up among the trees and on the adjoining field (just harvested). Clouds and Holes It was an unusual solar eclipse experience. Heavy clouds were passing by with sudden rain showers, but fortunately there were also some holes with blue sky in between. While much of the first partial phase was visible through these, some really heavy clouds moved in a few minutes before the total phase, when the light had begun to fade. They drifted slowly - too slowly! - towards the east and the corona was never seen from the ESO HQ site. From here, the view towards the eclipsed Sun only cleared at the very instant of the second "diamond ring" phenomenon. This was beautiful, however, and evidently took most of the photographers by surprise, so very few, if any, photos were made of this memorable moment. Temperature Curve by Benoit Pirenne Temperature Curve on August 11 [JPEG: 646 x 395 pix - 35k] Measured by Benoit Pirenne - see also his meteorological webpage Nevertheless, the entire experience was fantastic - there were all the expected effects, the darkness, the cool air, the wind and the silence. It was very impressive indeed! And it was certainly a unique day in ESO history! Carolyn Collins Petersen from "Sky & Telescope" participated in the conference at ESO in the days before and watched the eclipse from the "Bürgerplatz" in Garching, about 1.5 km south of the ESO HQ. She managed to see part of the totality phase and filed some dramatic reports at the S&T Eclipse Expedition website. They describe very well the feelings of those in this area! Eclipse Photos Several members of the ESO staff went elsewhere and had more luck with the weather, especially at the moment of totality. Below are some of their impressive pictures. Eclipse Photo by Philippe Duhoux First "Diamond Ring" [JPEG: 400 x 292 pix - 34k] [JPEG: 800 x 583 pix - 144k] [JPEG: 2531 x 1846 pix - 1.3M] Eclipse Photo by Philippe Duhoux Totality [JPEG: 400 x 306 pix - 49k] [JPEG: 800 x 612 pix - 262k] [JPEG: 3039 x 1846 pix - 3.6M] Eclipse Photo by Philippe Duhoux Second "Diamond Ring" [JPEG: 400 x 301 pix - 34k] [JPEG: 800 x 601 pix - 163k] [JPEG: 2905 x 2181 pix - 2.0M] The Corona (Philippe Duhoux) "For the observation of the eclipse, I chose a field on a hill offering a wide view towards the western horizon and located about 10 kilometers northwest of Garching." "While the partial phase was mostly cloudy, the sky went clear 3 minutes before the totality and remained so for about 15 minutes. Enough to enjoy the event!" "The images were taken on Agfa CT100 colour slide film with an Olympus OM-20 at the focus of a Maksutov telescope (f = 1000 mm, f/D = 10). The exposure times were automatically set by the camera. During the partial phase, I used an off-axis mask of 40 mm diameter with a mylar filter ND = 3.6, which I removed for the diamond rings and the corona."
Note in particular the strong, detached protuberances to the right of the rim, particularly noticeable in the last photo. Eclipse Photo by Cyril Cavadore Totality [JPEG: 400 x 360 pix - 45k] [JPEG: 800 x 719 pix - 144k] [JPEG: 908 x 816 pix - 207k] The Corona (Cyril Cavadore) "We (C. Cavadore from ESO and L. Bernasconi and B. Gaillard from Obs. de la Cote d'Azur) took this photo in France at Vouzier (Champagne-Ardennes), between Reims and Nancy. A large blue opening developed in the sky at 10 o'clock and we decided to set up the telescope and the camera at that time. During the partial phase, a lot of clouds passed over, making it hard to focus properly. Nevertheless, 5 min before totality, a deep blue sky opened above us, allowing us to watch it and to take this picture. 5-10 minutes after totality, the sky was almost overcast up to the 4th contact." "The image was taken with a 2x2K (14 µm pixels) Thomson "homemade" CCD camera mounted on a CN212 Takahashi (200 mm diameter telescope) with a 1/10,000 neutral filter. The acquisition software set the exposure time (2 sec) and took images in a completely automated way, allowing us to observe the eclipse with the naked eye or with binoculars. To get as many images as possible during totality, we used 2x2 binning to reduce the readout time to 19 sec. Afterwards, one of the best images was flat-fielded and processed with a special algorithm that modelled and fitted the continuous component of the corona, which was then subtracted from the original image. The remaining details were enhanced by unsharp masking and added to the original image. Finally, Gaussian histogram equalization was applied." Eclipse Photo by Eddy Pomaroli Second "Diamond Ring" [JPEG: 400 x 438 pix - 129k] [JPEG: 731 x 800 pix - 277k] [JPEG: 1940 x 2123 pix - 2.3M] Diamond Ring at ESO HQ (Eddy Pomaroli) "Despite the clouds, we saw the second "diamond ring" from the ESO HQ. In a sense, we were quite lucky, since the clouds were very heavy during the total phase and we might easily have missed it all!" "I used an old Minolta SRT-101 camera and a telephoto lens (450 mm; f/8). The exposure was 1/125 sec on Kodak Elite 100 (pushed to 200 ASA). I had the feeling that the Sun would become visible and had the camera pointed, by good luck, in the correct direction as soon as the cloud moved away." Eclipse Photo by Roland Reiss First Partial Phase [JPEG: 400 x 330 pix - 94k] [JPEG: 800 x 660 pix - 492k] [JPEG: 3000 x 2475 pix - 4.5M] End of First Partial Phase (Roland Reiss) "I observed the eclipse from my home in Garching. The clouds kept moving and this was the last photo I was able to obtain during the first partial phase, before they blocked everything." "The photo is interesting because it shows two more images of the eclipsed Sun, below the overexposed central part. In one of them, the remaining narrow crescent is particularly well visible. They are caused by reflections in the camera. I used a Minolta camera and a Fuji colour slide film." Eclipse Spectra Some ESO people went a step further and obtained spectra of the Sun at the time of the eclipse. Eclipse Spectrum by Roland Reiss Coronal Spectrum [JPEG: 400 x 273 pix - 94k] [JPEG: 800 x 546 pix - 492k] [JPEG: 3000 x 2046 pix - 4.5M] Coronal Spectrum (CAOS Group) The Club of Amateurs in Optical Spectroscopy (with Carlos Guirao Sanchez, Gerardo Avila and Jesus Rodriguez) obtained a spectrum of the solar corona from a site in Garching, about 2 km south of the ESO HQ.
"This is a plot of the spectrum and the corresponding CCD image that we took during the total eclipse. The main coronal lines are well visible and have been identified in the figure. Note in particular one at 6374 Angstrom that was first ascribed to the mysterious substance "Coronium". We now know that it is emitted by iron atoms that have lost nine electrons (Fe X)". The equipment was: * Telescope: Schmidt Cassegrain F/6.3; Diameter: 250 mm * FIASCO Spectrograph: Fibre: 135 micron core diameter F = 100 mm collimator, f = 80 mm camera; Grating: 1300 gr/mm blazed at 500 nm; SBIG ST8E CCD camera; Exposure time was 20 sec. Eclipse Spectrum by Bob Fosbury Chromospheric Spectrum [JPEG: 120 x 549 pix - 20k] Chromospheric and Coronal Spectra (Bob Fosbury) "The 11 August 1999 total solar eclipse was seen from a small farm complex called Wolfersberg in open fields some 20km ESE of the centre of Munich. It was chosen to be within the 2min band of totality but likely to be relatively unpopulated". "There were intermittent views of the Sun between first and second contact with quite a heavy rainshower which stopped 9min before totality. A large clear patch of sky revealed a perfect view of the Sun just 2min before second contact and it remained clear for at least half an hour after third contact". "The principal project was to photograph the spectrum of the chromosphere during totality using a transmission grating in front of a moderate telephoto lens. The desire to do this was stimulated by a view of the 1976 eclipse in Australia when I held the same grating up to the eclipsed Sun and was thrilled by the view of the emission line spectrum. The trick now was to get the exposure right!". "A sequence of 13 H-alpha images was combined into a looping movie. The exposure times were different, but some attempt has been made to equalise the intensities. The last two frames show the low chromosphere and then the photosphere emerging at 3rd contact. The [FeX] coronal line can be seen on the left in the middle of the sequence. I used a Hasselblad camera and Agfa slide film (RSX II 100)".

  11. TerraLook: Providing easy, no-cost access to satellite images for busy people and the technologically disinclined

    USGS Publications Warehouse

    Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas

    2008-01-01

    Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.

  12. TerraLook: Providing easy, no-cost access to satellite images for busy people and the technologically disinclined

    USGS Publications Warehouse

    Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas

    2007-01-01

    Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.

  13. Lossless compression of grayscale medical images: effectiveness of traditional and state-of-the-art approaches

    NASA Astrophysics Data System (ADS)

    Clunie, David A.

    2000-05-01

    Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1) and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD 15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand, six hundred and seventy-nine (3,679) single-frame grayscale images from multiple anatomical regions, modalities and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both outperformed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous-pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform-based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality, for which JPEG-LS did better (MG digital vendor A: JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.
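
    The figures quoted above are compression ratios (uncompressed size divided by compressed size). Below is a minimal sketch of how such ratios can be measured for the two general-purpose baselines mentioned, gzip and PNG, using standard Python libraries; the DICOM-oriented codecs (JPEG-LS, JPEG 2000, CALIC) would require dedicated encoders. The file name is illustrative.

        import io
        import zlib
        import numpy as np
        from PIL import Image

        def compression_ratios(gray: np.ndarray) -> dict:
            """Return {scheme: uncompressed_bytes / compressed_bytes} for a grayscale image."""
            raw = gray.tobytes()
            ratios = {"gzip": len(raw) / len(zlib.compress(raw, 9))}
            buf = io.BytesIO()
            Image.fromarray(gray).save(buf, format="PNG", optimize=True)
            ratios["PNG"] = len(raw) / buf.tell()
            return ratios

        # e.g. compression_ratios(np.array(Image.open("ct_slice.png").convert("L")))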

  14. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  15. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  16. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  17. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  18. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  19. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

    PubMed Central

    2013-01-01

    Background Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499

  20. JHelioviewer. Time-dependent 3D visualisation of solar and heliospheric data

    NASA Astrophysics Data System (ADS)

    Müller, D.; Nicula, B.; Felix, S.; Verstringe, F.; Bourgoignie, B.; Csillaghy, A.; Berghmans, D.; Jiggens, P.; García-Ortiz, J. P.; Ireland, J.; Zahniy, S.; Fleck, B.

    2017-09-01

    Context. Solar observatories are providing the world-wide community with a wealth of data, covering wide time ranges (e.g. Solar and Heliospheric Observatory, SOHO), multiple viewpoints (Solar TErrestrial RElations Observatory, STEREO), and returning large amounts of data (Solar Dynamics Observatory, SDO). In particular, the large volume of SDO data presents challenges; the data are available only from a few repositories, and full-disk, full-cadence data for reasonable durations of scientific interest are difficult to download, due to their size and the download rates available to most users. From a scientist's perspective this poses three problems: accessing, browsing, and finding interesting data as efficiently as possible. Aims: To address these challenges, we have developed JHelioviewer, a visualisation tool for solar data based on the JPEG 2000 compression standard and part of the open source ESA/NASA Helioviewer Project. Since the first release of JHelioviewer in 2009, the scientific functionality of the software has been extended significantly, and the objective of this paper is to highlight these improvements. Methods: The JPEG 2000 standard offers useful new features that facilitate the dissemination and analysis of high-resolution image data and offers a solution to the challenge of efficiently browsing petabyte-scale image archives. The JHelioviewer software is open source, platform independent, and extendable via a plug-in architecture. Results: With JHelioviewer, users can visualise the Sun for any time period between September 1991 and today; they can perform basic image processing in real time, track features on the Sun, and interactively overlay magnetic field extrapolations. The software integrates solar event data and a timeline display. Once an interesting event has been identified, science quality data can be accessed for in-depth analysis. As a first step towards supporting science planning of the upcoming Solar Orbiter mission, JHelioviewer offers a virtual camera model that enables users to set the vantage point to the location of a spacecraft or celestial body at any given time.

  1. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software makes it possible to deal with huge images using standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre that does image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.
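
    Dividing a slide into a mosaic of small tiles, as NDPITools and LargeTIFFTools do, can be sketched with Pillow for images that do fit in memory; the cited tools exist precisely because whole-slide images often do not, so they stream the file region by region instead. Tile size and file names are illustrative.

        from PIL import Image

        Image.MAX_IMAGE_PIXELS = None            # allow very large (trusted) images

        def make_mosaic(path: str, tile: int = 2048, overlap: int = 0):
            """Split an image into tile x tile JPEG pieces with optional overlap."""
            img = Image.open(path)
            w, h = img.size
            step = tile - overlap
            for y in range(0, h, step):
                for x in range(0, w, step):
                    box = (x, y, min(x + tile, w), min(y + tile, h))
                    img.crop(box).save(f"tile_{y}_{x}.jpg", quality=90)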

  2. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software makes it possible to deal with huge images using standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre that does image analysis of many slides on a computer cluster. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272 PMID:23402499

  3. Workflow opportunities using JPEG 2000

    NASA Astrophysics Data System (ADS)

    Foshee, Scott

    2002-11-01

    JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling to JPEG rather than a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.

  4. JPEG vs. JPEG 2000: an objective comparison of image encoding quality

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan

    2004-11-01

    This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.

  5. Unequal power allocation for JPEG transmission over MIMO systems.

    PubMed

    Sabir, Muhammad Farooq; Bovik, Alan Conrad; Heath, Robert W

    2010-02-01

    With the introduction of multiple transmit and receive antennas in next-generation wireless systems, real-time image and video communication are expected to become quite common, since very high data rates will become available along with improved data reliability. New joint transmission and coding schemes that exploit the advantages of multiple antenna systems matched with source statistics are expected to be developed. Based on this idea, we present an unequal power allocation scheme for transmission of JPEG-compressed images over multiple-input multiple-output systems employing spatial multiplexing. The JPEG-compressed image is divided into different quality layers, and different layers are transmitted simultaneously from different transmit antennas using unequal transmit power, with a constraint on the total transmit power during any symbol period. Results show that our unequal power allocation scheme provides significant image quality improvement compared to different equal power allocation schemes, with peak signal-to-noise ratio gains as high as 14 dB at low signal-to-noise ratios.

  6. Digital image modification detection using color information and its histograms.

    PubMed

    Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na

    2016-09-01

    The rapid development of many open source and commercial image editing software packages makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve robustness against the use of JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention of concealing tampering and reducing tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained on these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.

  7. A generalized Benford's law for JPEG coefficients and its applications in image forensics

    NASA Astrophysics Data System (ADS)

    Fu, Dongdong; Shi, Yun Q.; Su, Wei

    2007-02-01

    In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed, including the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for a JPEG-compressed bitmap image, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
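
    The first-digit statistic on which the model is built can be computed directly from the block-DCT coefficients of an image. Below is a minimal sketch in Python; the classical Benford distribution log10(1 + 1/d) is included as a reference, while fitting the generalized (parametric) law from the paper is omitted.

        import numpy as np
        from scipy.fft import dctn

        def first_digit_distribution(gray: np.ndarray) -> np.ndarray:
            """Empirical distribution of first digits (1-9) of 8x8 block-DCT magnitudes."""
            h, w = (gray.shape[0] // 8) * 8, (gray.shape[1] // 8) * 8
            blocks = gray[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2).astype(float)
            coeffs = dctn(blocks - 128, axes=(2, 3), norm="ortho")   # per-block 2-D DCT
            mags = np.abs(coeffs).ravel()
            mags = mags[mags >= 1]                                   # skip zero/sub-unity coefficients
            first = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
            return np.bincount(first, minlength=10)[1:10] / first.size

        benford = np.log10(1 + 1 / np.arange(1, 10))   # classical Benford's law for comparison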

  8. Image steganalysis using Artificial Bee Colony algorithm

    NASA Astrophysics Data System (ADS)

    Sajedi, Hedieh

    2017-09-01

    Steganography is the science of secure communication in which the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of the secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources. As a result, a preprocessing phase is needed that can reduce the execution time and computational resources required. In this paper, we propose a new feature-based blind steganalysis method for distinguishing stego images from cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm. The ABC algorithm is inspired by honeybees' social behaviour in their search for food sources. In the proposed method, classifier performance and the dimension of the selected feature vector depend on the wrapper-based methods used. The experiments are performed using two large data sets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.

  9. An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process

    NASA Astrophysics Data System (ADS)

    Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre

    2015-02-01

    This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted the CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity as the traditional black and white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations introduced by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.

  10. JPIC-Rad-Hard JPEG2000 Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov

    2010-08-01

    JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources in optical, panchromatic and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces, the JPEG2K-E IP core from Alma implements the compression algorithm [2], and Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180 nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.

  11. Request redirection paradigm in medical image archive implementation.

    PubMed

    Dragan, Dinu; Ivetić, Dragan

    2012-08-01

    It is widely recognized that JPEG2000 addresses several issues in medical imaging: storage, communication, sharing, remote access, interoperability, and presentation scalability. Therefore, JPEG2000 support was added to the DICOM standard in Supplement 61. Two approaches to supporting JPEG2000 medical images are explicitly defined by the DICOM standard: replacing the DICOM image format with the corresponding JPEG2000 codestream, or using the Pixel Data Provider service, DICOM Supplement 106. The latter involves a two-step retrieval of the medical image: a DICOM request and response from a DICOM server, followed by a JPIP request and response from a JPEG2000 server. We propose a novel strategy for transmission of scalable JPEG2000 images extracted from a single codestream over a DICOM network using the DICOM Private Data Element, without sacrificing system interoperability. It employs the request redirection paradigm: a DICOM request and response from the JPEG2000 server through the DICOM server. The paper presents a programming solution for implementation of the request redirection paradigm in a DICOM-transparent manner. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  12. JPEG XS call for proposals subjective evaluations

    NASA Astrophysics Data System (ADS)

    McNally, David; Bruylants, Tim; Willème, Alexandre; Ebrahimi, Touradj; Schelkens, Peter; Macq, Benoit

    2017-09-01

    In March 2016 the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, issued a call for proposals soliciting compression technologies for a low-latency, lightweight and visually transparent video compression scheme. Within the JPEG family of standards, this scheme was denominated JPEG XS. The subjective evaluation of visually lossless compressed video sequences at high resolutions and bit depths poses particular challenges. This paper describes the adopted procedures, the subjective evaluation setup, the evaluation process and summarizes the obtained results which were achieved in the context of the JPEG XS standardization process.

  13. Automated Selection of Hotspots (ASH): enhanced automated segmentation and adaptive step finding for Ki67 hotspot detection in adrenal cortical cancer.

    PubMed

    Lu, Hao; Papathomas, Thomas G; van Zessen, David; Palli, Ivo; de Krijger, Ronald R; van der Spek, Peter J; Dinjens, Winand N M; Stubbs, Andrew P

    2014-11-25

    In the prognosis and therapeutics of adrenal cortical carcinoma (ACC), the selection of the most proliferatively active areas (hotspots) within a slide and objective quantification of the immunohistochemical Ki67 Labelling Index (LI) are of critical importance. In addition to intratumoral heterogeneity in proliferative rate, i.e. levels of Ki67 expression within a given ACC, lack of uniformity and reproducibility in the method of quantification of the Ki67 LI may confound an accurate assessment of the Ki67 LI. We have implemented an open source toolset, Automated Selection of Hotspots (ASH), for automated hotspot detection and quantification of the Ki67 LI. ASH utilizes the NanoZoomer Digital Pathology Image (NDPI) splitter to convert the NDPI-format digital slide scanned from the Hamamatsu instrument into a conventional TIFF or JPEG image for the automated segmentation and adaptive step finding hotspot detection algorithm. Quantitative hotspot ranking is provided by functionality from the open source application ImmunoRatio as part of the ASH protocol. The output is a ranked set of hotspots with concomitant quantitative values based on whole-slide ranking. We have implemented an open source tool for automated detection and quantitative ranking of hotspots to support histopathologists in selecting the 'hottest' hotspot areas in adrenocortical carcinoma. To provide the wider community with easy access to ASH, we implemented a Galaxy virtual machine (VM) of ASH, which is available from http://bioinformatics.erasmusmc.nl/wiki/Automated_Selection_of_Hotspots . The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_216.

  14. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper discusses the properties of watermarking medical images. We also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is then embedded in the least significant bits (LSB) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the hash (SHA-256) of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
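
    The embedding step can be sketched as follows. This is a minimal LSB-embedding illustration, not the paper's reversible scheme: it uses a 16x16 block so that one LSB per pixel holds all 256 digest bits (the paper embeds into an 8x8 block), and it computes the hash with the target LSBs cleared so that verification can repeat the same computation, which is an assumption. The block location is illustrative.

        import hashlib
        import numpy as np

        def embed_digest(image: np.ndarray, roni_origin=(0, 0)) -> np.ndarray:
            """Embed the image's SHA-256 digest into the LSBs of a 16x16 block in the RONI."""
            y, x = roni_origin
            out = image.astype(np.uint8).copy()
            out[y:y + 16, x:x + 16] &= 0xFE                    # clear the 256 target LSBs
            digest = hashlib.sha256(out.tobytes()).digest()    # 32 bytes = 256 bits
            bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
            out[y:y + 16, x:x + 16] |= bits.reshape(16, 16)
            return out

        # Verification repeats the computation: clear the same LSBs, hash the image,
        # and compare with the bits extracted from the block.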

  15. Dynamic power scheduling system for JPEG2000 delivery over wireless networks

    NASA Astrophysics Data System (ADS)

    Martina, Maurizio; Vacca, Fabrizio

    2003-06-01

    The diffusion of third-generation mobile terminals is encouraging the development of new multimedia-based applications. The reliable transmission of audiovisual content will gain major interest, being one of the most valuable services. Nevertheless, the mobile scenario is severely power constrained: high compression ratios and refined energy management strategies are highly advisable. JPEG2000 as the source encoding stage assures excellent performance with extremely good visual quality. However, the limited power budget makes it necessary to limit the computational effort in order to save as much power as possible. Since the wireless environment is error prone, strong error-resilience features need to be employed. This paper investigates the trade-off between quality and power in such a challenging environment.

  16. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error-resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. Due to the error-resilience modes, some bits are known to be either correct or in error. The positions of these bits are fed back to the channel decoder, and the log-likelihood ratios (LLR) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms non-source-controlled decoding by up to 5 dB in terms of PSNR for various reconstructed images.
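
    A toy sketch of the feedback step discussed above, assuming NumPy arrays: bits flagged by the source decoder as known-correct or known-erroneous have their LLRs rescaled before the next LDPC iteration. The SNR-dependent weighting function below is an illustrative placeholder, not the function derived in the paper.

        import numpy as np

        def reweight_llrs(llrs, known_correct, known_error, hard_bits, snr_db):
            """Rescale the LLRs of bits flagged by the JPEG2000 error-resilience
            checks. 'known_correct'/'known_error' are boolean masks fed back by
            the source decoder; 'hard_bits' are the tentative hard decisions.
            Illustrative placeholder weighting, not the paper's exact rule."""
            llrs = llrs.copy()
            # Larger boost at low SNR, smaller at high SNR (illustrative choice).
            factor = 1.0 + 4.0 / (1.0 + 10.0 ** (snr_db / 10.0))
            # Convention: positive LLR favours bit 0, negative favours bit 1.
            signs = np.where(hard_bits == 0, 1.0, -1.0)
            # Reinforce bits believed correct, attenuate and flip bits believed wrong.
            llrs[known_correct] = factor * np.abs(llrs[known_correct]) * signs[known_correct]
            llrs[known_error] = -llrs[known_error] / factor
            return llrs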

  17. Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography

    NASA Astrophysics Data System (ADS)

    Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

    2006-03-01

    Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with enlarged medical image databases and the rapid growth of patient case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, thereby reducing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for Digital Imaging and Communications in Medicine (DICOM), and high compression ratios are considered useful for medical imagery. This study therefore evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system, and the diagnostic accuracy is measured by receiver operating characteristic (ROC) analysis; that is, ROC curves are used to compare the diagnostic performance of two or more sets of reconstructed images. The analysis enables a comparison of the JPEG and JPEG2000 compression ratios for 3-D US images, and the results indicate the bit rates that can be used with JPEG and JPEG2000 for 3-D breast US images.

  18. Dynamic code block size for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  19. Steganalysis based on JPEG compatibility

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2001-11-01

    In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression using a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding images that were originally stored in the JPEG format as cover images for spatial-domain steganography.
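
    A simplified stand-in for the block compatibility test described above, assuming SciPy is available and the quantization matrix is known: a block that genuinely came out of a JPEG decoder should be a fixed point of re-quantizing its DCT with that matrix and reconstructing. The paper's full test is more elaborate; this sketch only illustrates the idea.

        import numpy as np
        from scipy.fft import dctn, idctn

        def is_jpeg_compatible(block, q_matrix, tol=1):
            """Check whether an 8x8 pixel block could have been produced by JPEG
            decompression with q_matrix: re-quantizing its DCT and rebuilding
            should return the same block up to rounding. Simplified stand-in."""
            shifted = block.astype(np.float64) - 128.0          # JPEG level shift
            coeffs = dctn(shifted, norm='ortho')                # 2-D type-II DCT
            quantized = np.round(coeffs / q_matrix)
            rebuilt = idctn(quantized * q_matrix, norm='ortho') + 128.0
            rebuilt = np.clip(np.round(rebuilt), 0, 255)
            return np.max(np.abs(rebuilt - block)) <= tol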

  20. Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2016-01-01

    Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low-bit-rate, JPEG-formatted color images may allow images to be compressed further while maintaining equivalent quality at a smaller file size or bit rate. For RGB, an image is decomposed into three color bands--red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (once for red, once for green, and once for blue) and the outputs are recombined into a single color image. The ESAP algorithm may also be applied repeatedly to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3 while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.

  1. Visualization of JPEG Metadata

    NASA Astrophysics Data System (ADS)

    Malik Mohamad, Kamaruddin; Deris, Mustafa Mat

    A JPEG image embeds a great deal of information beyond the graphics themselves. Visualization of its metadata would benefit digital forensic investigators who need to view embedded data, including in corrupted images where no graphics can be displayed, in order to assist evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors and extraction tools are already available, but most focus on visualizing the attribute information of the JPEG Exif segment. None, however, visualize metadata by consolidating a marker summary, the header structure, the Huffman tables and the quantization tables in a single program. In this paper, metadata visualization is done by developing a program able to summarize all existing markers, the header structure, the Huffman tables and the quantization tables of a JPEG file. The result shows that visualization of metadata helps to view the hidden information within a JPEG file more easily.
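
    A minimal sketch of the kind of marker consolidation described above: walk the segments of a JPEG file and print each marker with its offset and length, stopping at the start-of-scan marker. Marker codes follow the JPEG standard; file handling and output format are illustrative choices.

        import struct

        def list_jpeg_markers(path):
            """Print a summary of the marker segments of a JPEG file
            (marker name, offset, declared length). Stops at SOS, after
            which entropy-coded scan data follows."""
            names = {0xD8: 'SOI', 0xC0: 'SOF0', 0xC2: 'SOF2', 0xC4: 'DHT',
                     0xDB: 'DQT', 0xDD: 'DRI', 0xDA: 'SOS', 0xD9: 'EOI',
                     0xE0: 'APP0 (JFIF)', 0xE1: 'APP1 (Exif)', 0xFE: 'COM'}
            with open(path, 'rb') as f:
                data = f.read()
            pos = 0
            while pos < len(data) - 1:
                if data[pos] != 0xFF:
                    pos += 1
                    continue
                marker = data[pos + 1]
                if marker in (0x00, 0xFF):          # stuffed or fill byte, skip
                    pos += 1
                    continue
                name = names.get(marker, 'marker 0x%02X' % marker)
                if marker in (0xD8, 0xD9):          # SOI/EOI carry no payload
                    print('%-12s at offset %d' % (name, pos))
                    pos += 2
                else:
                    (length,) = struct.unpack('>H', data[pos + 2:pos + 4])
                    print('%-12s at offset %d, length %d' % (name, pos, length))
                    if marker == 0xDA:              # scan data follows, stop here
                        break
                    pos += 2 + length

        # Usage: list_jpeg_markers('photo.jpg')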

  2. Design of a motion JPEG (M/JPEG) adapter card

    NASA Astrophysics Data System (ADS)

    Lee, D. H.; Sudharsanan, Subramania I.

    1994-05-01

    In this paper we describe the design of a high-performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high-quality digital video. The card accepts digital pixels from either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube CL550 JPEG processor that complies with baseline JPEG. We give broad descriptions of the hardware that controls the video interface, the CL550, and the system interface. Some critical design points that enhance the overall performance of M/JPEG systems are pointed out. The adapter card is controlled by interrupt-driven software running under DOS. The software performs a variety of tasks, including change of color space (RGB or YUV), change of quantization and Huffman tables, odd and even field control, and some diagnostic operations.

  3. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression of 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which quantifies the deviation between the observed probabilities and Benford's Law. Over the 1338 test images, the mean divergence for uncompressed DWT coefficients is approximately 0.0016, lower than that of DCT coefficients at 0.0034; for images compressed with JPEG2000 at a rate of 0.1, however, the mean divergence rises to 0.0108, much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the first-digit probabilities and divergences at JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the differences among them could be used for further analysis to estimate unknown JPEG2000 compression rates.
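
    A sketch of the first-digit analysis described above, assuming PyWavelets is available for the DWT and a simple chi-square-like deviation measure; the paper's exact wavelet, subband selection and divergence factor are not reproduced here.

        import numpy as np
        import pywt  # PyWavelets, assumed available

        BENFORD = np.log10(1.0 + 1.0 / np.arange(1, 10))   # P(d) = log10(1 + 1/d)

        def first_digit_divergence(image):
            """First-digit distribution of the detail DWT coefficients of an
            image and its deviation from Benford's Law (illustrative measure)."""
            _, (ch, cv, cd) = pywt.dwt2(image.astype(np.float64), 'bior4.4')
            coeffs = np.abs(np.concatenate([ch.ravel(), cv.ravel(), cd.ravel()]))
            coeffs = coeffs[coeffs >= 1.0]                  # need a leading digit
            digits = (coeffs / 10.0 ** np.floor(np.log10(coeffs))).astype(int)
            hist = np.bincount(digits, minlength=10)[1:10].astype(np.float64)
            probs = hist / hist.sum()
            divergence = np.sum((probs - BENFORD) ** 2 / BENFORD)
            return probs, divergence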

  4. A comparison of the fractal and JPEG algorithms

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Shahshahani, M.

    1991-01-01

    A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.

  5. Non-parametric adaptative JPEG fragments carving

    NASA Astrophysics Data System (ADS)

    Amrouche, Sabrina Cherifa; Salamani, Dalila

    2018-04-01

    The most challenging JPEG recovery tasks arise when the file header is missing. In this paper we propose to use a two layer machine learning model to restore headerless JPEG images. We first build a classifier able to identify the structural properties of the images/fragments and then use an AutoEncoder (AE) to learn the fragment features for the header prediction. We define a JPEG universal header and the remaining free image parameters (Height, Width) are predicted with a Gradient Boosting Classifier. Our approach resulted in 90% accuracy using the manually defined features and 78% accuracy using the AE features.

  6. Detection of shifted double JPEG compression by an adaptive DCT coefficient model

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua

    2014-12-01

    In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.

  7. A block-based JPEG-LS compression technique with lossless region of interest

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, thanks to its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion because of the context dependence of the algorithm, and it has a low compression ratio compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. We then adopt a block-based image compression technique to decrease the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks that include the whole or part of the region of interest (ROI) and JPEG-LS near-lossless compression to the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.

  8. Estimation of color filter array data from JPEG images for improved demosaicking

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Reeves, Stanley J.

    2006-02-01

    On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.

  9. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  10. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit

    2008-12-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  11. Scan-Based Implementation of JPEG 2000 Extensions

    NASA Technical Reports Server (NTRS)

    Rountree, Janet C.; Webb, Brian N.; Flohr, Thomas J.; Marcellin, Michael W.

    2001-01-01

    JPEG 2000 Part 2 (Extensions) contains a number of technologies that are of potential interest in remote sensing applications. These include arbitrary wavelet transforms, techniques to limit boundary artifacts in tiles, multiple component transforms, and trellis-coded quantization (TCQ). We are investigating the addition of these features to the low-memory (scan-based) implementation of JPEG 2000 Part 1. A scan-based implementation of TCQ has been realized and tested, with a very small performance loss as compared with the full image (frame-based) version. A proposed amendment to JPEG 2000 Part 2 will effect the syntax changes required to make scan-based TCQ compatible with the standard.

  12. JPEG2000 vs. full frame wavelet packet compression for smart card medical records.

    PubMed

    Leehan, Joaquín Azpirox; Lerallut, Jean-Francois

    2006-01-01

    This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.

  13. Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.

    PubMed

    Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella

    2010-07-01

    Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them only depends on their number and not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates, and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation, and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method compares favorably with state-of-the-art MDC techniques.

  14. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

    The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm built on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
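
    A rough sketch of the thresholding-plus-morphology segmentation mentioned above, assuming a CT slice in Hounsfield units held as a NumPy array; the threshold and structuring-element size are illustrative choices, not the values used in the study.

        import numpy as np
        from scipy import ndimage

        def segment_skull(ct_slice, bone_hu=300):
            """Separate the bright skull shell from the diagnostically relevant
            interior by simple thresholding and morphological cleanup."""
            shell = ct_slice > bone_hu                        # bright skull voxels
            shell = ndimage.binary_closing(shell, structure=np.ones((5, 5)))
            head = ndimage.binary_fill_holes(shell)           # shell + interior
            interior = head & ~shell                          # diagnostic region
            return shell, interior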

  15. Google Books: making the public domain universally accessible

    NASA Astrophysics Data System (ADS)

    Langley, Adam; Bloomberg, Dan S.

    2007-01-01

    Google Book Search is working with libraries and publishers around the world to digitally scan books. Some of those works are now in the public domain and, in keeping with Google's mission to make all the world's information useful and universally accessible, we wish to allow users to download them all. For users, it is important that the files are as small as possible and of printable quality. This means that a single codec for both text and images is impractical. We use PDF as a container for a mixture of JBIG2 and JPEG2000 images which are composed into a final set of pages. We discuss both the implementation of an open source JBIG2 encoder, which we use to compress text data, and the design of the infrastructure needed to meet the technical, legal and user requirements of serving many scanned works. We also cover the lessons learnt about dealing with different PDF readers and how to write files that work on most of the readers, most of the time.

  16. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translate to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation mainly during download of requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then during download, the system exploits known signal priors-sparsity prior and graph-signal smoothness prior-for reverse mapping to recover original fine quantization bin indices, with either deterministic guarantee (lossless mode) or statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).

  17. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. Photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed with JPEG and Wavelet compression to five different sizes. The compressed images were assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images, (ii) semi-subjectively, by assessing the visibility of blood vessels, and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, Wavelet-compressed images produced less RMS error than JPEG-compressed images, and blood vessel branching could be observed to a greater extent after Wavelet compression than after JPEG compression for a given image size. Overall, images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for Wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.

  18. JPEG2000 and dissemination of cultural heritage over the Internet.

    PubMed

    Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos

    2004-03-01

    By applying the latest technologies in image compression to manage the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine JPEG2000 image compression with client-server socket connections and a client browser plug-in to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.

  19. JPEG2000 encoding with perceptual distortion control.

    PubMed

    Liu, Zhen; Karam, Lina J; Watson, Andrew B

    2006-07-01

    In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.

  20. Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding

    NASA Astrophysics Data System (ADS)

    Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin

    We present a generalised and improved version of the category attack on LSB steganography in JPEG images with straddled embedding path. It detects more reliably low embedding rates and is also less disturbed by double compressed images. The proposed methods are evaluated on several thousand images. The results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits a more reliable detection, although it is based on first order statistics only. Its simple structure makes it very fast.

  1. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion, and the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The algorithm adaptively selects the image coding method, either CSI-based modified JPEG or standard JPEG, under a given target bit rate using so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
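
    A small sketch of the downsample-before-coding strategy discussed above, assuming Pillow is available; bicubic resampling stands in for the paper's cubic-spline interpolation, and the scale and quality values are illustrative.

        import io
        from PIL import Image

        def downsample_then_jpeg(img, scale=0.5, quality=30):
            """Encode at reduced resolution, then upsample after decoding.
            Returns the restored full-size image and the coded size in bytes."""
            img = img.convert('RGB')                          # JPEG has no alpha/palette
            small = img.resize((int(img.width * scale), int(img.height * scale)),
                               Image.BICUBIC)
            buf = io.BytesIO()
            small.save(buf, format='JPEG', quality=quality)
            coded_bytes = buf.tell()
            decoded = Image.open(io.BytesIO(buf.getvalue()))
            restored = decoded.resize(img.size, Image.BICUBIC)
            return restored, coded_bytes

        # Usage: restored, nbytes = downsample_then_jpeg(Image.open('photo.png'))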

  2. A threshold-based fixed predictor for JPEG-LS image compression

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    In JPEG-LS, the fixed predictor based on the median edge detector (MED) only detects horizontal and vertical edges, and thus produces large prediction errors in the vicinity of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges but also diagonal edges. For certain thresholds, the proposed scheme simplifies to other existing schemes, so it can also be regarded as an integration of these schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detection in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
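
    For reference, the standard JPEG-LS MED predictor is shown below, together with a hedged illustration of a threshold-based diagonal check in the spirit of the scheme above; the second function is a hypothetical extension, not the paper's exact rules or threshold.

        def med_predict(a, b, c):
            """Median edge detector (MED) fixed predictor from JPEG-LS:
            a = left, b = above, c = upper-left neighbour of the current pixel."""
            if c >= max(a, b):
                return min(a, b)        # edge above or to the left
            if c <= min(a, b):
                return max(a, b)
            return a + b - c            # smooth region: planar prediction

        def thresholded_predict(a, b, c, d, threshold=8):
            """Illustrative threshold-based extension: if the upper-right
            neighbour d suggests a diagonal edge, blend it into the prediction.
            The paper's actual rules and threshold are not reproduced here."""
            if abs(d - c) > threshold and abs(a - c) <= threshold:
                return (a + d) // 2     # hypothesised 45-degree edge
            return med_predict(a, b, c)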

  3. Privacy protection in surveillance systems based on JPEG DCT baseline compression and spectral domain watermarking

    NASA Astrophysics Data System (ADS)

    Sablik, Thomas; Velten, Jörg; Kummert, Anton

    2015-03-01

    A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected using the presented detection method, which uses Haar cascades to detect faces. Integral images are used to speed up the calculations and the detection, and multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression: a spread-spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.

  4. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify the statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are used directly as features for classification. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.

  5. Toward privacy-preserving JPEG image retrieval

    NASA Astrophysics Data System (ADS)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
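
    A plaintext-domain sketch of the kind of blockwise local-variance features the retrieval scheme above compares; the encrypted-domain extraction with permutation and stream ciphers is omitted, and the block size and directions are illustrative.

        import numpy as np

        def block_variance_features(img, bs=8):
            """Per-block variances of directional pixel differences
            (horizontal, vertical, diagonal, anti-diagonal)."""
            h = img.shape[0] // bs * bs
            w = img.shape[1] // bs * bs
            x = img[:h, :w].astype(np.float64)
            feats = []
            for r in range(0, h, bs):
                for c in range(0, w, bs):
                    b = x[r:r+bs, c:c+bs]
                    feats.append([
                        np.var(b[:, 1:] - b[:, :-1]),      # horizontal
                        np.var(b[1:, :] - b[:-1, :]),      # vertical
                        np.var(b[1:, 1:] - b[:-1, :-1]),   # diagonal
                        np.var(b[1:, :-1] - b[:-1, 1:]),   # anti-diagonal
                    ])
            return np.asarray(feats)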

  6. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.

  7. Color Facsimile.

    DTIC Science & Technology

    1995-02-01

    Modification of existing JPEG compression and decompression software available from the Independent JPEG Users Group to process CIELAB color images and to use...externally specified Huffman tables. In addition, a conversion program was written to convert CIELAB color space images to red, green, blue color space.

  8. New Paranal Views

    NASA Astrophysics Data System (ADS)

    2001-01-01

    Last year saw very good progress at ESO's Paranal Observatory, the site of the Very Large Telescope (VLT). The third and fourth 8.2-m Unit Telescopes, MELIPAL and YEPUN, had "First Light" (cf. PR 01/00 and PR 18/00), while the first two, ANTU and KUEYEN, were busy collecting first-class data for hundreds of astronomers. Meanwhile, work continued towards the next phase of the VLT project, the combination of the telescopes into the VLT Interferometer. The test instrument, VINCI (cf. PR 22/00), is now being installed in the VLTI Laboratory at the centre of the observing platform on the top of Paranal. Below is a new collection of video sequences and photos that illustrate the latest developments at the Paranal Observatory. They were obtained by the EPR Video Team in December 2000. The photos are available in different formats, including "high-resolution" suitable for reproduction purposes. A related ESO Video News Reel for professional broadcasters will soon become available and will be announced via the usual channels. ESO PR Video Clip 02a/01, "Paranal Observatory (December 2000)", shows some of the construction activities at the Paranal Observatory in December 2000, beginning with a general view of the site. Then follow views of the Residencia, a building designed by Architects Auer and Weber in Munich - it integrates very well into the desert, creating a welcome recreational site for staff and visitors in this harsh environment. The next scenes focus on the "stations" for the auxiliary telescopes for the VLTI and the installation of two delay lines in the 140-m long underground tunnel. The following part of the video clip shows the start-up of the excavation work for the 2.6-m VLT Survey Telescope (VST) as well as the location known as the "NTT Peak", now under consideration for the installation of the 4-m VISTA telescope. The last images are from the second 8.2-m Unit Telescope, KUEYEN, which has been in full use by astronomers with the UVES and FORS2 instruments since April 2000. PR Photo 04a/01 shows an afternoon view from the Paranal summit towards East, with the Base Camp and the new Residencia on the slope to the right, above the valley in the shadow of the mountain. PR Photo 04b/01 shows the ramp leading to the main entrance to the partly subterranean Residencia, with the steel skeleton for the dome over the central area in place. PR Photo 04c/01 is an indoor view of the reception hall under the dome, looking towards the main entrance. PR Photo 04d/01 shows the ramps from the reception area towards the rooms.
    The VLT Interferometer: The Delay Lines constitute a most important element of the VLT Interferometer, cf. PR Photos 26a-e/00. At this moment, two Delay Lines are operational on site; a third system will be integrated early this year. The VLTI Delay Line is located in an underground tunnel that is 168 metres long and 8 metres wide. This configuration has been designed to accommodate up to eight Delay Lines, including their transfer optics, in an ideal environment: stable temperature, high degree of cleanliness, low levels of straylight, low air turbulence. The positions of the Delay Line carriages are computed to adjust the Optical Path Lengths required for the fringe pattern observation. The positions are controlled in real time by a laser metrology system, specially developed for this purpose. The position precision is about 20 nm (1 nm = 10^-9 m, or 1 millionth of a millimetre) over a distance of 120 metres. The maximum velocity is 0.50 m/s in position mode and 0.05 m/s in operation. The system is designed for 25 years of operation and to survive earthquakes up to magnitude 8.6 on the Richter scale. The VLTI Delay Line is a three-year project, carried out by ESO in collaboration with Dutch Space Holdings (formerly Fokker Space) and TPD-TNO. ESO PR Video Clip 02b/01, "VLTI Delay Lines (December 2000)", shows the Delay Lines of the VLT Interferometer facility at Paranal during tests. One of the carriages is moving on 66-metre long rectified rails, driven by a linear motor. The carriage is equipped with three wheels in order to preserve high guidance accuracy. Another important element is the Cat's Eye that reflects the light from the telescope to the VLT instrumentation; this optical system is made of aluminium (including the mirrors) to avoid thermo-mechanical problems. PR Photo 04e/01 shows one of the 30 "stations" for the movable 1.8-m Auxiliary Telescopes; when one of these telescopes is positioned ("parked") on top of it, the light will be guided through the hole towards the Interferometric Tunnel and the Delay Lines. PR Photo 04f/01 shows a general view of the Interferometric Tunnel and the Delay Lines. PR Photo 04g/01 shows one of the Delay Line carriages in parking position. The "NTT Peak" is a mountain top located about 2 km to the north of Paranal. It received this name when ESO considered moving the 3.58-m New Technology Telescope from La Silla to this peak. The possibility of installing the 4-m VISTA telescope (cf. PR 03/00) on this peak is now being discussed.
    PR Photo 04h/01 shows the view from the "NTT Peak" towards south, with the Paranal mountain and the VLT enclosures in the background. PR Photo 04i/01 is a view towards the "NTT Peak" from the top of the Paranal mountain; the access road and the concrete pillar that was used to support a site-testing telescope at the top of this peak can be seen. This is the caption to ESO PR Photos 04a-i/01 and PR Video Clips 02a-b/01. They may be reproduced if credit is given to the European Southern Observatory.

  9. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  10. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras, and altering camera settings like ISO sensitivity, exposure time and aperture for low-light capture results in noise amplification, motion blur and reduced depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposure images, image fusion, artifact removal and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
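
    A toy version of the sigmoidal boosting stage mentioned above, assuming an 8-bit image held as a NumPy array; the midpoint and gain values are illustrative, not the parameters used by the authors.

        import numpy as np

        def sigmoid_boost(img, midpoint=0.35, gain=8.0):
            """Boost a short-exposure image with a sigmoid tone curve before
            fusion. Operates on normalised [0, 1] pixel values."""
            x = img.astype(np.float64) / 255.0
            boosted = 1.0 / (1.0 + np.exp(-gain * (x - midpoint)))
            # Rescale so pure black and pure white map back to 0 and 255.
            lo = 1.0 / (1.0 + np.exp(gain * midpoint))
            hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - midpoint)))
            return (255.0 * (boosted - lo) / (hi - lo)).astype(np.uint8)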

  11. Teaching Resources

    Science.gov Websites

    Navigation links and downloads for physics teaching resources, including the "Why Physics" poster available as normal- and high-resolution JPEG files and in a Spanish version, and the FED newsletter article "Recruiting Physics Students in High School".

  12. Visually Lossless JPEG 2000 for Remote Image Browsing

    PubMed Central

    Oh, Han; Bilgin, Ali; Marcellin, Michael

    2017-01-01

    Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112

  13. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.

  14. Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.

    PubMed

    Punys, Vytenis; Maknickas, Ramunas

    2011-01-01

    Large virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification based on image analysis might be faster if performed on the compressed data (approximately 20 times smaller than the original amount), which represent the coefficients of the wavelet transform. An analysis of possible edge detection without the inverse wavelet transform is presented in the paper. Two edge detection methods, suitable for the JPEG2000 bi-orthogonal wavelets, are proposed. The methods are adjusted according to the calculated parameters of a sigmoid edge model. The results of the model analysis indicate which method is more suitable for a given bi-orthogonal wavelet.

  15. Overview of the JPEG XS objective evaluation procedures

    NASA Astrophysics Data System (ADS)

    Willème, Alexandre; Richter, Thomas; Rosewarne, Chris; Macq, Benoit

    2017-09-01

    JPEG XS is a standardization activity conducted by the Joint Photographic Experts Group (JPEG), formally known as the ISO/IEC SC29 WG1 group, that aims at standardizing a low-latency, lightweight and visually lossless video compression scheme. This codec is intended to be used in applications where image sequences would otherwise be transmitted or stored in uncompressed form, such as in live production (through SDI or IP transport), display links, or frame buffers. Support for compression ratios ranging from 2:1 to 6:1 allows significant bandwidth and power reduction for signal propagation. This paper describes the objective quality assessment procedures conducted as part of the JPEG XS standardization activity. First, it discusses the objective part of the experiments that led to the technology selection during the 73rd WG1 meeting in late 2016. This assessment consists of PSNR measurements after single and multiple compression-decompression cycles at various compression ratios. After this assessment phase, two proposals among the six responses to the CfP were selected and merged to form the first JPEG XS test model (XSM). The paper then describes the core experiments (CEs) conducted so far on the XSM. These experiments are intended to evaluate its performance in more challenging scenarios, such as insertion of picture overlays and robustness to frame editing, to assess the impact of the different algorithmic choices, and to measure the XSM performance using the HDR-VDP metric.

  16. Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.

    PubMed

    Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward

    2006-08-01

    Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
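
    As a generic illustration of rate-distortion-optimal allocation across slices, the sketch below bisects a Lagrangian multiplier so that the summed slice rates meet a budget; it is a textbook equal-slope allocation under the assumption that each slice's R-D curve is given as sorted operating points, not the specific PCRD machinery or mixed-model procedure of the paper.

        import numpy as np

        def allocate_bits(rd_curves, total_rate, iters=50):
            """Lagrangian (equal-slope) bit allocation across slices.
            rd_curves: list of (rates, distortions) pairs, one per slice,
            with rates sorted in increasing order. Returns the chosen
            operating-point index per slice and the total rate used."""
            def pick(lam):
                idx = [int(np.argmin(np.asarray(d) + lam * np.asarray(r)))
                       for r, d in rd_curves]
                rate = sum(rd_curves[i][0][k] for i, k in enumerate(idx))
                return idx, rate

            lo, hi = 0.0, 1e9                 # assumes hi is large enough to meet the budget
            for _ in range(iters):
                lam = 0.5 * (lo + hi)
                _, rate = pick(lam)
                if rate > total_rate:
                    lo = lam                  # spending too many bits: penalise rate more
                else:
                    hi = lam
            return pick(hi)                   # hi always satisfies the budget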

  17. Applications of the JPEG standard in a medical environment

    NASA Astrophysics Data System (ADS)

    Wittenberg, Ulrich

    1993-10-01

    JPEG is a very versatile image coding and compression standard for single images. Medical images make higher demands on image quality and precision than the usual 'pretty pictures'. In this paper, the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons, the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12-bit precision allowed for the DCT modes; the performance of the spatial predictors is investigated. From the user's point of view, the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective time one has to wait for it, and therefore also reduce the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care, because repeated lossy coding and decoding leads to a degradation of the image quality; the amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as the creation date or details of the imaging modality. It will therefore be an embedded coding format within standards like TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to match the requirements of the medical community.

  18. Improved JPEG anti-forensics with better image visual quality and forensic undetectability.

    PubMed

    Singh, Gurinder; Singh, Kulbir

    2017-08-01

    There is an immediate need to validate the authenticity of digital images due to the availability of powerful image processing tools that can easily manipulate image information without leaving traces. Digital image forensics most often employs tampering detectors based on JPEG compression. Therefore, to evaluate the competency of JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain can be reduced significantly by applying the proposed de-noising operation. Two types of denoising algorithms are proposed: one is based on the constrained minimization of the total variation of energy, and the other on a normalized weighting function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, but at a high computational cost. Copyright © 2017 Elsevier B.V. All rights reserved.
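As a rough illustration of the total-variation idea underlying the deblocking step, the sketch below performs plain gradient-descent TV smoothing on a grayscale image scaled to [0, 1]. It is not the authors' constrained formulation or their normalized weighting variant; the weight, step size, and iteration count are illustrative assumptions.

```python
import numpy as np


def tv_smooth(img: np.ndarray, weight: float = 0.1, step: float = 0.2,
              iterations: int = 100) -> np.ndarray:
    """Crude gradient descent on ||u - f||^2 / 2 + weight * TV(u), with f = img / 255."""
    f = img.astype(np.float64) / 255.0
    u = f.copy()
    for _ in range(iterations):
        # Forward differences (discrete gradient), replicated at the borders.
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        # Backward-difference divergence of the normalized gradient field.
        div = (np.diff(gx / norm, axis=1, prepend=(gx / norm)[:, :1]) +
               np.diff(gy / norm, axis=0, prepend=(gy / norm)[:, :1]))
        u += step * (weight * div - (u - f))
    return np.clip(u * 255.0, 0, 255).astype(np.uint8)
```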

  19. Digital cinema system using JPEG2000 movie of 8-million pixel resolution

    NASA Astrophysics Data System (ADS)

    Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu

    2003-05-01

    We have developed a prototype digital cinema system that can store, transmit, and display extra-high-quality movies of 8-million-pixel resolution using the JPEG2000 coding algorithm. The image quality is 4 times better than HDTV in resolution, enabling conventional films to be replaced with digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. Coded streams of 300-500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder performs real-time decompression at 24/48 frames per second using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses three 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens for a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker while preserving compatibility with cinema movies of 24 frames per second.

  20. Switching theory-based steganographic system for JPEG images

    NASA Astrophysics Data System (ADS)

    Cherukuri, Ravindranath C.; Agaian, Sos S.

    2007-04-01

    Cellular communications constitute a significant portion of the global telecommunications market. Therefore, the need for secure communication over a mobile platform has increased exponentially. Steganography is the art of hiding critical data in an innocuous signal, which provides an answer to this need. JPEG is one of the most commonly used formats for storing and transmitting images on the web, and pictures captured using mobile cameras are mostly in JPEG format. In this article, we introduce a switching-theory-based steganographic system for JPEG images which is applicable to mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a subset of these coefficients, but prove ineffective when employed over all of them. We therefore propose an approach that treats each set of AC coefficients with a different framework, thus enhancing overall performance. The proposed system offers high capacity and embedding efficiency simultaneously while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining a high embedding efficiency and preserving the statistics of the JPEG image after hiding information.

  1. A Powerful Twin Arrives

    NASA Astrophysics Data System (ADS)

    1999-11-01

    First Images from FORS2 at VLT KUEYEN on Paranal The first, major astronomical instrument to be installed at the ESO Very Large Telescope (VLT) was FORS1 ( FO cal R educer and S pectrograph) in September 1998. Immediately after being attached to the Cassegrain focus of the first 8.2-m Unit Telescope, ANTU , it produced a series of spectacular images, cf. ESO PR 14/98. Many important observations have since been made with this outstanding facility. Now FORS2 , its powerful twin, has been installed at the second VLT Unit Telescope, KUEYEN . It is the fourth major instrument at the VLT after FORS1 , ISAAC and UVES.. The FORS2 Commissioning Team that is busy installing and testing this large and complex instrument reports that "First Light" was successfully achieved already on October 29, 1999, only two days after FORS2 was first mounted at the Cassegrain focus. Since then, various observation modes have been carefully tested, including normal and high-resolution imaging, echelle and multi-object spectroscopy, as well as fast photometry with millisecond time resolution. A number of fine images were obtained during this work, some of which are made available with the present Press Release. The FORS instruments ESO PR Photo 40a/99 ESO PR Photo 40a/99 [Preview - JPEG: 400 x 345 pix - 203k] [Normal - JPEG: 800 x 689 pix - 563kb] [Full-Res - JPEG: 1280 x 1103 pix - 666kb] Caption to PR Photo 40a/99: This digital photo shows the twin instruments, FORS2 at KUEYEN (in the foreground) and FORS1 at ANTU, seen in the background through the open ventilation doors in the two telescope enclosures. Although they look alike, the two instruments have specific functions, as described in the text. FORS1 and FORS2 are the products of one of the most thorough and advanced technological studies ever made of a ground-based astronomical instrument. They have been specifically designed to investigate the faintest and most remote objects in the universe. They are "multi-mode instruments" that may be used in several different observation modes. FORS2 is largely identical to FORS1 , but there are a number of important differences. For example, it contains a Mask Exchange Unit (MXU) for laser-cut star-plates [1] that may be inserted at the focus, allowing a large number of spectra of different objects, in practice up to about 70, to be taken simultaneously. Highly sophisticated software assigns slits to individual objects in an optimal way, ensuring a great degree of observing efficiency. Instead of the polarimetry optics found in FORS1 , FORS2 has new grisms that allow the use of higher spectral resolutions. The FORS project was carried out under ESO contract by a consortium of three German astronomical institutes, the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. The participating institutes have invested a total of about 180 man-years of work in this unique programme. The photos below demonstrate some of the impressive possibilities with this new instrument. They are based on observations with the FORS2 standard resolution collimator (field size 6.8 x 6.8 armin = 2048 x 2048 pixels; 1 pixel = 0.20 arcsec). In addition, observations of the Crab pulsar demonstrate a new observing mode, high-speed photometry. 
Protostar HH-34 in Orion ESO PR Photo 40b/99 ESO PR Photo 40b/99 [Preview - JPEG: 400 x 444 pix - 220kb] [Normal - JPEG: 800 x 887 pix - 806kb] [Full-Res - JPEG: 2000 x 2217 pix - 3.6Mb] The Area around HH-34 in Orion ESO PR Photo 40c/99 ESO PR Photo 40c/99 [Preview - JPEG: 400 x 494 pix - 262kb] [Full-Res - JPEG: 802 x 991 pix - 760 kb] The HH-34 Superjet in Orion (centre) PR Photo 40b/99 shows a three-colour composite of the young object Herbig-Haro 34 (HH-34) , now in the protostar stage of evolution. It is based on CCD frames obtained with the FORS2 instrument in imaging mode, on November 2 and 6, 1999. This object has a remarkable, very complicated appearance that includes two opposite jets that ram into the surrounding interstellar matter. This structure is produced by a machine-gun-like blast of "bullets" of dense gas ejected from the star at high velocities (approaching 250 km/sec). This seems to indicate that the star experiences episodic "outbursts" when large chunks of material fall onto it from a surrounding disk. HH-34 is located at a distance of approx. 1,500 light-years, near the famous Orion Nebula , one of the most productive star birth regions. Note also the enigmatic "waterfall" to the upper left, a feature that is still unexplained. PR Photo 40c/99 is an enlargement of a smaller area around the central object. Technical information : Photo 40b/99 is based on a composite of three images taken through three different filters: B (wavelength 429 nm; Full-Width-Half-Maximum (FWHM) 88 nm; exposure time 10 min; here rendered as blue), H-alpha (centered on the hydrogen emission line at wavelength 656 nm; FWHM 6 nm; 30 min; green) and S II (centrered at the emission lines of inonized sulphur at wavelength 673 nm; FWHM 6 nm; 30 min; red) during a period of 0.8 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. N 70 Nebula in the Large Magellanic Cloud ESO PR Photo 40d/99 ESO PR Photo 40d/99 [Preview - JPEG: 400 x 444 pix - 360kb] [Normal - JPEG: 800 x 887 pix - 1.0Mb] [Full-Res - JPEG: 1997 x 2213 pix - 3.4Mb] The N 70 Nebula in the LMC ESO PR Photo 40e/99 ESO PR Photo 40e/99 [Preview - JPEG: 400 x 485 pix - 346kb] [Full-Res - JPEG: 986 x 1196 pix - 1.2Mb] The N70 Nebula in the LMC (detail) PR Photo 40d/99 shows a three-colour composite of the N 70 nebula. It is a "Super Bubble" in the Large Magellanic Cloud (LMC) , a satellite galaxy to the Milky Way system, located in the southern sky at a distance of about 160,000 light-years. This photo is based on CCD frames obtained with the FORS2 instrument in imaging mode in the morning of November 5, 1999. N 70 is a luminous bubble of interstellar gas, measuring about 300 light-years in diameter. It was created by winds from hot, massive stars and supernova explosions and the interior is filled with tenuous, hot expanding gas. An object like N70 provides astronomers with an excellent opportunity to explore the connection between the lifecycles of stars and the evolution of galaxies. Very massive stars profoundly affect their environment. They stir and mix the interstellar clouds of gas and dust, and they leave their mark in the compositions and locations of future generations of stars and star systems. PR Photo 40e/99 is an enlargement of a smaller area of this nebula. 
Technical information : Photos 40d/99 is based on a composite of three images taken through three different filters: B (429 nm; FWHM 88 nm; 3 min; here rendered as blue), V (554 nm; FWHM 111 nm; 3 min; green) and H-alpha (656 nm; FWHM 6 nm; 3 min; red) during a period of 1.0 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. The Crab Nebula in Taurus ESO PR Photo 40f/99 ESO PR Photo 40f/99 [Preview - JPEG: 400 x 446 pix - 262k] [Normal - JPEG: 800 x 892 pix - 839 kb] [Full-Res - JPEG: 2036 x 2269 pix - 3.6Mb] The Crab Nebula in Taurus ESO PR Photo 40g/99 ESO PR Photo 40g/99 [Preview - JPEG: 400 x 444 pix - 215kb] [Full-Res - JPEG: 817 x 907 pix - 485 kb] The Crab Nebula in Taurus (detail) PR Photo 40f/99 shows a three colour composite of the well-known Crab Nebula (also known as "Messier 1" ), as observed with the FORS2 instrument in imaging mode in the morning of November 10, 1999. It is the remnant of a supernova explosion at a distance of about 6,000 light-years, observed almost 1000 years ago, in the year 1054. It contains a neutron star near its center that spins 30 times per second around its axis (see below). PR Photo 40g/99 is an enlargement of a smaller area. More information on the Crab Nebula and its pulsar is available on the web, e.g. at a dedicated website for Messier objects. In this picture, the green light is predominantly produced by hydrogen emission from material ejected by the star that exploded. The blue light is predominantly emitted by very high-energy ("relativistic") electrons that spiral in a large-scale magnetic field (so-called syncrotron emission ). It is believed that these electrons are continuously accelerated and ejected by the rapidly spinning neutron star at the centre of the nebula and which is the remnant core of the exploded star. This pulsar has been identified with the lower/right of the two close stars near the geometric center of the nebula, immediately left of the small arc-like feature, best seen in PR Photo 40g/99 . Technical information : Photo 40f/99 is based on a composite of three images taken through three different optical filters: B (429 nm; FWHM 88 nm; 5 min; here rendered as blue), R (657 nm; FWHM 150 nm; 1 min; green) and S II (673 nm; FWHM 6 nm; 5 min; red) during periods of 0.65 arcsec (R, S II) and 0.80 (B) seeing, respectively. The field shown measures 6.8 x 6.8 arcmin and the images were recorded in frames of 2048 x 2048 pixels, each measuring 0.2 arcsec. The Full Resolution version shows the original pixels. North is up; East is left. The High Time Resolution mode (HIT) of FORS2 ESO PR Photo 40h/99 ESO PR Photo 40h/99 [Preview - JPEG: 400 x 304 pix - 90kb] [Normal - JPEG: 707 x 538 pix - 217kb] Time Sequence of the Pulsar in the Crab Nebula ESO PR Photo 40i/99 ESO PR Photo 40i/99 [Preview - JPEG: 400 x 324 pix - 42kb] [Normal - JPEG: 800 x 647 pix - 87kb] Lightcurve of the Pulsar in the Crab Nebula In combination with the large light collecting power of the VLT Unit Telescopes, the high time resolution (25 nsec = 0.000000025 sec) of the ESO-developed FIERA CCD-detector controller opens a new observing window for celestial objects that undergo light intensity variations on very short time scales. 
A first implementation of this type of observing mode was tested with FORS2 during the first commissioning phase, by means of one of the most fascinating astronomical objects, the rapidly spinning neutron star in the Crab Nebula . It is also known as the Crab pulsar and is an exceedingly dense object that represents an extreme state of matter - it weighs as much as the Sun, but measures only about 30 km across. The result presented here was obtained in the so-called trailing mode , during which one of the rectangular openings of the Multi-Object Spectroscopy (MOS) assembly within FORS2 is placed in front of the lower end of the field. In this way, the entire surface of the CCD is covered, except the opening in which the object under investigation is positioned. By rotating this opening, some neighbouring objects (e.g. stars for alignment) may be observed simultaneously. As soon as the shutter is opened, the charges on the chip are progressively shifted upwards, one pixel at a time, until those first collected in the bottom row behind the opening have reached the top row. Then the entire CCD is read out and the digital data with the full image is stored in the computer. In this way, successive images (or spectra) of the object are recorded in the same frame, displaying the intensity variation with time during the exposure. For this observation, the total exposure lasted 2.5 seconds. During this time interval the image of the pulsar (and those of some neighbouring stars) were shifted 2048 times over the 2048 rows of the CCD. Each individual exposure therefore lasted exactly 1.2 msec (0.0012 sec), corresponding to a nominal time-resolution of 2.4 msec (2 pixels). Faster or slower time resolutions are possible by increasing or decreasing the shift and read-out rate [2]. In ESO PR Photo 40h/99 , the continuous lines in the top and bottom half are produced by normal stars of constant brightness, while the series of dots represents the individual pulses of the Crab pulsar, one every 33 milliseconds (i.e. the neutron star rotates around its axis 30 times per second). It is also obvious that these dots are alternatively brighter and fainter: they mirror the double-peaked profile of the light pulses, as shown in ESO PR Photo 40i/99 . In this diagramme, the time increases along the abscissa axis (1 pixel = 1.2 msec) and the momentary intensity (uncalibrated) is along the ordinate axis. One full revolution of the neutron star corresponds to the distance from one high peak to the next, and the diagramme therefore covers six consecutive revolutions (about 200 milliseconds). Following thorough testing, this new observing mode will allow to investigate the brightness variations of this and many other objects in great detail in order to gain new and fundamental insights in the physical mechanisms that produce the radiation pulses. In addition, it is foreseen to do high time resolution spectroscopy of rapidly varying phenomena. Pushing it to the limits with an 8.2-m telescope like KUEYEN will be a real challenge to the observers that will most certainly lead to great and exciting research projects in various fields of modern astrophysics. Technical information : The frame shown in Photo 40h/99 was obtained during a total exposure time of 2.5 sec without any optical filtre. During this time, the charges on the CCD were shifted over 2048 rows; each row was therefore exposed during 1.2 msec. 
The bright continuous line comes from the star next to the pulsar; the orientation was such that the "observation slit" was placed over two neighbouring stars. Preliminary data reduction: 11 pixels were added across the pulsar image to increase the signal-to-noise ratio and the background light from the Crab Nebula was subtracted for the same reason. Division by a brighter star (also background-subtracted, but not shown in the image) helped to reduce the influence of the Earth's atmosphere. Notes [1] The masks are produced by the Mask Manufacturing Unit (MMU) built by the VIRMOS Consortium for the VIMOS and NIRMOS instruments that will be installed at the VLT MELIPAL and YEPUN telescopes, respectively. [2] The time resolution achieved during the present test was limited by the maximum charge transfer rate of this particular CCD chip; in the future, FORS2 may be equipped with a new chip with a rate that is up to 20 times faster. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.

  2. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes; this feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  3. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    Signal to Noise Ratio SPICE Simulation Program with Integrated Circuit Emphasis TIFF Tagged Image File Format USC University of Southern California xvii...sources can create errors in digital circuits. These effects can be simulated using Simulation Program with Integrated Circuit Emphasis ( SPICE ) or...compute summary statistics. 4.1 Circuit Simulations Noisy analog circuits can be simulated in SPICE or Cadence SpectreTM software via noisy voltage

  4. 77 FR 59692 - 2014 Diversity Immigrant Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-28

    ... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...

  5. History of the Universe Poster

    Science.gov Websites

    History of the Universe Poster You are free to use these images if you give credit to: Particle Data Group at Lawrence Berkeley National Lab. New Version (2014) History of the Universe Poster Download: JPEG version PDF version Old Version (2013) History of the Universe Poster Download: JPEG version

  6. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    NASA Astrophysics Data System (ADS)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG files are lossy and highly compressible, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Bitmap (BMP) images are often preferred in image processing over other formats because a BMP file contains all the image information in a simple layout. Therefore, in order to investigate the corruption point in a JPEG, the file is first converted into BMP format. Nevertheless, many factors can corrupt a BMP image, such as changes to the image size that make the file non-viewable. In this paper, experiments show that the size field of a BMP file influences the viewability of the image itself under three conditions: deletion, replacement, and insertion. From the experiments, we learnt that by correcting the file size the image can be made at least partially viewable again, after which it can be investigated further to identify the corruption point.
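The size-field correction mentioned in the experiment can be illustrated with a few lines of Python; the sketch below checks the four-byte little-endian size field at offset 2 of the BITMAPFILEHEADER against the size on disk and rewrites it if they disagree. The file name is a placeholder, and real corrupted files may of course need further repairs.

```python
import os
import struct


def fix_bmp_size_field(path: str) -> None:
    """Rewrite the BMP header's file-size field so it matches the size on disk."""
    actual_size = os.path.getsize(path)
    with open(path, "r+b") as f:
        header = f.read(6)
        if header[:2] != b"BM":
            raise ValueError("not a BMP file")
        (recorded_size,) = struct.unpack("<I", header[2:6])
        if recorded_size != actual_size:
            f.seek(2)                                   # bytes 2-5 hold the file size
            f.write(struct.pack("<I", actual_size))
            print(f"corrected size field: {recorded_size} -> {actual_size}")
        else:
            print("size field already consistent")


if __name__ == "__main__":
    fix_bmp_size_field("damaged.bmp")   # placeholder path
```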

  7. An FPGA-Based People Detection System

    NASA Astrophysics Data System (ADS)

    Nair, Vinod; Laprise, Pierre-Olivier; Clark, James J.

    2005-12-01

    This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about [InlineEquation not available: see fulltext.] frames per second on a Virtex-II 2V1000 using a MicroBlaze processor running at [InlineEquation not available: see fulltext.], communicating with dedicated hardware over FSL links.
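The partial inverse DCT idea can be sketched as follows: only the top-left (low-frequency) coefficients of each 8x8 block are inverse-transformed, yielding a reduced-resolution block that is sufficient for detection. This is a software illustration under the usual orthonormal DCT convention, not the paper's FPGA pipeline; the rescaling keeps the block's mean intensity approximately correct.

```python
import numpy as np
from scipy.fftpack import idct


def partial_idct_8x8(coeffs: np.ndarray, keep: int = 2) -> np.ndarray:
    """Inverse-transform only the top-left `keep` x `keep` coefficients of an 8x8
    DCT block, producing a `keep` x `keep` low-resolution approximation."""
    sub = coeffs[:keep, :keep] * (keep / 8.0)   # compensate the orthonormal DC gain
    return idct(idct(sub, axis=0, norm="ortho"), axis=1, norm="ortho")


if __name__ == "__main__":
    block = np.zeros((8, 8))
    block[0, 0] = 1024.0                         # flat block with mean intensity 128
    print(partial_idct_8x8(block, keep=2))       # -> four values close to 128
```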

  8. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    NASA Astrophysics Data System (ADS)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly recognized in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme is an alternative DWT implementation with lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper a high-throughput, two-channel DWT architecture for both JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process incoming samples simultaneously with a minimum memory requirement for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirements make this architecture a proper choice for real-time applications such as digital cinema.
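One level of the reversible 5/3 lifting filter bank that such architectures implement can be written in a few lines; the sketch below is a plain 1-D software version for an even-length signal with symmetric boundary extension, not the two-channel pipelined hardware design described in the paper.

```python
import numpy as np


def lift_53_forward(x: np.ndarray):
    """One level of the JPEG 2000 reversible 5/3 lifting transform (1-D, even length)."""
    x = x.astype(np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: detail = odd sample minus the mean of its even neighbours.
    even_right = np.append(even[1:], even[-1])        # symmetric extension on the right
    d = odd - ((even + even_right) >> 1)
    # Update step: approximation = even sample plus a correction from nearby details.
    d_left = np.insert(d[:-1], 0, d[0])               # symmetric extension on the left
    s = even + ((d_left + d + 2) >> 2)
    return s, d


if __name__ == "__main__":
    s, d = lift_53_forward(np.array([10, 12, 14, 13, 11, 9, 8, 8]))
    print("low-pass :", s)
    print("high-pass:", d)
```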

  9. Embedding intensity image into a binary hologram with strong noise resistant capability

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-11-01

    A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by an error-diffusion method or a bit-truncation coding method. However, the fidelity of the watermark image retrieved from a binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the quality of the intensity image retrieved with our proposed method is superior to that of the state-of-the-art work reported.

  10. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

    Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
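A simplified version of the recompression curve described above can be sketched as follows: the suspect image is re-saved at a range of JPEG quality factors and, for each, the fraction of 4x4 blocks that change at all is recorded; a pronounced local minimum hints at a prior compression near that quality. The simple block-equality statistic stands in for the paper's tetrolet-covering index, and Pillow's JPEG codec is an assumption.

```python
import io

import numpy as np
from PIL import Image


def changed_block_fraction(ref: np.ndarray, test: np.ndarray, block: int = 4) -> float:
    """Fraction of block x block tiles in which at least one pixel differs."""
    h = (ref.shape[0] // block) * block
    w = (ref.shape[1] // block) * block
    diff = ref[:h, :w] != test[:h, :w]
    tiles = diff.reshape(h // block, block, w // block, block)
    return float(tiles.any(axis=(1, 3)).mean())


def p_curve(gray: np.ndarray, qualities=range(50, 101, 5)):
    """Recompress a grayscale uint8 image at several qualities and collect the statistic."""
    img = Image.fromarray(gray)
    curve = []
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        recompressed = np.asarray(Image.open(buf).convert("L"))
        curve.append((q, changed_block_fraction(gray, recompressed)))
    return curve   # inspect for a local minimum over quality
```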

  11. Influence of image compression on the interpretation of spectral-domain optical coherence tomography in exudative age-related macular degeneration

    PubMed Central

    Kim, J H; Kang, S W; Kim, J-r; Chang, Y S

    2014-01-01

    Purpose To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. The horizontal OCT scans centered at the center of the fovea were conducted using spectral-domain OCT. The images were exported to Tag Image File Format (TIFF) and 100, 75, 50, 25 and 10% quality of Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio–scleral interface was not clearly identified in many images with a high degree of compression. In images with 25 and 10% quality of JPEG, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions In OCT images of eyes with AMD, 50% of the quality of the JPEG format would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012

  12. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC values and the decoded AC coefficients are combined in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.

  13. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion- optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.

  14. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung

    2013-10-15

    Purpose: To modify the previously proposed preprocessing technique for improving the compressibility of computed tomography (CT) images so that it covers the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed aiming only at chest CT images, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covered the body region or not. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compressions. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.

  15. Fragmentation Point Detection of JPEG Images at DHT Using Validator

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Deris, Mustafa Mat

    File carving is an important, practical technique for data recovery in digital forensics investigations and is particularly useful when filesystem metadata is unavailable or damaged. Research on the reassembly of JPEG files with RST markers, fragmented within the scan area, has been done before. However, fragmentation within the Define Huffman Table (DHT) segment is yet to be resolved. This paper analyzes fragmentation within the DHT area and lists all the fragmentation possibilities. Two main contributions are made. Firstly, three fragmentation points within the DHT area are listed. Secondly, several novel validators are proposed to detect these fragmentations. The results obtained from tests on manually fragmented JPEG files showed that all three fragmentation points within the DHT are successfully detected using the validators.
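The kind of structural consistency check such validators perform can be sketched against the baseline JPEG syntax: a DHT segment declares its own length, and every Huffman table inside it must account for exactly one class/identifier byte, sixteen code-length counts, and as many symbol bytes as those counts sum to. The sketch below flags a segment whose declared tables do not fit, which is one symptom of a fragmentation point; it is an illustration, not the paper's specific validators.

```python
def validate_dht(segment: bytes) -> bool:
    """`segment` is the DHT payload following the 0xFFC4 marker, starting with the
    two-byte big-endian length field. Returns True if the declared tables fit exactly."""
    if len(segment) < 2:
        return False
    declared = int.from_bytes(segment[:2], "big")
    if declared > len(segment):
        return False                      # truncated segment: possible fragmentation point
    pos = 2
    while pos < declared:
        if declared - pos < 17:
            return False                  # no room for the class/id byte plus 16 counts
        counts = segment[pos + 1:pos + 17]
        pos += 17 + sum(counts)           # skip this table's symbol values
    return pos == declared
```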

  16. Helioviewer.org: An Open-source Tool for Visualizing Solar Data

    NASA Astrophysics Data System (ADS)

    Hughitt, V. Keith; Ireland, J.; Schmiedel, P.; Dimitoglou, G.; Mueller, D.; Fleck, B.

    2009-05-01

    As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. Currently, Helioviewer enables users to browse the entire SOHO data archive, updated hourly, as well as feature/event data from eight different catalogs, including active region, flare, coronal mass ejection, and type II radio burst data. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc.), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Future functionality will include: support for additional data-sources including TRACE, SDO and STEREO, dynamic movie generation, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.

  17. Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo

    NASA Astrophysics Data System (ADS)

    Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj

    2007-09-01

    This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work is an extension of our previous comparative studies published at earlier SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performance at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000 with its wavelet technology tends to be the best performer on smooth spatial data; H.264/AVC High Profile with advanced spatial prediction modes tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], we consider neither scalability nor complexity in this study (JPEG2000 is operated in non-scalable, but optimal performance mode).

  18. On-demand rendering of an oblique slice through 3D volumetric data using JPEG2000 client-server framework

    NASA Astrophysics Data System (ADS)

    Joshi, Rajan L.

    2006-03-01

    In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.

  19. 75 FR 60846 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2012) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-01

    ... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...

  20. 78 FR 59743 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2015) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-27

    ... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...

  1. Building a Steganography Program Including How to Load, Process, and Save JPEG and PNG Files in Java

    ERIC Educational Resources Information Center

    Courtney, Mary F.; Stix, Allen

    2006-01-01

    Instructors teaching beginning programming classes are often interested in exercises that involve processing photographs (i.e., files stored as .jpeg). They may wish to offer activities such as color inversion, the color manipulation effects achieved with pixel thresholding, or steganography, all of which Stevenson et al. [4] assert are sought by…

  2. Application of reversible denoising and lifting steps with step skipping to color space transforms for improved lossless compression

    NASA Astrophysics Data System (ADS)

    Starosolski, Roman

    2016-07-01

    Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.

  3. Toward objective image quality metrics: the AIC Eval Program of the JPEG

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Larabi, Chaker

    2008-08-01

    Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach is demonstrated here on the recently proposed HD Photo format introduced by Microsoft and an SSIM-tuned version of it by one of the authors. We compare these two implementations with JPEG in two variations and with a visually and PSNR-optimal JPEG2000 implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.

  4. Interband coding extension of the new lossless JPEG standard

    NASA Astrophysics Data System (ADS)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

    Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain, even at the cost of great complexity, at least with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible with a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to its basic architecture, retaining its essential simplicity.

  5. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications to high-performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use a block-parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance, with a 26.3x speedup over the original CPU code.

  6. Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Dong, Jing; Tan, Tieniu

    With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region have different responses to JPEG compression: the tampered region has stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium-, and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is involved to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.

  7. Helioviewer.org: Browsing Very Large Image Archives Online Using JPEG 2000

    NASA Astrophysics Data System (ADS)

    Hughitt, V. K.; Ireland, J.; Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Schmidt, L.; Wamsler, B.; Beck, J.; Alexanderian, A.; Fleck, B.

    2009-12-01

    As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Recent efforts have resulted in increased performance, dynamic movie generation, and improved support for mobile web browsers. Future functionality will include: support for additional data-sources including RHESSI, SDO, STEREO, and TRACE, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.

  8. The Helioviewer Project: Solar Data Visualization and Exploration

    NASA Astrophysics Data System (ADS)

    Hughitt, V. Keith; Ireland, J.; Müller, D.; García Ortiz, J.; Dimitoglou, G.; Fleck, B.

    2011-05-01

    SDO has only been operating a little over a year, but in that short time it has already transmitted hundreds of terabytes of data, making it impossible for data providers to maintain a complete archive of data online. By storing an extremely efficiently compressed subset of the data, however, the Helioviewer project has been able to maintain a continuous record of high-quality SDO images starting from soon after the commissioning phase. The Helioviewer project was not designed to deal with SDO alone, however, and continues to add support for new types of data, the most recent of which are STEREO EUVI and COR1/COR2 images. In addition to adding support for new types of data, improvements have been made to both the server-side and client-side products that are part of the project. A new open-source JPEG2000 (JPIP) streaming server has been developed offering a vastly more flexible and reliable backend for the Java/OpenGL application JHelioviewer. Meanwhile the web front-end, Helioviewer.org, has also made great strides both in improving reliability, and also in adding new features such as the ability to create and share movies on YouTube. Helioviewer users are creating nearly two thousand movies a day from the over six million images that are available to them, and that number continues to grow each day. We provide an overview of recent progress with the various Helioviewer Project components and discuss plans for future development.

  9. Adaptive intercolor error prediction coder for lossless color (RGB) picture compression

    NASA Astrophysics Data System (ADS)

    Mann, Y.; Peretz, Y.; Mitchell, Harvey B.

    2001-09-01

    Most of the current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes in an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
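A much-simplified illustration of inter-colour residual prediction in this spirit is given below: each plane is first spatially predicted, and the residuals of the R and B planes are then further predicted by the co-located residual of the G plane before entropy coding. The left-neighbour predictor and the plane ordering are assumptions for illustration, not the authors' exact IEP scheme.

```python
import numpy as np


def left_predict_residual(plane: np.ndarray) -> np.ndarray:
    """Residual of a simple left-neighbour spatial predictor (first column kept as-is)."""
    residual = plane.astype(np.int32).copy()
    residual[:, 1:] -= plane[:, :-1].astype(np.int32)
    return residual


def intercolor_residuals(rgb: np.ndarray):
    """Residuals passed to the entropy coder for an (H, W, 3) uint8 RGB image."""
    r_res = left_predict_residual(rgb[:, :, 0])
    g_res = left_predict_residual(rgb[:, :, 1])
    b_res = left_predict_residual(rgb[:, :, 2])
    # G is coded directly; R and B residuals are decorrelated against G's residual.
    return r_res - g_res, g_res, b_res - g_res
```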

  10. Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications

    DTIC Science & Technology

    2013-06-01

    Standards (http://www.iso.org) JIS Japanese Industrial Standard JPEG Joint Photographic Experts Group (digital image format; http://www.jpeg.org) LED...Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry. See Appendix A for full details. Reed-Solomon Error...eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thusly, digital cinema cameras are more suitable

  11. JPEG 2000-based compression of fringe patterns for digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Blinder, David; Bruylants, Tim; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter

    2014-12-01

    With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurements, vibration analysis, data encoding, and certification. Therefore, designing an efficient data representation technology is of particular importance. Off-axis holograms have very different signal properties with respect to regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f² power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjøntegaard delta-peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range and bit-rate reductions of up to 1.6 bpp for lossless compression.

  12. Mutual information-based analysis of JPEG2000 contexts.

    PubMed

    Liu, Zhen; Karam, Lina J

    2005-04-01

    Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
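
    The central quantity in this analysis, the mutual information between the context label and the binary symbol it conditions, can be estimated from a co-occurrence count table as in the sketch below (the two-context example counts are invented for illustration); merging contexts whose conditional symbol distributions differ can only lower the value.

```python
import numpy as np

def mutual_information(counts):
    """Mutual information (in bits) between context index (rows) and
    coded binary symbol (columns), estimated from a count table."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)   # context probabilities
    py = p.sum(axis=0, keepdims=True)   # symbol probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    return terms.sum()

# Two contexts with different conditional distributions...
counts = np.array([[90.0, 10.0],    # context 0: mostly symbol 0
                   [20.0, 80.0]])   # context 1: mostly symbol 1
print(mutual_information(counts))

# ...lose information when merged into a single context.
merged = counts.sum(axis=0, keepdims=True)
print(mutual_information(merged))   # always 0 for a single context
```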

  13. A new JPEG-based steganographic algorithm for mobile devices

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.

    2006-05-01

    Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is becoming a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden by using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates an improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity which is comparable to F5 for certain cover images.
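
    The general mechanism of hiding data in quantized DCT coefficients can be illustrated with the minimal sketch below; it applies a plain least-significant-bit substitution to the nonzero AC coefficients of a single 8 x 8 block with a flat quantization step, which is a generic stand-in rather than the switching embedding technique proposed in the article.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(block): return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bits(block, bits, q=16):
    """Hide bits in the LSBs of nonzero quantized AC coefficients of one 8x8 block.
    A generic illustration only; real JPEG steganography works directly on the
    cover's own quantized coefficients and selects them far more carefully."""
    coeff = np.round(dct2(block.astype(float)) / q).astype(int)
    it = iter(bits)
    for idx in zip(*np.nonzero(coeff)):
        if idx == (0, 0):
            continue                        # skip the DC coefficient
        bit = next(it, None)
        if bit is None:
            break
        coeff[idx] = (coeff[idx] & ~1) | bit
    return idct2(coeff * q)                 # stego block back in the pixel domain

block = np.random.randint(0, 256, (8, 8))
stego = embed_bits(block, [1, 0, 1, 1])
```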

  14. Vulnerability Analysis of HD Photo Image Viewer Applications

    DTIC Science & Technology

    2007-09-01

    the successor to the ubiquitous JPEG image format, as well as the eventual de facto standard in the digital photography market. With massive efforts...renamed to HD Photo in November of 2006, is being touted as the successor to the ubiquitous JPEG image format, as well as the eventual de facto standard...associated state-of-the-art compression algorithm “specifically designed [for] all types of continuous tone photographic” images [HDPhotoFeatureSpec

  15. A new security solution to JPEG using hyper-chaotic system and modified zigzag scan coding

    NASA Astrophysics Data System (ADS)

    Ji, Xiao-yong; Bai, Sen; Guo, Yu; Guo, Hui

    2015-05-01

    Though JPEG is an excellent image compression standard, it does not provide any security. A security solution for JPEG was therefore proposed by Zhang et al. (2014), but that scheme has some flaws, and in this paper we propose a new scheme based on a discrete hyper-chaotic system and modified zigzag scan coding. By shuffling the identifiers of the zigzag-scan-encoded sequence with a hyper-chaotic sequence, and by encrypting only those coefficients in the zigzag-scan-encoded domain that have little relationship with the correlation of the plain image, we achieve high compression performance and robust security simultaneously. Meanwhile, we present and analyze the flaws in Zhang's scheme through theoretical analysis and experimental verification, and compare our scheme with Zhang's. Simulation results verify that our method performs better in both security and efficiency.

  16. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
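
    A highly simplified sketch of the multiplier optimization is given below; the multiplicative update rule and the toy model in which perceptual error scales linearly with the block multiplier are assumptions made purely for illustration, not the procedure used in the paper.

```python
import numpy as np

def flatten_perceptual_error(block_errors, steps=50, lr=0.2):
    """Choose per-block quantization multipliers so that the (toy) perceptual
    error, modeled here as error = base_error * multiplier, becomes nearly flat.
    block_errors: per-block perceptual error measured at multiplier 1.0."""
    m = np.ones_like(block_errors, dtype=float)
    for _ in range(steps):
        err = block_errors * m             # toy model of per-block perceptual error
        target = err.mean()
        m *= (target / err) ** lr          # raise the multiplier where error is below average
        m = np.clip(m, 0.5, 4.0)           # keep multipliers in a sane range
    return m

# Masking makes some blocks more forgiving (low base error), so they can be coded coarsely.
errors = np.array([0.2, 1.0, 0.5, 2.0])
print(flatten_perceptual_error(errors))
```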

  17. Compression of electromyographic signals using image compression techniques.

    PubMed

    Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira

    2008-01-01

    Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.
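
    The basic trick of handing a one-dimensional signal to an image codec can be sketched as follows: fixed-length segments of the record are stacked as rows of a 2-D array, compressed with a JPEG 2000 encoder, and the reconstruction error is reported as the percentage root-mean-square difference (PRD). The synthetic signal, the 8-bit scaling, and the use of Pillow's JPEG 2000 writer (which requires an OpenJPEG-enabled build) are assumptions for illustration and differ from the paper's setup.

```python
import numpy as np
from PIL import Image   # needs Pillow built with OpenJPEG for JPEG 2000 support

def prd(original, reconstructed):
    """Percentage root-mean-square difference, the distortion measure used for EMG."""
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum(original ** 2)
    return 100.0 * np.sqrt(num / den)

# Fake EMG record; a real one would be loaded from an acquisition file.
emg = np.random.randn(512 * 512).astype(np.float32)

# Step 1: scale to 8 bits and stack fixed-length segments as image rows.
lo, hi = emg.min(), emg.max()
img = np.round(255 * (emg - lo) / (hi - lo)).astype(np.uint8).reshape(512, 512)

# Step 2: compress with a JPEG 2000 encoder (here Pillow; a 20:1 rate is an example).
Image.fromarray(img).save("emg.jp2", quality_mode="rates", quality_layers=[20])

# Step 3: decode, undo the scaling, and measure the distortion.
rec = np.asarray(Image.open("emg.jp2"), dtype=np.float32) / 255 * (hi - lo) + lo
print("PRD (%):", prd(emg.reshape(512, 512), rec))
```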

  18. Lossless Compression of JPEG Coded Photo Collections.

    PubMed

    Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng

    2016-04-06

    The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
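
    One way to picture the first stage, organizing a collection into a pseudo video, is the hedged sketch below: each image is summarized by a small feature vector, and a greedy nearest-neighbour chain orders the images so that consecutive ones are similar. The grayscale-histogram feature and the greedy chain are simplifications; the paper minimizes a global prediction cost rather than building the order greedily.

```python
import numpy as np

def histogram_feature(img, bins=32):
    """Tiny global descriptor (grayscale histogram), used only to judge which
    images are likely to predict each other well."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    return h

def greedy_order(features):
    """Greedy nearest-neighbour chain: a cheap approximation of ordering the
    collection so that consecutive images are similar (a pseudo video)."""
    remaining = set(range(len(features)))
    order = [remaining.pop()]
    while remaining:
        last = features[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(features[i] - last))
        remaining.remove(nxt)
        order.append(nxt)
    return order

images = [np.random.randint(0, 256, (64, 64)) for _ in range(5)]
print(greedy_order([histogram_feature(im) for im in images]))
```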

  19. High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian

    2017-09-01

    In its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, its authors describe the design principles of the codec, give a high-level overview of the encoder and decoder chain, and provide evaluation results on the test corpus selected by the JPEG committee.

  20. Hunting the Southern Skies with SIMBA

    NASA Astrophysics Data System (ADS)

    2001-08-01

    First Images from the New "Millimetre Camera" on SEST at La Silla Summary A new instrument, SIMBA ("SEST IMaging Bolometer Array") , has been installed at the Swedish-ESO Submillimetre Telescope (SEST) at the ESO La Silla Observatory in July 2001. It records astronomical images at a wavelength of 1.2 mm and is able to quickly map large sky areas. In order to achieve the best possible sensitivity, SIMBA is cooled to only 0.3 deg above the absolute zero on the temperature scale. SIMBA is the first imaging millimetre instrument in the southern hemisphere . Radiation at this wavelength is mostly emitted from cold dust and ionized gas in a variety of objects in the Universe. Among other, SIMBA now opens exciting prospects for in-depth studies of the "hidden" sites of star formation , deep inside dense interstellar nebulae. While such clouds are impenetrable to optical light, they are transparent to millimetre radiation and SIMBA can therefore observe the associated phenomena, in particular the dust around nascent stars . This sophisticated instrument can also search for disks of cold dust around nearby stars in which planets are being formed or which may be left-overs of this basic process. Equally important, SIMBA may observe extremely distant galaxies in the early universe , recording them while they were still in the formation stage. Various SIMBA images have been obtained during the first tests of the new instrument. The first observations confirm the great promise for unique astronomical studies of the southern sky in the millimetre wavelength region. These results also pave the way towards the Atacama Large Millimeter Array (ALMA) , the giant, joint research project that is now under study in Europe, the USA and Japan. PR Photo 28a/01 : SIMBA image centered on the infrared source IRAS 17175-3544 PR Photo 28b/01 : SIMBA image centered on the infrared source IRAS 18434-0242 PR Photo 28c/01 : SIMBA image centered on the infrared source IRAS 17271-3439 PR Photo 28d/01 : View of the SIMBA instrument First observations with SIMBA SIMBA ("SEST IMaging Bolometer Array") was built and installed at the Swedish-ESO Submillimetre Telescope (SEST) at La Silla (Chile) within an international collaboration between the University of Bochum and the Max Planck Institute for Radio Astronomy in Germany, the Swedish National Facility for Radio Astronomy and ESO . The SIMBA ("Lion" in Swahili) instrument detects radiation at a wavelength of 1.2 mm . It has 37 "horns" and acts like a camera with 37 picture elements (pixels). By changing the pointing direction of the telescope, relatively large sky fields can be imaged. As the first and only imaging millimetre instrument in the southern hemisphere , SIMBA now looks up towards rich and virgin hunting grounds in the sky. Observations at millimetre wavelengths are particularly useful for studies of star formation , deep inside dense interstellar clouds that are impenetrable to optical light. Other objects for which SIMBA is especially suited include planet-forming disks of cold dust around nearby stars and extremely distant galaxies in the early universe , still in the stage of formation. During the first observations, SIMBA was used to study the gas and dust content of star-forming regions in our own Milky Way Galaxy, as well as in the Magellanic Clouds and more distant galaxies. It was also used to record emission from planetary nebulae , clouds of matter ejected by dying stars. 
Moreover, attempts were made to detect distant galaxies and quasars radiating at mm-wavelengths and located in two well-studied sky fields, the "Hubble Deep Field South" and the "Chandra Deep Field" [1]. Observations with SEST and SIMBA also serve to identify objects that can be observed at higher resolution and at shorter wavelengths with future southern submm telescopes and interferometers such as APEX (see MPG Press Release 07/01 of 6 July 2001) and ALMA. SIMBA images regions of high-mass star formation ESO PR Photo 28a/01 ESO PR Photo 28a/01 [Preview - JPEG: 400 x 568 pix - 61k] [Normal - JPEG: 800 x 1136 pix - 200k] Caption : This intensity-coded, false-colour SIMBA image is centered on the infrared source IRAS 17175-3544 and covers the well-known high-mass star formation complex NGC 6334 , at a distance of 5500 light-years. The southern bright source is an ultra-compact region of ionized hydrogen ("HII region") created by a star or several stars already formed. The northern bright source has not yet developed an HII region and may be a star or a cluster of stars that are presently forming. A remarkable, narrow, linear dust filament extends over the image; it was known to exist before, but the SIMBA image now shows it to a much larger extent and much more clearly. This and the following images cover an area of about 15 arcmin x 6 arcmin on the sky and have a pixel size of 8 arcsec. ESO PR Photo 28b/01 ESO PR Photo 28b/01 [Preview - JPEG: 532 x 400 pix - 52k] [Normal - JPEG: 1064 x 800 pix - 168k] Caption : This SIMBA image is centered on the object IRAS 18434-0242 . It includes many bright sources that are associated with dense cores and compact HII regions located deep inside the cloud. A much less detailed map was made several years ago with a single channel bolometer on SEST. The new SIMBA map is more extended and shows more sources. ESO PR Photo 28c/01 ESO PR Photo 28c/01 [Preview - JPEG: 400 x 505 pix - 59k] [Normal - JPEG: 800 x 1009 pix - 160k] Caption : Another SIMBA image is centered on IRAS 17271-3439 and includes an extended bright source that is associated with several compact HII regions as well as a cluster of weaker sources. Some of the recent SIMBA images are shown above; they were taken during test observations, and within a pilot survey of high-mass starforming regions . Stars form in interstellar clouds that consist of gas and dust. The denser parts of these clouds can collapse into cold and dense cores which may form stars. Often many stars are formed in clusters, at about the same time. The newborn stars heat up the surrounding regions of the cloud . Radiation is emitted, first at mm-wavelengths and later at infrared wavelengths as the cloud core gets hotter. If very massive stars are formed, their UV-radiation ionizes the immediate surrounding gas and this ionized gas also emits at mm-wavelengths. These ionized regions are called ultra compact HII regions . Because the stars form deep inside the interstellar clouds, the obscuration at visible wavelengths is very high and it is not possible to see these regions optically. The objects selected for the SIMBA survey are from a catalog of objects, first detected at long infrared wavelengths with the IRAS satellite (launched in 1983), hence the designations indicated in Photos 28a-c/01 . From 1995 to 1998, the ESA Infrared Space Observatory (ISO) gathered an enormous amount of valuable data, obtaining images and spectra in the broad infrared wavelength region from 2.5 to 240 µm (0.025 to 0.240 mm), i.e. 
just shortward of the millimetre region in which SIMBA operates. ISO produced mid-infrared images of field size and angular resolution (sharpness) comparable to those of SIMBA. It will obviously be most interesting to combine the images that will be made with SIMBA with imaging and spectral data from ISO and also with those obtained by large ground-based telescopes in the near- and mid-infrared spectral regions. Some technical details about the SIMBA instrument ESO PR Photo 28d/01 ESO PR Photo 28d/01 [Preview - JPEG: 509 x 400 pix - 83k] [Normal - JPEG: 1017 x 800 pix - 528k] Caption : The SIMBA instrument - with the cover removed - in the SEST electronics laboratory. The 37 antenna horns to the right, each of which produces one picture element (pixel) of the combined image. The bolometer elements are located behind the horns. The cylindrical aluminium foil covered unit is the cooler that keeps SIMBA at extremely low temperature (-272.85 °C, or only 0.3 deg above the absolute zero) when it is mounted in the telescope. SIMBA is unique because of its ability to quickly map large sky areas due to the fast scanning mode. In order to achieve low noise and good sensitivity, the instrument is cooled to only 0.3 deg above the absolute zero, i.e., to -272.85 °C. SIMBA consists of 37 horns (each providing one pixel on the sky) arranged in a hexagonal pattern, cf. Photo 28d/01 . To form images, the sky position of the telescope is changed according to a raster pattern - in this way all of a celestial object and the surrounding sky field may be "scanned" fast, at speeds of typically 80 arcsec per second. This makes SIMBA a very efficient facility: for instance, a fully sampled image of good sensitivity with a field size of 15 arcmin x 6 arcmin can be taken in 15 minutes. If higher sensitivity is needed (to observe fainter sources), more images may be obtained of the same field and then added together. Large sky areas can be covered by combining many images taken at different positions. The image resolution (the "telescope beamsize") is 22 arcsec, corresponding to the angular resolution of this 15-m telescope at the indicated wavelength. Note [1} Observations of the HDFS and CDFS fields in other wavebands with other telescopes at the ESO observatories have been reported earlier, e.g. within the ESO Imaging Survey Project (EIS) (the "EIS Deep-Survey"). It is the ESO policy on these fields to make data public world-wide.

  1. An Implementation of Privacy Protection for a Surveillance Camera Using ROI Coding of JPEG2000 with Face Detection

    NASA Astrophysics Data System (ADS)

    Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi

    When surveillance cameras are used, there are cases in which privacy protection should be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method combines ROI coding of JPEG2000 with a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.
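
    A rough sketch of such a pipeline is given below using OpenCV: the face is located by template matching and the matched region is then degraded. Gaussian blurring stands in for the paper's ROI coding of JPEG2000, and the file names, template, and matching threshold are placeholders.

```python
import cv2
import numpy as np

def degrade_face(frame, template, threshold=0.7):
    """Locate a face by template matching and blur that region.
    (Blurring is a stand-in here; the paper degrades the ROI via JPEG2000 coding.)"""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return frame                       # no confident match: leave frame untouched
    x, y = max_loc
    h, w = template.shape[:2]
    roi = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)
    return frame

# Hypothetical input files: a surveillance frame and a face template, both grayscale.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("face_template.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("frame_private.png", degrade_face(frame, template))
```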

  2. Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced correlation filters for handheld devices

    NASA Astrophysics Data System (ADS)

    Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-02-01

    Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.

  3. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    PubMed

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative assessment of image quality and to compare this with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents, who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). When we compared subjective indexes, JPEG 2000 compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R² > 0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  4. Clinical evaluation of JPEG2000 compression for digital mammography

    NASA Astrophysics Data System (ADS)

    Sung, Min-Mo; Kim, Hee-Joung; Kim, Eun-Kyung; Kwak, Jin-Young; Yoo, Jae-Kyung; Yoo, Hyung-Sik

    2002-06-01

    Medical images, such as computed radiography (CR) and digital mammographic images, will require large storage facilities and long transmission times for picture archiving and communication system (PACS) implementation. The American College of Radiology and National Electrical Manufacturers Association (ACR/NEMA) group is planning to adopt a JPEG2000 compression algorithm in the digital imaging and communications in medicine (DICOM) standard to better utilize medical images. The purpose of the study was to evaluate the compression ratios of JPEG2000 for digital mammographic images using peak signal-to-noise ratio (PSNR), receiver operating characteristic (ROC) analysis, and the t test. Traditional statistical quality measures such as PSNR, a commonly used measure for the evaluation of reconstructed images, quantify how the reconstructed image differs from the original by making pixel-by-pixel comparisons. The ability to accurately discriminate diseased cases from normal cases is evaluated using ROC curve analysis, and ROC curves can be used to compare the diagnostic performance of two or more reconstructed images. The t test can also be used to evaluate the subjective image quality of reconstructed images. The results of the t test suggested that the possible compression ratio using JPEG2000 for digital mammographic images may be as much as 15:1 without visual loss, preserving significant medical information at a confidence level of 99%, although both the PSNR and ROC analyses suggest that as much as an 80:1 compression ratio can be achieved without affecting clinical diagnostic performance.

  5. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating-point (as well as other bit-depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image, without the need to store multiple files in various sizes.

  6. JPEG XS-based frame buffer compression inside HEVC for power-aware video compression

    NASA Astrophysics Data System (ADS)

    Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël.; Pellegrin, Pascal; Macq, Benoit

    2017-09-01

    With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain very high bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms that further reduce the FB's bandwidth while inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (JPEG XS Test Model). In this paper, the architecture of our HEVC encoder with JPEG XS-based frame buffer compression is described, and its performance is compared to the HM encoder. Compared to previous works, our prototype provides a significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.

  7. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our approach consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method by analyzing the IQ measurements versus compression parameters from a number of compressed images. The third step is to compress the given image to the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. If the IQ is specified as a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if it is specified as an IQ metric (e.g., SSIM = 0.8 or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
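
    A hedged sketch of the three-step recipe, reduced to the JPEG case only, is shown below: compress at several quality settings, fit a simple regression of SSIM against quality, and invert it to choose the quality expected to reach a target SSIM. The linear model, the quality grid, and the input file name are assumptions for illustration.

```python
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def ssim_for_quality(img, quality):
    """Compress with JPEG at the given quality and measure SSIM against the original."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    rec = np.asarray(Image.open(buf).convert("L"))
    return ssim(img, rec, data_range=255)

# Hypothetical grayscale thermal test image.
img = np.asarray(Image.open("thermal.png").convert("L"))

# Step 1: sample the IQ-versus-parameter curve for one codec (JPEG).
qualities = list(range(10, 100, 10))
scores = [ssim_for_quality(img, q) for q in qualities]

# Step 2: fit a simple regression model (linear here, purely for illustration).
slope, intercept = np.polyfit(qualities, scores, 1)

# Step 3: invert the model to pick the quality expected to reach the target IQ.
target_ssim = 0.8
quality = int(np.clip(round((target_ssim - intercept) / slope), 1, 95))
print("suggested JPEG quality:", quality)
```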

  8. Sharper and Deeper Views with MACAO-VLTI

    NASA Astrophysics Data System (ADS)

    2003-05-01

    "First Light" with Powerful Adaptive Optics System for the VLT Interferometer Summary On April 18, 2003, a team of engineers from ESO celebrated the successful accomplishment of "First Light" for the MACAO-VLTI Adaptive Optics facility on the Very Large Telescope (VLT) at the Paranal Observatory (Chile). This is the second Adaptive Optics (AO) system put into operation at this observatory, following the NACO facility ( ESO PR 25/01 ). The achievable image sharpness of a ground-based telescope is normally limited by the effect of atmospheric turbulence. However, with Adaptive Optics (AO) techniques, this major drawback can be overcome so that the telescope produces images that are as sharp as theoretically possible, i.e., as if they were taken from space. The acronym "MACAO" stands for "Multi Application Curvature Adaptive Optics" which refers to the particular way optical corrections are made which "eliminate" the blurring effect of atmospheric turbulence. The MACAO-VLTI facility was developed at ESO. It is a highly complex system of which four, one for each 8.2-m VLT Unit Telescope, will be installed below the telescopes (in the Coudé rooms). These systems correct the distortions of the light beams from the large telescopes (induced by the atmospheric turbulence) before they are directed towards the common focus at the VLT Interferometer (VLTI). The installation of the four MACAO-VLTI units of which the first one is now in place, will amount to nothing less than a revolution in VLT interferometry . An enormous gain in efficiency will result, because of the associated 100-fold gain in sensitivity of the VLTI. Put in simple words, with MACAO-VLTI it will become possible to observe celestial objects 100 times fainter than now . Soon the astronomers will be thus able to obtain interference fringes with the VLTI ( ESO PR 23/01 ) of a large number of objects hitherto out of reach with this powerful observing technique, e.g. external galaxies. The ensuing high-resolution images and spectra will open entirely new perspectives in extragalactic research and also in the studies of many faint objects in our own galaxy, the Milky Way. During the present period, the first of the four MACAO-VLTI facilties was installed, integrated and tested by means of a series of observations. For these tests, an infrared camera was specially developed which allowed a detailed evaluation of the performance. It also provided some first, spectacular views of various celestial objects, some of which are shown here. PR Photo 12a/03 : View of the first MACAO-VLTI facility at Paranal PR Photo 12b/03 : The star HIC 59206 (uncorrected image). PR Photo 12c/03 : HIC 59206 (AO corrected image) PR Photo 12e/03 : HIC 69495 (AO corrected image) PR Photo 12f/03 : 3-D plot of HIC 69495 images (without and with AO correction) PR Photo 12g/03 : 3-D plot of the artificially dimmed star HIC 74324 (without and with AO correction) PR Photo 12d/03 : The MACAO-VLTI commissioning team at "First Light" PR Photo 12h/03 : K-band image of the Galactic Center PR Photo 12i/03 : K-band image of the unstable star Eta Carinae PR Photo 12j/03 : K-band image of the peculiar star Frosty Leo MACAO - the Multi Application Curvature Adaptive Optics facility ESO PR Photo 12a/03 ESO PR Photo 12a/03 [Preview - JPEG: 408 x 400 pix - 56k [Normal - JPEG: 815 x 800 pix - 720k] Captions : PR Photo 12a/03 is a front view of the first MACAO-VLTI unit, now installed at the 8.2-m VLT KUEYEN telescope. 
Adaptive Optics (AO) systems work by means of a computer-controlled deformable mirror (DM) that counteracts the image distortion induced by atmospheric turbulence. It is based on real-time optical corrections computed from image data obtained by a "wavefront sensor" (a special camera) at very high speed, many hundreds of times each second. The ESO Multi Application Curvature Adaptive Optics (MACAO) system uses a 60-element bimorph deformable mirror (DM) and a 60-element curvature wavefront sensor, with a "heartbeat" of 350 Hz (times per second). With this high spatial and temporal correcting power, MACAO is able to nearly restore the theoretically possible ("diffraction-limited") image quality of an 8.2-m VLT Unit Telescope in the near-infrared region of the spectrum, at a wavelength of about 2 µm. The resulting image resolution (sharpness) of the order of 60 milli-arcsec is an improvement by more than a factor of 10 as compared to standard seeing-limited observations. Without the benefit of the AO technique, such image sharpness could only be obtained if the telescope were placed above the Earth's atmosphere. The technical development of MACAO-VLTI in its present form was begun in 1999 and with project reviews at 6 months' intervals, the project quickly reached cruising speed. The effective design is the result of a very fruitful collaboration between the AO department at ESO and European industry which contributed with the diligent fabrication of numerous high-tech components, including the bimorph DM with 60 actuators, a fast-reaction tip-tilt mount and many others. The assembly, tests and performance-tuning of this complex real-time system was assumed by ESO-Garching staff. Installation at Paranal The first crates of the 60+ cubic-meter shipment with MACAO components arrived at the Paranal Observatory on March 12, 2003. Shortly thereafter, ESO engineers and technicians began the painstaking assembly of this complex instrument, below the VLT 8.2-m KUEYEN telescope (formerly UT2). They followed a carefully planned scheme, involving installation of the electronics, water cooling systems, mechanical and optical components. At the end, they performed the demanding optical alignment, delivering a fully assembled instrument one week before the planned first test observations. This extra week provided a very welcome and useful opportunity to perform a multitude of tests and calibrations in preparation of the actual observations. AO to the service of Interferometry The VLT Interferometer (VLTI) combines starlight captured by two or more 8.2- VLT Unit Telescopes (later also from four moveable1.8-m Auxiliary Telescopes) and allows to vastly increase the image resolution. The light beams from the telescopes are brought together "in phase" (coherently). Starting out at the primary mirrors, they undergo numerous reflections along their different paths over total distances of several hundred meters before they reach the interferometric Laboratory where they are combined to within a fraction of a wavelength, i.e., within nanometers! The gain by the interferometric technique is enormous - combining the light beams from two telescopes separated by 100 metres allows observation of details which could otherwise only be resolved by a single telescope with a diameter of 100 metres. Sophisticated data reduction is necessary to interpret interferometric measurements and to deduce important physical parameters of the observed objects like the diameters of stars, etc., cf. ESO PR 22/02 . 
The VLTI measures the degree of coherence of the combined beams as expressed by the contrast of the observed interferometric fringe pattern. The higher the degree of coherence between the individual beams, the stronger is the measured signal. By removing wavefront aberrations introduced by atmospheric turbulence, the MACAO-VLTI systems enormously increase the efficiency of combining the individual telescope beams. In the interferometric measurement process, the starlight must be injected into optical fibers which are extremely small in order to accomplish their function; only 6 µm (0.006 mm) in diameter. Without the "refocussing" action of MACAO, only a tiny fraction of the starlight captured by the telescopes can be injected into the fibers and the VLTI would not be working at the peak of efficiency for which it has been designed. MACAO-VLTI will now allow a gain of a factor 100 in the injected light flux - this will be tested in detail when two VLT Unit Telescopes, both equipped with MACAO-VLTI's, work together. However, the very good performance actually achieved with the first system makes the engineers very confident that a gain of this order will indeed be reached. This ultimate test will be performed as soon as the second MACAO-VLTI system has been installed later this year. MACAO-VLTI First Light After one month of installation work and following tests by means of an artificial light source installed in the Nasmyth focus of KUEYEN, MACAO-VLTI had "First Light" on April 18 when it received "real" light from several astronomical obejcts. During the preceding performance tests to measure the image improvement (sharpness, light energy concentration) in near-infrared spectral bands at 1.2, 1.6 and 2.2 µm, MACAO-VLTI was checked by means of a custom-made Infrared Test Camera developed for this purpose by ESO. This intermediate test was required to ensure the proper functioning of MACAO before it is used to feed a corrected beam of light into the VLTI. After only a few nights of testing and optimizing of the various functions and operational parameters, MACAO-VLTI was ready to be used for astronomical observations. The images below were taken under average seeing conditions and illustrate the improvement of the image quality when using MACAO-VLTI . MACAO-VLTI - First Images Here are some of the first images obtained with the test camera at the first MACAO-VLTI system, now installed at the 8.2-m VLT KUEYEN telescope. ESO PR Photo 12b/03 ESO PR Photo 12b/03 [Preview - JPEG: 400 x 468 pix - 25k [Normal - JPEG: 800 x 938 pix - 291k] ESO PR Photo 12c/03 ESO PR Photo 12c/03 [Preview - JPEG: 400 x 469 pix - 14k [Normal - JPEG: 800 x 938 pix - 135k] Captions : PR Photos 12b-c/03 show the first image, obtained by the first MACAO-VLTI system at the 8.2-m VLT KUEYEN telescope in the infrared K-band (wavelength 2.2 µm). It displays images of the star HIC 59206 (visual magnitude 10) obtained before (left; Photo 12b/03 ) and after (right; Photo 12c/03 ) the adaptive optics system was switched on. The binary is separated by 0.120 arcsec and the image was taken under medium seeing conditions (0.75 arcsec) seeing. The dramatic improvement in image quality is obvious. ESO PR Photo 12d/03 ESO PR Photo 12d/03 [Preview - JPEG: 400 x 427 pix - 18k [Normal - JPEG: 800 x 854 pix - 205k] ESO PR Photo 12e/03 ESO PR Photo 12e/03 [Preview - JPEG: 483 x 400 pix - 17k [Normal - JPEG: 966 x 800 pix - 169k] Captions : PR Photo 12d/03 shows one of the best images obtained with MACAO-VLTI (logarithmic intensity scale). 
The seeing was 0.8 arcsec at the time of the observations and three diffraction rings can clearly be seen around the star HIC 69495 of visual magnitude 9.9. This pattern is only well visible when the image resolution is very close to the theoretical limit. The exposure of the point-like source lasted 100 seconds through a narrow K-band filter. It has a Strehl ratio (a measure of light concentration) of about 55% and a Full-Width- Half-Maximum (FWHM) of 0.060 arcsec. The 3-D plot ( PRPhoto 12e/03 ) demonstrates the tremendous gain in peak intensity of the AO image (right) in peak intensity as compared to "open-loop" image (the "noise" to the left) obtained without the benefit of AO. ESO PR Photo 12f/03 ESO PR Photo 12f/03 [Preview - JPEG: 494 x 400 pix - 20k [Normal - JPEG: 988 x 800 pix - 204k] Caption : PR Photo 12f/03 demonstrates the correction performance of MACAO-VLTI when using a faint guide star. The observed star ( HIC 74324 (stellar spectral type G0 and visual magnitude 9.4) was artificially dimmed by a neutral optical filter to visual magnitude 16.5. The observation was carried out in 0.55 arcsec seeing and with a rather short atmospheric correlation time of 3 milliseconds at visible wavelengths. The Strehl ratio in the 25-second K-band exposure is about 10% and the FWHM is 0.14 arcseconds. The uncorrected image is shown to the left for comparison. The improvement is again impressive, even for a star as faint as this, indicating that guide stars of this magnitude are feasible during future observations. ESO PR Photo 12g/03 ESO PR Photo 12g/03 [Preview - JPEG: 528 x 400 pix - 48k [Normal - JPEG: 1055 x 800 pix - 542k] Captions : PR Photo 12g/03 shows some of the MACAO-VLTI commissioning team members in the VLT Control Room at the moment of "First Light" during the night between April 18-19, 2003. Sitting: Markus Kasper, Enrico Fedrigo - Standing: Robin Arsenault, Sebastien Tordo, Christophe Dupuy, Toomas Erm, Jason Spyromilio, Rob Donaldson (all from ESO). PR Photos 12b-c/03 show the first image in the infrared K-band (wavelength 2.2 µm) of a star (visual magnitude 10) obtained without and with image corrections by means of adaptive optics. PR Photo 12d/03 displays one of the best images obtained with MACAO-VLTI during the early tests. It shows a Strehl ratio (measure of light concentration) that fulfills the specifications according to which MACAO-VLTI was built. This enormous improvement when using AO techniques is clearly demonstrated in PR Photo 12e/03 , with the uncorrected image profile (left) hardly visible when compared to the corrected profile (right). PR Photo 11f/03 demonstrates the correction capabilities of MACAO-VLTI when using a faint guide star. Tests using different spectral types showed that the limiting visual magnitude varies between 16 for early-type B-stars and about 18 for late-type M-stars. Astronomical Objects seen at the Diffraction Limit The following examples of MACAO-VLTI observations of two well-known astronomical objects were obtained in order to provisionally evaluate the research opportunities now opening with MACAO-VLTI. They may well be compared with space-based images. The Galactic Center ESO PR Photo 12h/03 ESO PR Photo 12h/03 [Preview - JPEG: 693 x 400 pix - 46k [Normal - JPEG: 1386 x 800 pix - 403k] Caption : PR Photo 12h/03 shows a 90-second K-band exposure of the central 6 x 13 arcsec 2 around the Galactic Center obtained by MACAO-VLTI under average atmospheric conditions (0.8 arcsec seeing). 
Although the 14.6 magnitude guide star is located roughly 20 arcsec from the field center - this leading to isoplanatic degradation of image sharpness - the present image is nearly diffraction limited and has a point-source FWHM of about 0.115 arcsec. The center of our own galaxy is located in the Sagittarius constellation at a distance of approximately 30,000 light-years. PR Photo 12h/03 shows a short-exposure infrared view of this region, obtained by MACAO-VLTI during the early test phase. Recent AO observations using the NACO facility at the VLT provide compelling evidence that a supermassive black hole with 2.6 million solar masses is located at the very center, cf. ESO PR 17/02 . This result, based on astrometric observations of a star orbiting the black hole and approaching it to within a distance of only 17 light-hours, would not have been possible without images of diffraction limited resolution. Eta Carinae ESO PR Photo 12i/03 ESO PR Photo 12i/03 [Preview - JPEG: 400 x 482 pix - 25k [Normal - JPEG: 800 x 963 pix - 313k] Caption : PR Photo 12i/03 displays an infrared narrow K-band image of the massive star Eta Carinae . The image quality is difficult to estimate because the central star saturated the detector, but the clear structure of the diffraction spikes and the size of the smallest features visible in the photo indicate a near-diffraction limited performance. The field measures about 6.5 x 6.5 arcsec 2. Eta Carinae is one of the heaviest stars known, with a mass that probably exceeds 100 solar masses. It is about 4 million times brighter than the Sun, making it one of the most luminous stars known. Such a massive star has a comparatively short lifetime of about 1 million years only and - measured in the cosmic timescale- Eta Carinae must have formed quite recently. This star is highly unstable and prone to violent outbursts. They are caused by the very high radiation pressure at the star's upper layers, which blows significant portions of the matter at the "surface" into space during violent eruptions that may last several years. The last of these outbursts occurred between 1835 and 1855 and peaked in 1843. Despite its comparaticely large distance - some 7,500 to 10,000 light-years - Eta Carinae briefly became the second brightest star in the sky at that time (with an apparent magnitude -1), only surpassed by Sirius. Frosty Leo ESO PR Photo 12j/03 ESO PR Photo 12j/03 [Preview - JPEG: 411 x 400 pix - 22k [Normal - JPEG: 821 x 800 pix - 344k] Caption : PR Photo 12j/03 shows a 5 x 5 arcsec 2 K-band image of the peculiar star known as "Frosty Leo" obtained in 0.7 arcsec seeing. Although the object is comparatively bright (visual magnitude 11), it is a difficult AO target because of its extension of about 3 arcsec at visible wavelengths. The corrected image quality is about FWHM 0.1 arcsec. Frosty Leo is a magnitude 11 (post-AGB) star surrounded by an envelope of gas, dust, and large amounts of ice (hence the name). The associated nebula is of "butterfly" shape (bipolar morphology) and it is one of the best known examples of the brief transitional phase between two late evolutionary stages, asymptotic giant branch (AGB) and the subsequent planetary nebulae (PNe). For a three-solar-mass object like this one, this phase is believed to last only a few thousand years, the wink of an eye in the life of the star. Hence, objects like this one are very rare and Frosty Leo is one of the nearest and brightest among them.

  9. Study and validation of tools interoperability in JPSEC

    NASA Astrophysics Data System (ADS)

    Conan, V.; Sadourny, Y.; Jean-Marie, K.; Chan, C.; Wee, S.; Apostolopoulos, J.

    2005-08-01

    Digital imagery is important in many applications today, and its security is likely to gain further importance in the near future. The emerging international standard ISO/IEC JPEG-2000 Security (JPSEC) is designed to provide security for digital imagery, and in particular digital imagery coded with the JPEG-2000 image coding standard. One of the primary goals of a standard is to ensure interoperability between creator and consumer implementations produced by different manufacturers. The JPSEC standard, like the popular JPEG and MPEG families of standards, specifies only the bitstream syntax and the receiver's processing, and not how the bitstream is created or the details of how it is consumed. This paper examines interoperability for the JPSEC standard and presents an example JPSEC consumption process that can provide insights into the design of JPSEC consumers. Initial interoperability tests between different groups with independently created implementations of JPSEC creators and consumers have been successful in providing the JPSEC security services of confidentiality (via encryption) and authentication (via message authentication codes, or MACs). Further interoperability work is ongoing.

  10. Wavelet-based compression of M-FISH images.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R

    2005-05-01

    Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.

  11. Diagnostic accuracy of chest X-rays acquired using a digital camera for low-cost teleradiology.

    PubMed

    Szot, Agnieszka; Jacobson, Francine L; Munn, Samson; Jazayeri, Darius; Nardell, Edward; Harrison, David; Drosten, Ralph; Ohno-Machado, Lucila; Smeaton, Laura M; Fraser, Hamish S F

    2004-02-01

    Store-and-forward telemedicine, using e-mail to send clinical data and digital images, offers a low-cost alternative for physicians in developing countries to obtain second opinions from specialists. To explore the potential usefulness of this technique, 91 chest X-ray images were photographed using a digital camera and a view box. Four independent readers (three radiologists and one pulmonologist) read two types of digital (JPEG and JPEG2000) and original film images and indicated their confidence in the presence of eight features known to be radiological indicators of tuberculosis (TB). The results were compared to a "gold standard" established by two different radiologists, and assessed using receiver operating characteristic (ROC) curve analysis. There was no statistical difference in the overall performance between the readings from the original films and both types of digital images. The size of JPEG2000 images was approximately 120KB, making this technique feasible for slow internet connections. Our preliminary results show the potential usefulness of this technique particularly for tuberculosis and lung disease, but further studies are required to refine its potential.

  12. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality produced by two popular JPEG2000 programs. The two medical image compression algorithms are both coded using JPEG2000, but they differ in interface, convenience, speed of computation, and characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The quality of compressed medical images from the two image compression programs, named Apollo and JJ2000, was evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics, and the Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
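
    The statistical comparison can be reproduced in outline as sketched below: one objective metric (PSNR here) is computed for each image pair from the two programs, and the Spearman rank correlation is then taken between the two score lists. The random stand-in images and the single metric are assumptions for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Stand-in data: the same originals "decoded" by two hypothetical programs
# (random images plus noise replace the real Apollo / JJ2000 reconstructions).
rng = np.random.default_rng(0)
originals = [rng.integers(0, 256, (64, 64)).astype(np.uint8) for _ in range(10)]
decode_a = [np.clip(o + rng.integers(-3, 4, o.shape), 0, 255) for o in originals]
decode_b = [np.clip(o + rng.integers(-3, 4, o.shape), 0, 255) for o in originals]

scores_a = [psnr(o, d) for o, d in zip(originals, decode_a)]
scores_b = [psnr(o, d) for o, d in zip(originals, decode_b)]
rho, p = spearmanr(scores_a, scores_b)
print("Spearman r = %.3f, p = %.3f" % (rho, p))
```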

  13. The Capodimonte Deep Field

    NASA Astrophysics Data System (ADS)

    2001-04-01

    A Window towards the Distant Universe Summary The Osservatorio Astronomico Capodimonte Deep Field (OACDF) is a multi-colour imaging survey project that is opening a new window towards the distant universe. It is conducted with the ESO Wide Field Imager (WFI) , a 67-million pixel advanced camera attached to the MPG/ESO 2.2-m telescope at the La Silla Observatory (Chile). As a pilot project at the Osservatorio Astronomico di Capodimonte (OAC) [1], the OACDF aims at providing a large photometric database for deep extragalactic studies, with important by-products for galactic and planetary research. Moreover, it also serves to gather experience in the proper and efficient handling of very large data sets, preparing for the arrival of the VLT Survey Telescope (VST) with the 1 x 1 degree 2 OmegaCam facility. PR Photo 15a/01 : Colour composite of the OACDF2 field . PR Photo 15b/01 : Interacting galaxies in the OACDF2 field. PR Photo 15c/01 : Spiral galaxy and nebulous object in the OACDF2 field. PR Photo 15d/01 : A galaxy cluster in the OACDF2 field. PR Photo 15e/01 : Another galaxy cluster in the OACDF2 field. PR Photo 15f/01 : An elliptical galaxy in the OACDF2 field. The Capodimonte Deep Field ESO PR Photo 15a/01 ESO PR Photo 15a/01 [Preview - JPEG: 400 x 426 pix - 73k] [Normal - JPEG: 800 x 851 pix - 736k] [Hi-Res - JPEG: 3000 x 3190 pix - 7.3M] Caption : This three-colour image of about 1/4 of the Capodimonte Deep Field (OACDF) was obtained with the Wide-Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the la Silla Observatory. It covers "OACDF Subfield no. 2 (OACDF2)" with an area of about 35 x 32 arcmin 2 (about the size of the full moon), and it is one of the "deepest" wide-field images ever obtained. Technical information about this photo is available below. With the comparatively few large telescopes available in the world, it is not possible to study the Universe to its outmost limits in all directions. Instead, astronomers try to obtain the most detailed information possible in selected viewing directions, assuming that what they find there is representative for the Universe as a whole. This is the philosophy behind the so-called "deep-field" projects that subject small areas of the sky to intensive observations with different telescopes and methods. The astronomers determine the properties of the objects seen, as well as their distances and are then able to obtain a map of the space within the corresponding cone-of-view (the "pencil beam"). Recent, successful examples of this technique are the "Hubble Deep Field" (cf. ESO PR Photo 26/98 ) and the "Chandra Deep Field" ( ESO PR 05/01 ). In this context, the Capodimonte Deep Field (OACDF) is a pilot research project, now underway at the Osservatorio Astronomico di Capodimonte (OAC) in Napoli (Italy). It is a multi-colour imaging survey performed with the Wide Field Imager (WFI) , a 67-million pixel (8k x 8k) digital camera that is installed at the 2.2-m MPG/ESO Telescope at ESO's La Silla Observatory in Chile. The scientific goal of the OACDF is to provide an important database for subsequent extragalactic, galactic and planetary studies. It will allow the astronomers at OAC - who are involved in the VLT Survey Telescope (VST) project - to gain insight into the processing (and use) of the large data flow from a camera similar to, but four times smaller than the OmegaCam wide-field camera that will be installed at the VST. 
The field selection for the OACDF was based on the following criteria: * There must be no stars brighter than about 9th magnitude in the field, in order to avoid saturation of the CCD detector and effects from straylight in the telescope and camera. No Solar System planets should be near the field during the observations; * It must be located far from the Milky Way plane (at high galactic latitude) in order to reduce the number of galactic stars seen in this direction; * It must be located in the southern sky in order to optimize observing conditions (in particular, the altitude of the field above the horizon), as seen from the La Silla and Paranal sites; * There should be little interstellar material in this direction that may obscure the view towards the distant Universe; * Observations in this field should have been made with the Hubble Space Telescope (HST) that may serve for comparison and calibration purposes. Based on these criteria, the astronomers selected a field measuring about 1 x 1 deg 2 in the southern constellation of Corvus (The Raven). This is now known as the Capodimonte Deep Field (OACDF). The above photo ( PR Photo 15a/01 ) covers one-quarter of the full field (Subfield No. 2 - OACDF2) - some of the objects seen in this area are shown below in more detail. More than 35,000 objects have been found in this area; the faintest are nearly 100 million times fainter than what can be perceived with the unaided eye in the dark sky. Selected objects in the Capodimonte Deep Field ESO PR Photo 15b/01 ESO PR Photo 15b/01 [Preview - JPEG: 400 x 435 pix - 60k] [Normal - JPEG: 800 x 870 pix - 738k] [Hi-Res - JPEG: 3000 x 3261 pix - 5.1M] Caption : Enlargement of the interacting galaxies that are seen in the upper left corner of the OACDF2 field shown in PR Photo 15a/01 . The enlargement covers 1250 x 1130 WFI pixels (1 pixel = 0.24 arcsec), or about 5.0 x 4.5 arcmin 2 in the sky. The lower spiral is itself an interacting double. ESO PR Photo 15c/01 ESO PR Photo 15c/01 [Preview - JPEG: 557 x 400 pix - 93k] [Normal - JPEG: 1113 x 800 pix - 937k] [Hi-Res - JPEG: 3000 x 2156 pix - 4.0M] Caption : Enlargement of a spiral galaxy and a nebulous object in this area. The field shown covers 1250 x 750 pixels, or about 5 x 3 arcmin 2 in the sky. Note the very red objects next to the two bright stars in the lower-right corner. The colours of these objects are consistent with those of spheroidal galaxies at intermediate distances (redshifts). ESO PR Photo 15d/01 ESO PR Photo 15d/01 [Preview - JPEG: 400 x 530 pix - 68k] [Normal - JPEG: 800 x 1060 pix - 870k] [Hi-Res - JPEG: 2768 x 3668 pix - 6.2M] Caption : A further enlargement of a galaxy cluster of which most members are located in the north-east quadrant (upper left) and have a reddish colour. The nebulous object to the upper left is a dwarf galaxy of spheroidal shape. The red object, located near the centre of the field and resembling a double star, is very likely a gravitational lens [2]. Some of the very red, point-like objects in the field may be distant quasars, very-low mass stars or, possibly, relatively nearby brown dwarf stars. The field shown covers 1380 x 1630 pixels, or 5.5 x 6.5 arcmin 2. ESO PR Photo 15e/01 ESO PR Photo 15e/01 [Preview - JPEG: 400 x 418 pix - 56k] [Normal - JPEG: 800 x 835 pix - 700k] [Hi-Res - JPEG: 3000 x 3131 pix - 5.0M] Caption : Enlargement of a moderately distant galaxy cluster in the south-east quadrant (lower left) of the OACDF2 field.
The field measures 1380 x 1260 pixels, or about 5.5 x 5.0 arcmin 2 in the sky. ESO PR Photo 15f/01 ESO PR Photo 15f/01 [Preview - JPEG: 449 x 400 pix - 68k] [Normal - JPEG: 897 x 800 pix - 799k] [Hi-Res - JPEG: 3000 x 2675 pix - 5.6M] Caption : Enlargement of the elliptical galaxy that is located to the west (right) in the OACDF2 field. The numerous tiny objects surrounding the galaxy may be globular clusters. The fuzzy object on the right edge of the field may be a dwarf spheroidal galaxy. The size of the field is about 6 x 5 arcmin 2. Technical Information about the OACDF Survey The observations for the OACDF project were performed in three different ESO periods (18-22 April 1999, 7-12 March 2000 and 26-30 April 2000). Some 100 Gbyte of raw data were collected during each of the three observing runs. The first OACDF run was done just after the commissioning of the ESO-WFI. The observational strategy was to perform a 1 x 1 deg 2 short-exposure ("shallow") survey and then a 0.5 x 1 deg 2 "deep" survey. The shallow survey was performed in the B, V, R and I broad-band filters. Four adjacent 30 x 30 arcmin 2 fields, together covering a 1 x 1 deg 2 field in the sky, were observed for the shallow survey. Two of these fields were chosen for the 0.5 x 1 deg 2 deep survey; OACDF2 shown above is one of these. The deep survey was performed in the B, V, R broad-bands and in other intermediate-band filters. The OACDF data are fully reduced and the catalogue extraction has started. A two-processor (500 Mhz each) DS20 machine with 100 Gbyte of hard disk, specifically acquired at the OAC for WFI data reduction, was used. The detailed guidelines of the data reduction, as well as the catalogue extraction, are reported in a research paper that will appear in the European research journal Astronomy & Astrophysics . Notes [1]: The team members are: Massimo Capaccioli, Juan M. Alcala', Roberto Silvotti, Magda Arnaboldi, Vincenzo Ripepi, Emanuella Puddu, Massimo Dall'Ora, Giuseppe Longo and Roberto Scaramella . [2]: This is a preliminary result by Juan Alcala', Massimo Capaccioli, Giuseppe Longo, Mikhail Sazhin, Roberto Silvotti and Vincenzo Testa , based on recent observations with the Telescopio Nazionale Galileo (TNG) which show that the spectra of the two objects are identical. Technical information about the photos PR Photo 15a/01 has been obtained by the combination of the B, V, and R stacked images of the OACDF2 field. The total exposure times in the three bands are 2 hours in B and V (12 ditherings of 10 min each were stacked to produce the B and V images) and 3 hours in R (13 ditherings of 15 min each). The mosaic images in the B and V bands were aligned relative to the R-band image and adjusted to a logarithmic intensity scale prior to the combination. The typical seeing was of the order of 1 arcsec in each of the three bands. Preliminary estimates of the three-sigma limiting magnitudes in B, V and R indicate 25.5, 25.0 and 25.0, respectively. More than 35,000 objects are detected above the three-sigma level. PR Photos 15b-f/01 display selected areas of the field shown in PR Photo 15a/01 at the original WFI scale, hereby also demonstrating the enormous amount of information contained in these wide-field images. In all photos, North is up and East is left.

  14. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
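
    A minimal sketch of the pipeline the abstract describes, assuming Pillow and baseline JPEG as the predefined codec: decimate, compress, decompress, interpolate back, then sharpen edges. The 2x decimation factor, the JPEG quality, the file names, and the unsharp-mask sharpener are illustrative assumptions rather than the patent's specific techniques.

```python
# Decimate -> JPEG compress -> decompress -> interpolate -> sharpen (illustrative only).
import io
from PIL import Image, ImageFilter

original = Image.open("input.png").convert("RGB")
w, h = original.size

# 1. Decimate in both dimensions before compression.
reduced = original.resize((w // 2, h // 2), Image.LANCZOS)

# 2. Compress with a predefined algorithm (baseline JPEG here) for transmission.
buffer = io.BytesIO()
reduced.save(buffer, format="JPEG", quality=75)

# 3. Decompress on the receiving side.
buffer.seek(0)
received = Image.open(buffer)

# 4. Interpolate back to the original array size.
upscaled = received.resize((w, h), Image.BICUBIC)

# 5. Sharpen edges to improve the perceptual quality of the reconstruction.
reconstructed = upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
reconstructed.save("reconstructed.png")
```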

  15. Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Liu, Ti C.; Mitra, Sunanda

    1996-06-01

    Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work focuses on wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, entropy and run-length encoding/decoding, and K-means clustering of invariant moments used as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that the WT outperforms JPEG in the high-compression-ratio regime and that the reconstructed fingerprint images are still classified correctly.
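
    The sketch below illustrates, under assumed parameters, the two stages named above: multiresolution wavelet decomposition with uniform scalar quantization, followed by K-means clustering of invariant (Hu) moments computed from the reconstructions. Entropy and run-length coding are omitted, and the wavelet, quantization step, and file names are assumptions, so this is an illustration rather than the authors' codec.

```python
import numpy as np
import pywt
import cv2
from sklearn.cluster import KMeans

def compress_reconstruct(img, wavelet="bior4.4", levels=3, step=8.0):
    """Multiresolution DWT, uniform scalar quantization, then reconstruction."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
    quantized = [np.round(coeffs[0] / step) * step]              # approximation band
    for details in coeffs[1:]:                                   # (H, V, D) detail tuples
        quantized.append(tuple(np.round(d / step) * step for d in details))
    return pywt.waverec2(quantized, wavelet)

def hu_features(img):
    """Seven Hu invariant moments used as fingerprint features."""
    return cv2.HuMoments(cv2.moments(img.astype(np.float32))).ravel()

# Hypothetical fingerprint files, assumed to exist and share one resolution.
files = ["fp1.png", "fp2.png", "fp3.png", "fp4.png"]
prints = [cv2.imread(f, cv2.IMREAD_GRAYSCALE) for f in files]

features = np.array([hu_features(compress_reconstruct(p)) for p in prints])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(labels)
```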

  16. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, before a compressed image can be transformed or processed, it is decompressed; the result of the processing is then recompressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide format printing industry, this becomes an important issue: for example, a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation to the compressed format. This paper presents an innovative application of halftoning by screening, applied directly to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it denoises the image and enhances its contours.
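
    For reference, the snippet below shows halftoning by screening in its ordinary spatial form: each pixel is thresholded against a periodic halftone mask. The paper's contribution is to compute the equivalent threshold directly on the JPEG 8x8 DCT blocks so that no full decode/re-encode cycle is needed; that compressed-domain step is not reproduced here. The 4x4 Bayer mask and the file names are assumptions.

```python
# Spatial-domain screening: threshold every pixel against a tiled halftone mask.
import numpy as np
from PIL import Image

bayer4 = (1.0 / 17.0) * np.array([[ 1,  9,  3, 11],
                                  [13,  5, 15,  7],
                                  [ 4, 12,  2, 10],
                                  [16,  8, 14,  6]], dtype=np.float64)

gray = np.asarray(Image.open("scan.jpg").convert("L"), dtype=np.float64) / 255.0
h, w = gray.shape

# Tile the mask over the image and "screen" every pixel against it.
mask = np.tile(bayer4, (h // 4 + 1, w // 4 + 1))[:h, :w]
halftone = (gray > mask).astype(np.uint8) * 255

Image.fromarray(halftone).save("halftone.png")
```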

  17. Adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy for three-dimensional visualization

    NASA Astrophysics Data System (ADS)

    Hayat, Khizar; Puech, William; Gesquière, Gilles

    2010-04-01

    We propose an adaptively synchronous scalable spread spectrum (A4S) data-hiding strategy to integrate the disparate data needed for a typical 3-D visualization into a single JPEG2000 format file. JPEG2000 encoding provides a standard format on one hand and the multiresolution needed for scalability on the other. The method has the potential of being imperceptible and robust at the same time. While spread spectrum (SS) methods are known for the high robustness they offer, our data-hiding strategy is also removable, which ensures the highest possible visualization quality. The SS embedding of the discrete wavelet transform (DWT)-domain depth map is carried out in the transform-domain YCrCb components of the JPEG2000 coding stream, just after the DWT stage. To maintain synchronization, the embedding takes into account the correspondence of subbands. Since security is not the immediate concern, we are at liberty to choose the embedding strength; this permits us to increase robustness while keeping the method reversible. To estimate the maximum tolerable error in the depth map for a given viewpoint, a human visual system (HVS)-based psychovisual analysis is also presented.
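
    A minimal sketch of synchronous spread-spectrum embedding in the wavelet domain, assuming the luma image and the depth map have the same size: the depth map's DWT approximation coefficients modulate a keyed pseudo-random carrier that is added to the matching subband of the luma plane. The embedding strength, key, single-level DWT, and db4 wavelet are assumptions, and the JPEG2000 codestream integration described above is not reproduced.

```python
import numpy as np
import pywt

def embed_depth(luma, depth, key=42, alpha=2.0, wavelet="db4"):
    """Embed a same-sized depth map into one DWT subband of the luma plane."""
    cA, (cH, cV, cD) = pywt.dwt2(luma.astype(np.float64), wavelet)
    dA, _ = pywt.dwt2(depth.astype(np.float64), wavelet)

    rng = np.random.default_rng(key)                   # keyed spread-spectrum carrier
    carrier = rng.choice([-1.0, 1.0], size=cH.shape)

    payload = dA / (np.abs(dA).max() + 1e-9)           # normalised depth coefficients
    cH_marked = cH + alpha * carrier * payload         # synchronous subband embedding

    return pywt.idwt2((cA, (cH_marked, cV, cD)), wavelet)
```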

  18. Desert Pathfinder at Work

    NASA Astrophysics Data System (ADS)

    2005-09-01

    The Atacama Pathfinder Experiment (APEX) project celebrates the inauguration of its outstanding 12-m telescope, located on the 5100m high Chajnantor plateau in the Atacama Desert (Chile). The APEX telescope, designed to work at sub-millimetre wavelengths, in the 0.2 to 1.5 mm range, passed successfully its Science Verification phase in July, and since then is performing regular science observations. This new front-line facility provides access to the "Cold Universe" with unprecedented sensitivity and image quality. After months of careful efforts to set up the telescope to work at the best possible technical level, those involved in the project are looking with satisfaction at the fruit of their labour: APEX is not only fully operational, it has already provided important scientific results. "The superb sensitivity of our detectors together with the excellence of the site allow fantastic observations that would not be possible with any other telescope in the world," said Karl Menten, Director of the group for Millimeter and Sub-Millimeter Astronomy at the Max-Planck-Institute for Radio Astronomy (MPIfR) and Principal Investigator of the APEX project. ESO PR Photo 30/05 ESO PR Photo 30/05 Sub-Millimetre Image of a Stellar Cradle [Preview - JPEG: 400 x 627 pix - 200k] [Normal - JPEG: 800 x 1254 pix - 503k] [Full Res - JPEG: 1539 x 2413 pix - 1.3M] Caption: ESO PR Photo 30/05 is an image of the giant molecular cloud G327 taken with APEX. More than 5000 spectra were taken in the J=3-2 line of the carbon monoxide molecule (CO), one of the best tracers of molecular clouds, in which star formation takes place. The bright peak in the north of the cloud is an evolved star forming region, where the gas is heated by a cluster of new stars. The most interesting region in the image is totally inconspicuous in CO: the G327 hot core, as seen in methanol contours. It is a truly exceptional source, and is one of the richest sources of emission from complex organic molecules in the Galaxy (see spectrum at bottom). Credit: Wyrowski et al. (map), Bisschop et al. (spectrum). Millimetre and sub-millimetre astronomy opens exciting new possibility in the study of the first galaxies to have formed in the Universe and of the formation processes of stars and planets. In particular, APEX allows astronomers to study the chemistry and physical conditions of molecular clouds, that is, dense regions of gas and dust in which new stars are forming. Among the first studies made with APEX, astronomers took a first glimpse deep into cradles of massive stars, observing for example the molecular cloud G327 and measuring significant emission in carbon monoxide and complex organic molecules (see ESO PR Photo 30/05). The official inauguration of the APEX telescope will start in San Pedro de Atacama on September, 25th. The Ambassadors in Chile of some of ESO's member states, the Intendente of the Chilean Region II, the Mayor of San Pedro, the Executive Director of the Chilean Science Agency (CONICYT), the Presidents of the Communities of Sequitor and Toconao, as well as representatives of the Ministry of Foreign Affairs and Universities in Chile, will join ESO's Director General, Dr. Catherine Cesarsky, the Chairman of the APEX Board and MPIfR director, Prof. Karl Menten, and the Director of the Onsala Space Observatory, Prof. Roy Booth, in a celebration that will be held in San Pedro de Atacama. 
The next day, the delegation will visit the APEX base camp in Sequitor, near San Pedro, from where the telescope is operated, as well as the APEX site on the 5100m high Llano de Chajnantor.

  19. Improved photo response non-uniformity (PRNU) based source camera identification.

    PubMed

    Cooper, Alan J

    2013-03-10

    The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
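
    A minimal sketch of PRNU-style source matching using a plain spatial-domain denoising residual (a median filter, in the spirit of the simplified filtering argued for above) rather than the wavelet approach: the camera fingerprint is the average residual over reference images, and a questioned image is matched by normalised correlation. The file names, the 3x3 window, and the simple correlation detector are assumptions; the paper's two-stage enhancement is not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter
from PIL import Image

def residual(path, size=3):
    """Denoising residual (image minus median-filtered image) as a PRNU estimate."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return img - median_filter(img, size=size)

def normalised_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Camera fingerprint from hypothetical reference images (assumed same dimensions).
reference_files = ["cam_ref_01.jpg", "cam_ref_02.jpg", "cam_ref_03.jpg"]
fingerprint = np.mean([residual(f) for f in reference_files], axis=0)

# Compare a questioned image against the fingerprint.
score = normalised_correlation(fingerprint, residual("questioned.jpg"))
print(f"correlation = {score:.4f}")
```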

  20. Baseline coastal oblique aerial photographs collected from Pensacola, Florida, to Breton Islands, Louisiana, February 7, 2012

    USGS Publications Warehouse

    Morgan, Karen L.M.; Krohn, M. Dennis; Doran, Kara; Guy, Kristy K.

    2013-01-01

    The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On February 7, 2012, the USGS conducted an oblique aerial photographic survey from Pensacola, Fla., to Breton Islands, La., aboard a Piper Navajo Chieftain at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The photographs provided here are Joint Photographic Experts Group (JPEG) images. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of the feature in the images (see the Navigation Data page). These photos document the configuration of the barrier islands and other coastal features at the time of the survey. The header of each photo is populated with time of collection, Global Positioning System (GPS) latitude, GPS longitude, GPS position (latitude and longitude), keywords, credit, artist (photographer), caption, copyright, and contact information using EXIFtools (Subino and others, 2012). Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the assigned location, name, date, and time the photograph was taken, along with links to the photograph. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files (see the Photos and Maps page).
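
    As an aside, the embedded header fields described above can be read back from a JPEG with standard tooling; the sketch below uses Pillow's EXIF reader. The file name is an assumption, tag coverage varies by file, and GPS coordinates live in a separate GPS IFD that is not shown here.

```python
# Read a few EXIF header fields (time, caption, artist, copyright) from a JPEG.
from PIL import Image, ExifTags

with Image.open("photo_0001.jpg") as img:
    exif = img.getexif()

# Translate numeric tag ids into readable names.
readable = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
for key in ("DateTime", "ImageDescription", "Artist", "Copyright"):
    print(f"{key}: {readable.get(key)}")
```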

  1. ESO and NSF Sign Agreement on ALMA

    NASA Astrophysics Data System (ADS)

    2003-02-01

    Green Light for World's Most Powerful Radio Observatory On February 25, 2003, the European Southern Observatory (ESO) and the US National Science Foundation (NSF) are signing a historic agreement to construct and operate the world's largest and most powerful radio telescope, operating at millimeter and sub-millimeter wavelength. The Director General of ESO, Dr. Catherine Cesarsky, and the Director of the NSF, Dr. Rita Colwell, act for their respective organizations. Known as the Atacama Large Millimeter Array (ALMA), the future facility will encompass sixty-four interconnected 12-meter antennae at a unique, high-altitude site at Chajnantor in the Atacama region of northern Chile. ALMA is a joint project between Europe and North America. In Europe, ESO is leading on behalf of its ten member countries and Spain. In North America, the NSF also acts for the National Research Council of Canada and executes the project through the National Radio Astronomy Observatory (NRAO) operated by Associated Universities, Inc. (AUI). The conclusion of the ESO-NSF Agreement now gives the final green light for the ALMA project. The total cost of approximately 650 million Euro (or US Dollars) is shared equally between the two partners. Dr. Cesarsky is excited: "This agreement signifies the start of a great project of contemporary astronomy and astrophysics. Representing Europe, and in collaboration with many laboratories and institutes on this continent, we together look forward towards wonderful research projects. With ALMA we may learn how the earliest galaxies in the Universe really looked like, to mention but one of the many eagerly awaited opportunities with this marvellous facility". "With this agreement, we usher in a new age of research in astronomy" says Dr. Colwell. "By working together in this truly global partnership, the international astronomy community will be able to ensure the research capabilities needed to meet the long-term demands of our scientific enterprise, and that we will be able to study and understand our universe in ways that have previously been beyond our vision". The recent Presidential decree from Chile for AUI and the agreement signed in late 2002 between ESO and the Government of the Republic of Chile (cf. ESO PR 18/02) recognize the interest that the ALMA Project has for Chile, as it will deepen and strengthen the cooperation in scientific and technological matters between the parties. A joint ALMA Board has been established which oversees the realisation of the ALMA project via the management structure. This Board meets for the first time on February 24-25, 2003, at NSF in Washington and will witness this historic event. ALMA: Imaging the Light from Cosmic Dawn ESO PR Photo 06a/03 ESO PR Photo 06a/03 [Preview - JPEG: 588 x 400 pix - 52k [Normal - JPEG: 1176 x 800 pix - 192k] [Hi-Res - JPEG: 3300 x 2244 pix - 2.0M] ESO PR Photo 06b/03 ESO PR Photo 06b/03 [Preview - JPEG: 502 x 400 pix - 82k [Normal - JPEG: 1003 x 800 pix - 392k] [Hi-Res - JPEG: 2222 x 1773 pix - 3.0M] ESO PR Photo 06c/03 ESO PR Photo 06c/03 [Preview - JPEG: 474 x 400 pix - 84k [Normal - JPEG: 947 x 800 pix - 344k] [Hi-Res - JPEG: 2272 x 1920 pix - 2.0M] ESO PR Photo 06d/03 ESO PR Photo 06d/03 [Preview - JPEG: 414 x 400 pix - 69k [Normal - JPEG: 828 x 800 pix - 336k] [HiRes - JPEG: 2935 x 2835 pix - 7.4k] Captions: PR Photo 06a/03 shows an artist's view of the Atacama Large Millimeter Array (ALMA), with 64 12-m antennae. 
PR Photo 06b/03 is another such view, with the array arranged in a compact configuration at the high-altitude Chajnantor site. The ALMA VertexRSI prototype antennae is shown in PR Photo 06c/03 on the Antenna Test Facility (ATF) site at the NRAO Very Large Array (VLA) site near Socorro (New Mexico, USA). The future ALMA site at Llano de Chajnantor at 5000 metre altitude, some 40 km East of the village of San Pedro de Atacama (Chile) is seen in PR Photo 06d/03 - this view was obtained at 11 hrs in the morning on a crisp and clear autumn day (more views of this site are available at the Chajnantor Photo Gallery). The Atacama Large Millimeter Array (ALMA) will be one of astronomy's most powerful telescopes - providing unprecedented imaging capabilities and sensitivity in the corresponding wavelength range, many orders of magnitude greater than anything of its kind today. ALMA will be an array of 64 antennae that will work together as one telescope to study millimeter and sub-millimeter wavelength radiation from space. This radiation crosses the critical boundary between infrared and microwave radiation and holds the key to understanding such processes as planet and star formation, the formation of early galaxies and galaxy clusters, and the formation of organic and other molecules in space. "ALMA will be one of astronomy's premier tools for studying the universe" says Nobel Laureate Riccardo Giacconi, President of AUI (and former ESO Director General (1993-1999)). "The entire astronomical community is anxious to have the unprecedented power and resolution that ALMA will provide". The President of the ESO Council, Professor Piet van der Kruit, agrees: "ALMA heralds a break-through in sub-millimeter and millimeter astronomy, allowing some of the most penetrating studies the Universe ever made. It is safe to predict that there will be exciting scientific surprises when ALMA enters into operation". What is millimeter and sub-millimeter wavelength astronomy? Astronomers learn about objects in space by studying the energy emitted by those objects. Our Sun and the other stars throughout the Universe emit visible light. But these objects also emit other kinds of light waves, such as X-rays, infrared radiation, and radio waves. Some objects emit very little or no visible light, yet are strong sources at other wavelengths in the electromagnetic spectrum. Much of the energy in the Universe is present in the sub-millimeter and millimeter portion of the spectrum. This energy comes from the cold dust mixed with gas in interstellar space. It also comes from distant galaxies that formed many billions of years ago at the edges of the known universe. With ALMA, astronomers will have a uniquely powerful facility with access to this remarkable portion of the spectrum and hence, new and wonderful opportunities to learn more about those objects. Current observatories simply do not have anywhere near the necessary sensitivity and resolution to unlock the secrets that abundant sub-millimeter and millimeter wavelength radiation can reveal. It will take the unparalleled power of ALMA to fully study the cosmic emission at this wavelength and better understand the nature of the universe. Scientists from all over the world will use ALMA. They will compete for observing time by submitting proposals, which will be judged by a group of their peers on the basis of scientific merit. 
ALMA's unique capabilities ALMA's ability to detect remarkably faint sub-millimeter and millimeter wavelength emission and to create high-resolution images of the source of that emission gives it capabilities not found in any other astronomical instruments. ALMA will therefore be able to study phenomena previously out of reach to astronomers and astrophysicists, such as: * Very young galaxies forming stars at the earliest times in cosmic history; * New planets forming around young stars in our galaxy, the Milky Way; * The birth of new stars in spinning clouds of gas and dust; and * Interstellar clouds of gas and dust that are the nurseries of complex molecules and even organic chemicals that form the building blocks of life. How will ALMA work? All of ALMA's 64 antennae will work in concert, taking quick "snapshots" or long-term exposures of astronomical objects. Cosmic radiation from these objects will be reflected from the surface of each antenna and focussed onto highly sensitive receivers cooled to just a few degrees above absolute zero in order to suppress undesired "noise" from the surroundings. There the signals will be amplified many times, digitized, and then sent along underground fiber-optic cables to a large signal processor in the central control building. This specialized computer, called a correlator - running at 16,000 million-million operations per second - will combine all of the data from the 64 antennae to make images of remarkable quality. The extraordinary ALMA site Since atmospheric water vapor absorbs millimeter and (especially) sub-millimeter waves, ALMA must be constructed at a very high altitude in a very dry region of the earth. Extensive tests showed that the sky above the Atacama Desert of Chile has the excellent clarity and stability essential for ALMA. That is why ALMA will be built there, on Llano de Chajnantor at an altitude of 5,000 metres in the Chilean Andes. A series of views of this site, also in high-resolution suitable for reproduction, is available at the Chajnantor Photo Gallery. Timeline for ALMA June 1998: Phase 1 (Research and Development) June 1999: European/American Memorandum of Understanding February 2003: Signature of the bilateral Agreement 2004: Tests of the Prototype System 2007: Initial scientific operation of a partially completed array 2011: End of construction of the array

  2. Two VLT 8.2-m Unit Telescopes in Action

    NASA Astrophysics Data System (ADS)

    1999-04-01

    Visitors at ANTU - Astronomical Images from KUEYEN The VLT Control Room at the Paranal Observatory is becoming a busy place indeed. From here, two specialist teams of ESO astronomers and engineers now operate two VLT 8.2-m Unit Telescopes in parallel, ANTU and KUEYEN (formerly UT1 and UT2, for more information about the naming and the pronunciation, see ESO Press Release 06/99 ). Regular science observations have just started with the first of these giant telescopes, while impressive astronomical images are being obtained with the second. The work is hard, but the mood in the control room is good. Insiders claim that there have even been occasions on which the groups have had a friendly "competition" about which telescope makes the "best" images! The ANTU-team has worked with the FORS multi-mode instrument , their colleagues at KUEYEN use the VLT Test Camera for the ongoing tests of this new telescope. While the first is a highly developed astronomical instrument with a large-field CCD imager (6.8 x 6.8 arcmin 2 in the normal mode; 3.4 x 3.4 arcmin 2 in the high-resolution mode), the other is a less complex CCD camera with a smaller field (1.5 x 1.5 arcmin 2 ), suited to verify the optical performance of the telescope. As these images demonstrate, the performance of the second VLT Unit Telescope is steadily improving and it may not be too long before its optical quality will approach that of the first. First KUEYEN photos of stars and galaxies We present here some of the first astronomical images, taken with the second telescope, KUEYEN, in late March and early April 1999. They reflect the current status of the optical, electronic and mechanical systems, still in the process of being tuned. As expected, the experience gained from ANTU last year has turned out to be invaluable and has allowed good progress during this extremely delicate process. ESO PR Photo 19a/99 ESO PR Photo 19a/99 [Preview - JPEG: 400 x 433 pix - 160k] [Normal - JPEG: 800 x 866 pix - 457k] [High-Res - JPEG: 1985 x 2148 pix - 2.0M] ESO PR Photo 19b/99 ESO PR Photo 19b/99 [Preview - JPEG: 400 x 478 pix - 165k] [Normal - JPEG: 800 x 956 pix - 594k] [High-Res - JPEG: 3000 x 3583 pix - 7.1M] Caption to PR Photo 19a/99 : This photo was obtained with VLT KUEYEN on April 4, 1999. It is reproduced from an excellent 60-second R(ed)-band exposure of the innermost region of a globular cluster, Messier 68 (NGC 4590) , in the southern constellation Hydra (The Water-Snake). The distance to this 8-mag cluster is about 35,000 light years, and the diameter is about 140 light-years. The excellent image quality is 0.38 arcsec , demonstrating a good optical and mechanical state of the telescope, already at this early stage of the commissioning phase. The field measures about 90 x 90 arcsec 2. The original scale is 0.0455 pix/arcsec and there are 2048x2048 pixels in one frame. North is up and East is left. Caption to PR Photo 19b/99 : This photo shows the central region of spiral galaxy ESO 269-57 , located in the southern constellation Centaurus at a distance of about 150 million light-years. Many galaxies are seen in this direction at about the same distance, forming a loose cluster; there are also some fainter, more distant ones in the background. The designation refers to the ESO/Uppsala Survey of the Southern Sky in the 1970's during which over 15,000 southern galaxies were catalogued. ESO 269-57 is a tightly bound object of type Sar , the "r" referring to the "ring" that surrounds the bright centre, that is overexposed here. 
The photo is a composite, based on three exposures (Blue - 600 sec; Yellow-Green - 300 sec; Red - 300 sec) obtained with KUEYEN on March 28, 1999. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec 2. North is up and East is left. ESO PR Photo 19c/99 ESO PR Photo 19c/99 [Preview - JPEG: 400 x 478 pix - 132k] [Normal - JPEG: 800 x 956 pix - 446k] [High-Res - JPEG: 3000 x 3583 pix - 4.6M] ESO PR Photo 19d/99 ESO PR Photo 19d/99 [Preview - JPEG: 400 x 454 pix - 86k] [Normal - JPEG: 800 x 907 pix - 301k] [High-Res - JPEG: 978 x 1109 pix - 282k] Caption to PR Photo 19c/99 : Somewhat further out in space, and right on the border between the southern constellations Hydra and Centaurus lies this knotty spiral galaxy, IC 4248 ; the distance is about 210 million light-years. It was imaged with KUEYEN on March 28, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.75 arcsec and the field is 90 x 90 arcsec 2. North is up and East is left. Caption to PR Photo 19d/99 : This is a close-up view of the double galaxy NGC 5090 (right) and NGC 5091 (left), in the southern constellation Centaurus. The first is a typical S0 galaxy with a bright diffuse centre, surrounded by a fainter envelope of stars (not resolved in this picture). However, some of the starlike objects seen in this region may be globular clusters (or dwarf galaxies) in orbit around NGC 5090. The other galaxy is of type Sa (the spiral structure is more developed) and is seen at a steep angle. The three-colour composite is based on frames obtained with KUEYEN on March 29, 1999, with the same filters and exposure times as used for Photo 19b/99. The image quality is 0.7 arcsec and the field is 90 x 90 arcsec 2. North is up and East is left. ( Note inserted on April 26: The original caption text identified the second galaxy as NGC 5090B - this error has now been corrected. ESO PR Photo 19e/99 ESO PR Photo 19e/99 [Preview - JPEG: 400 x 441 pix - 282k] [Normal - JPEG: 800 x 882 pix - 966k] [High-Res - JPEG: 3000 x 3307 pix - 6,4M] Caption to PR Photo 19e/99 : Wide-angle photo of the second 8.2-m VLT Unit Telescope, KUEYEN , obtained on March 10, 1999, with the main mirror and its cell in place at the bottom of the telescope structure. The Test Camera with which the astronomical images above were made, is positioned at the Cassegrain focus, inside this mirror cell. The Paranal Inauguration on March 5, 1999, took place under this telescope that was tilted towards the horizon to accommodate nearly 300 persons on the observing floor. Astronomical observations with ANTU have started On April 1, 1999, the first 8.2-m VLT Unit Telescope, ANTU , was "handed over" to the astronomers. Last year, about 270 observing proposals competed about the first, precious observing time at Europe's largest optical telescope and more than 100 of these were accommodated within the six-month period until the end of September 1999. The complete observing schedule is available on the web. These observations will be carried out in two different modes. During the Visitor Mode , the astronomers will be present at the telescope, while in the Service Mode , ESO observers perform the observations. The latter procedure allows a greater degree of flexibility and the possibility to assign periods of particularly good observing conditions to programmes whose success is critically dependent on this. The first ten nights at ANTU were allocated to service mode observations. 
After some initial technical problems with the instruments, these have now started. Already in the first night, programmes at ISAAC requiring 0.4 arcsec conditions could be satisfied, and some images better than 0.3 arcsec were obtained in the near-infrared . The first astronomers to use the telescope in visitors mode will be Professors Immo Appenzeller (Heidelberg, Germany; "Photo-polarimetry of pulsars") and George Miley (Leiden, The Netherlands; "Distant radio galaxies") with their respective team colleagues. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory. Note also the dedicated webarea with VLT Information.

  3. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  4. Steganographic embedding in containers-images

    NASA Astrophysics Data System (ADS)

    Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.

    2018-05-01

    Steganography is one of the approaches to protecting information transmitted over a network, but the steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method of data embedding in the frequency domain of images in the JPEG 2000 format. It is proposed to use the method of Benham-Memon-Yeo-Yeung, in which the discrete wavelet transform is used instead of the discrete cosine transform. Two requirements for images are formulated. Structural similarity is chosen as the quality measure of the data embedding. Experiments confirm that satisfying the two requirements allows a high quality of data embedding to be achieved.

  5. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
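
    As a purely numeric illustration of how such a model is used, the sketch below combines a predicted quantization distortion and a predicted channel-error distortion additively and reports the result as PSNR. The additive combination and the two component values are placeholder assumptions, not outputs of the paper's statistical model.

```python
import math

def psnr_from_mse(mse, peak=255.0):
    """Convert a mean squared error into PSNR in decibels."""
    return 10.0 * math.log10(peak ** 2 / mse)

d_quantization = 30.0   # placeholder: predicted MSE from quantization at a chosen rate
d_channel = 12.5        # placeholder: predicted MSE from channel bit errors at a given BER

print(f"predicted end-to-end PSNR = {psnr_from_mse(d_quantization + d_channel):.2f} dB")
```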

  6. Region of interest and windowing-based progressive medical image delivery using JPEG2000

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Mukhopadhyay, Sudipta; Wheeler, Frederick W.; Avila, Ricardo S.

    2003-05-01

    An important telemedicine application is the perusal, for diagnostic purposes, of CT scans stored in digital format on a central server within a healthcare enterprise by radiologists at remote locations, across a bandwidth-constrained network. It is generally expected that a viewing station respond to an image request by displaying the image within 1-2 seconds. Owing to limited bandwidth, it may not be possible to deliver the complete image in such a short period of time with traditional techniques. In this paper, we investigate progressive image delivery solutions based on JPEG 2000. The time taken at different network bandwidths is estimated to compare their relative merits. We further make use of the fact that most medical images are 12-16 bits deep but are ultimately converted to an 8-bit image via windowing for display on the monitor. We propose a windowing progressive RoI technique to exploit this and investigate JPEG 2000 RoI-based compression after applying a favorite or default window setting to the original image. Subsequent requests for different RoIs and window settings are then processed at the server. For the windowing progressive RoI mode, we report a 50% reduction in transmission time.
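
    The windowing step the proposal relies on can be sketched as follows: a 12-16 bit CT slice is mapped through a window centre/width to the 8-bit range used for display, after which RoI-based JPEG 2000 coding operates on the reduced dynamic range. The centre/width values are typical soft-tissue settings chosen as assumptions, and the random array merely stands in for a real slice.

```python
import numpy as np

def apply_window(slice16, centre=40.0, width=400.0):
    """Map a 12-16 bit CT slice (Hounsfield units) to an 8-bit display image."""
    low, high = centre - width / 2.0, centre + width / 2.0
    clipped = np.clip(slice16.astype(np.float64), low, high)
    return np.round((clipped - low) / (high - low) * 255.0).astype(np.uint8)

ct = np.random.randint(-1000, 2000, size=(512, 512), dtype=np.int16)  # stand-in slice
display = apply_window(ct)   # 8-bit image that the RoI-based coder would then compress
```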

  7. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored as either: (a) pre-rendered format, corresponding to a photographic print, or (b) un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depends on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end user applications, such as simple report text viewing and display of a selected image, are not so demanding and generic image formats such as JPEG are sometimes used. However, these are lacking some basic identification requirements. In this paper we make specific proposals for minimal extensions to generic image metadata of value in various domains, which enable safe use in the case of two simple healthcare end user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  8. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, are widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a great deal of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a paired-base posttransform is applied to the DWT coefficients. The pair consists of a DCT base and a Hadamard base, which can be used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered the sparse representation stage of CS, and the posttransform coefficients are resampled by the sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
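
    The base-selection idea can be sketched as follows: apply both candidate posttransforms (DCT and Hadamard) to a block of DWT coefficients and keep the one whose coefficients are sparser under an lp norm. The block size, the value of p, and the random test block are assumptions, and the compressive-sensing measurement stage is omitted.

```python
import numpy as np
from scipy.fftpack import dct
from scipy.linalg import hadamard

def lp_norm(x, p=0.7):
    """Sparsity proxy: sum of |x|^p with p < 1 favours a few large coefficients."""
    return np.sum(np.abs(x) ** p)

block = np.random.randn(8, 8)                        # stand-in DWT coefficient block

# Candidate posttransform 1: separable 2-D DCT.
dct_coeffs = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

# Candidate posttransform 2: normalised Hadamard transform.
H = hadamard(8) / np.sqrt(8)
had_coeffs = H @ block @ H.T

best = "DCT" if lp_norm(dct_coeffs) < lp_norm(had_coeffs) else "Hadamard"
print("selected post-transform:", best)
```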

  9. A JPEG backward-compatible HDR image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner so that it can also carry HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to the state of the art in HDR image compression.
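
    A minimal sketch of the general two-layer idea behind JPEG-backward-compatible HDR, not the paper's codec: a tone-mapped 8-bit base layer that any JPEG viewer can decode, plus a residual that an HDR-aware decoder would combine with it. The Reinhard-style global operator, the log-ratio residual, and the file names are assumptions.

```python
import numpy as np
from PIL import Image

hdr = np.load("scene_hdr.npy")                          # assumed float32 HxWx3 radiance map
luminance = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]

# Simple global tone mapping to an 8-bit base layer (decodable by any JPEG viewer).
mapped = luminance / (1.0 + luminance)
base = np.clip(hdr * (mapped / (luminance + 1e-9))[..., None] * 255.0, 0, 255).astype(np.uint8)
Image.fromarray(base).save("base_layer.jpg", quality=90)

# Residual layer an HDR-aware decoder would combine with the base layer.
base_lum = 0.2126 * base[..., 0] + 0.7152 * base[..., 1] + 0.0722 * base[..., 2]
residual = np.log1p(luminance) - np.log1p(base_lum / 255.0)
np.save("residual_layer.npy", residual)
```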

  10. VLTI First Fringes with Two Auxiliary Telescopes at Paranal

    NASA Astrophysics Data System (ADS)

    2005-03-01

    World's Largest Interferometer with Moving Optical Telescopes on Track Summary The Very Large Telescope Interferometer (VLTI) at Paranal Observatory has just seen another extension of its already impressive capabilities by combining interferometrically the light from two relocatable 1.8-m Auxiliary Telescopes. Following the installation of the first Auxiliary Telescope (AT) in January 2004 (see ESO PR 01/04), the second AT arrived at the VLT platform by the end of 2004. Shortly thereafter, during the night of February 2 to 3, 2005, the two high-tech telescopes teamed up and quickly succeeded in performing interferometric observations. This achievement heralds an era of new scientific discoveries. Both Auxiliary Telescopes will be offered from October 1, 2005 to the community of astronomers for routine observations, together with the MIDI instrument. By the end of 2006, Paranal will be home to four operational ATs that may be placed at 30 different positions and thus be combined in a very large number of ways ("baselines"). This will enable the VLTI to operate with enormous flexibility and, in particular, to obtain extremely detailed (sharp) images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. PR Photo 07a/05: Paranal Observing Platform with AT1 and AT2 PR Photo 07b/05: AT1 and AT2 with Open Domes PR Photo 07c/05: Evening at Paranal with AT1 and AT2 PR Photo 07d/05: AT1 and AT2 under the Southern Sky PR Photo 07e/05: First Fringes with AT1 and AT2 PR Video Clip 01/05: Two ATs at Paranal (Extract from ESO Newsreel 15) A Most Advanced Device ESO PR Video 01/05 ESO PR Video 01/05 Two Auxiliary Telescopes at Paranal [QuickTime: 160 x 120 pix - 37Mb - 4:30 min] [QuickTime: 320 x 240 pix - 64Mb - 4:30 min] ESO PR Photo 07a/05 ESO PR Photo 07a/05 [Preview - JPEG: 493 x400 pix - 44k] [Normal - JPEG: 985 x 800 pix - 727k] [HiRes - JPEG: 5000 x 4060 pix - 13.8M] Captions: ESO PR Video Clip 01/05 is an extract from ESO Video Newsreel 15, released on March 14, 2005. It provides an introduction to the VLT Interferometer (VLTI) and the two Auxiliary Telescopes (ATs) now installed at Paranal. ESO PR Photo 07a/05 shows the impressive ensemble at the summit of Paranal. From left to right, the enclosure of VLT Antu, Kueyen and Melipal, AT1, the VLT Survey Telescope (VST) in the background, AT2 and VLT Yepun. Located at the summit of the 2,600-m high Cerro Paranal in the Atacama Desert (Chile), ESO's Very Large Telescope (VLT) is at the forefront of astronomical technology and is one of the premier facilities in the world for optical and near-infrared observations. The VLT is composed of four 8.2-m Unit Telescope (Antu, Kueyen, Melipal and Yepun). They have been progressively put into service together with a vast suite of the most advanced astronomical instruments and are operated every night in the year. Contrary to other large astronomical telescopes, the VLT was designed from the beginning with the use of interferometry as a major goal. The href="/instruments/vlti">VLT Interferometer (VLTI) combines starlight captured by two 8.2- VLT Unit Telescopes, dramatically increasing the spatial resolution and showing fine details of a large variety of celestial objects. The VLTI is arguably the world's most advanced optical device of this type. 
It has already demonstrated its powerful capabilities by addressing several key scientific issues, such as determining the size and the shape of a variety of stars (ESO PR 22/02, PR 14/03 and PR 31/03), measuring distances to stars (ESO PR 25/04), probing the innermost regions of the proto-planetary discs around young stars (ESO PR 27/04) or making the first detection by infrared interferometry of an extragalactic object (ESO PR 17/03). "Little Brothers" ESO PR Photo 07b/05 ESO PR Photo 07b/05 [Preview - JPEG: 597 x 400 pix - 47k] [Normal - JPEG: 1193 x 800 pix - 330k] [HiRes - JPEG: 5000 x 3354 pix - 10.0M] ESO PR Photo 07c/05 ESO PR Photo 07c/05 [Preview - JPEG: 537 x 400 pix - 31k] [Normal - JPEG: 1074 x 800 pix - 555k] [HiRes - JPEG: 3000 x 2235 pix - 6.0M] ESO PR Photo 07d/05 ESO PR Photo 07d/05 [Preview - JPEG: 400 x 550 pix - 60k] [Normal - JPEG: 800 x 1099 pix - 946k] [HiRes - JPEG: 2414 x 3316 pix - 11.0M] Captions: ESO PR Photo 07b/05 shows VLTI Auxiliary Telescopes 1 and 2 (AT1 and AT2) in the early evening light, with the spherical domes opened and ready for observations. In ESO PR Photo 07c/05, the same scene is repeated later in the evening, with three of the large telescope enclosures in the background. This photo and ESO PR Photo 07c/05 which is a time-exposure with AT1 and AT2 under the beautiful night sky with the southern Milky Way band were obtained by ESO staff member Frédéric Gomté. However, most of the time the large telescopes are used for other research purposes. They are therefore only available for interferometric observations during a limited number of nights every year. Thus, in order to exploit the VLTI each night and to achieve the full potential of this unique setup, some other (smaller), dedicated telescopes were included into the overall VLT concept. These telescopes, known as the VLTI Auxiliary Telescopes (ATs), are mounted on tracks and can be placed at precisely defined "parking" observing positions on the observatory platform. From these positions, their light beams are fed into the same common focal point via a complex system of reflecting mirrors mounted in an underground system of tunnels. The Auxiliary Telescopes are real technological jewels. They are placed in ultra-compact enclosures, complete with all necessary electronics, an air conditioning system and cooling liquid for thermal control, compressed air for enclosure seals, a hydraulic plant for opening the dome shells, etc. Each AT is also fitted with a transporter that lifts the telescope and relocates it from one station to another. It moves around with its own housing on the top of Paranal, almost like a snail. Moreover, these moving ultra-high precision telescopes, each weighing 33 tonnes, fulfill very stringent mechanical stability requirements: "The telescopes are unique in the world", says Bertrand Koehler, the VLTI AT Project Manager. "After being relocated to a new position, the telescope is repositioned to a precision better than one tenth of a millimetre - that is, the size of a human hair! The image of the star is stabilized to better than thirty milli-arcsec - this is how we would see an object of the same size as one of the VLT enclosures on the Moon. Finally, the path followed by the light inside the telescope after bouncing on ten mirrors is stable to better than a few nanometres, which is the size of about one hundred atoms." 
A World Premiere ESO PR Photo 07e/05 ESO PR Photo 07e/05 "First Fringes" with two ATs [Preview - JPEG: 400 x 559 pix - 61k] [Normal - JPEG: 800 x 1134 pix - 357k] Caption: ESO PR Photo 07e/05 The "First Fringes" obtained with the first two VLTI Auxiliary Telescopes, as seen on the computer screen during the observation. The fringe pattern arises when the light beams from the two 1.8-m telescopes are brought together inside the VINCI instrument. The pattern itself contains information about the angular extension of the observed object, here the 6th-magnitude star HD62082. The fringes are acquired by moving a mirror back and forth around the position of equal path length for the two telescopes. One such scan can be seen in the third row window. This pattern results from the raw interferometric signals (the last two rows) after calibration and filtering using the photometric signals (the 4th and 5th row). The first two rows show the spectrum of the fringe pattern signal. More details about the interpretation of this pattern is given in Appendix A of PR 06/01. The possibility to move the ATs around and thus to perform observations with a large number of different telescope configurations ensures a great degree of flexibility, unique for an optical interferometric installation of this size and crucial for its exceptional performance. The ATs may be placed at 30 different positions and thus be combined in a very large number of ways. If the 8.2-m VLT Unit Telescopes are also taken into account, no less than 254 independent pairings of two telescopes ("baselines"), different in length and/or orientation, are available. Moreover, while the largest possible distance between two 8.2-m telescopes (ANTU and YEPUN) is about 130 metres, the maximal distance between two ATs may reach 200 metres. As the achievable image sharpness increases with telescope separation, interferometric observations with the ATs positioned at the extreme positions will therefore yield sharper images than is possible by combining light from the large telescopes alone. All of this will enable the VLTI to obtain exceedingly detailed (sharp) and very complete images of celestial objects - ultimately with a resolution that corresponds to detecting an astronaut on the Moon. Auxiliary Telescope no. 1 (AT1) was installed on the observatory's platform in January 2004. Now, one year later, the second of the four to be delivered, has been integrated into the VLTI. The installation period lasted two months and ended around midnight during the night of February 2-3, 2005. With extensive experience from the installation of AT1, the team of engineers and astronomers were able to combine the light from the two Auxiliary Telescopes in a very short time. In fact, following the necessary preparations, it took them only five minutes to adjust this extremely complex optical system and successfully capture the "First Fringes" with the VINCI test instrument! The star which was observed is named HD62082 and is just at the limit of what can be observed with the unaided eye (its visual magnitude is 6.2). The fringes were as clear as ever, and the VLTI control system kept them stable for more than one hour. Four nights later this exercise was repeated successfully with the mid-infrared science instrument MIDI. Fringes on the star Alphard (Alpha Hydrae) were acquired on February 7 at 4:05 local time. For Roberto Gilmozzi, Director of ESO's La Silla Paranal Observatory, "this is a very important new milestone. 
The introduction of the Auxiliary Telescopes in the development of the VLT Interferometer will bring interferometry out of the specialist experiment and into the domain of common user instrumentation for every astronomer in Europe. Without doubt, it will enormously increase the potentiality of the VLTI." With two more telescopes to be delivered within a year to the Paranal Observatory, ESO cements its position as world-leader in ground-based optical astronomy, providing Europe's scientists with the tools they need to stay at the forefront in this exciting science. The VLT Interferometer will, for example, allow astronomers to study details on the surface of stars or to probe proto-planetary discs and other objects for which ultra-high precision imaging is required. It is premature to speculate on what the Very Large Telescope Interferometer will soon discover, but it is easy to imagine that there may be quite some surprises in store for all of us.

  11. Recce imagery compression options

    NASA Astrophysics Data System (ADS)

    Healy, Donald J.

    1995-09-01

    The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data are available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
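    As a rough illustration of the kind of comparison described above (not the paper's ATARS pipeline or its Huffman/LZW/Rice coders), the sketch below computes simple previous-pixel DPCM deltas on synthetic imagery and compares how well a general-purpose LZ-style coder (zlib) packs the raw pixels versus the deltas as the noise level rises; all data and parameters are made up for illustration.

```python
import zlib
import numpy as np

def dpcm_deltas(row: np.ndarray) -> np.ndarray:
    """Previous-pixel DPCM: delta[i] = row[i] - row[i-1] (first pixel kept as-is)."""
    deltas = np.empty_like(row, dtype=np.int16)
    deltas[0] = row[0]
    deltas[1:] = row[1:].astype(np.int16) - row[:-1].astype(np.int16)
    return deltas

def lossless_ratio(image: np.ndarray) -> float:
    """Compare the zlib-compressed size of raw pixels against row-wise DPCM deltas."""
    pixels = image.astype(np.uint8)
    raw = zlib.compress(pixels.tobytes(), level=9)
    deltas = np.vstack([dpcm_deltas(row) for row in pixels])
    packed = zlib.compress(deltas.tobytes(), level=9)
    return len(raw) / len(packed)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = np.meshgrid(np.arange(256), np.arange(256))
    # Smooth synthetic "imagery" plus noise: more noise makes the deltas less
    # compressible, mirroring the abstract's point that sensor noise limits the gain.
    for sigma in (0.0, 2.0, 8.0):
        img = (128 + 40 * np.sin(x / 20.0) + rng.normal(0, sigma, x.shape)).clip(0, 255)
        print(f"noise sigma={sigma}: raw/delta zlib size ratio = {lossless_ratio(img):.2f}")
```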

  12. Limited distortion in LSB steganography

    NASA Astrophysics Data System (ADS)

    Kim, Younhee; Duric, Zoran; Richards, Dana

    2006-02-01

    It is well known that all information hiding methods that modify the least significant bits introduce distortions into the cover objects. Those distortions have been utilized by steganalysis algorithms to detect that the objects had been modified. It has been proposed that only coefficients whose modification does not introduce large distortions should be used for embedding. In this paper we propose an efficient algorithm for information hiding in the LSBs of JPEG coefficients. Our algorithm uses parity coding to choose the coefficients whose modifications introduce minimal additional distortion. We derive the expected value of the additional distortion as a function of the message length and the probability distribution of the JPEG quantization errors of cover images. Our experiments show close agreement between the theoretical prediction and the actual additional distortion.
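    A minimal sketch of the parity-coding idea, under strong simplifying assumptions: the coefficients are held in a plain integer array, and the per-coefficient embedding cost is supplied externally rather than derived from JPEG quantization errors as in the paper. Each message bit is carried by the LSB parity of a group of coefficients, and when the parity must be changed, the cheapest coefficient in the group is flipped.

```python
import numpy as np

def embed_parity(coeffs: np.ndarray, costs: np.ndarray, bits, group_size: int = 4) -> np.ndarray:
    """Embed one message bit per group of `group_size` coefficients as the LSB parity.

    coeffs : integer array of cover coefficients (a modified copy is returned)
    costs  : same-shaped array; costs[i] = additional distortion if coeffs[i] is flipped
    bits   : iterable of 0/1 message bits
    """
    out = coeffs.copy()
    for g, bit in enumerate(bits):
        lo, hi = g * group_size, (g + 1) * group_size
        if hi > len(out):
            raise ValueError("message too long for this cover")
        parity = int(np.sum(out[lo:hi] & 1) % 2)
        if parity != bit:
            # Flip the LSB of the coefficient whose modification costs least.
            k = lo + int(np.argmin(costs[lo:hi]))
            out[k] ^= 1
    return out

def extract_parity(coeffs: np.ndarray, n_bits: int, group_size: int = 4):
    return [int(np.sum(coeffs[g * group_size:(g + 1) * group_size] & 1) % 2)
            for g in range(n_bits)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cover = rng.integers(-64, 64, size=400)   # stand-in for quantized coefficients
    cost = rng.random(400)                    # stand-in for per-coefficient quantization error
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = embed_parity(cover, cost, message)
    assert extract_parity(stego, len(message)) == message
    print("coefficients changed:", int(np.sum(stego != cover)))
```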

  13. Exploration of available feature detection and identification systems and their performance on radiographs

    NASA Astrophysics Data System (ADS)

    Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.

    2016-10-01

    Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features from the X-ray radiographs.
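    A small example of the sort of test implied above, using OpenCV's ORB detector on a single-channel image; the file name is hypothetical and the radiograph database itself is not available here. MATLAB and VLFeat equivalents would follow the same pattern.

```python
import cv2

# Hypothetical path to one of the compressed JPEG radiographs described above.
img = cv2.imread("radiograph.jpg", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise SystemExit("could not read radiograph.jpg")

# ORB works directly on single-channel images, so no colour information is assumed.
orb = cv2.ORB_create(nfeatures=2000)
keypoints, descriptors = orb.detectAndCompute(img, None)
print(f"detected {len(keypoints)} keypoints")

# Draw the keypoints for a quick visual check of where the detector responds.
vis = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("radiograph_keypoints.png", vis)
```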

  14. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking artifacts at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  15. "First Light" for HARPS at La Silla

    NASA Astrophysics Data System (ADS)

    2003-03-01

    "First Light" for HARPS at La Silla Advanced Planet-Hunting Spectrograph Passes First Tests With Flying Colours Summary The initial commissioning period of the new HARPS spectrograph (High Accuracy Radial Velocity Planet Searcher) of the 3.6-m telescope at the ESO La Silla Observatory has been successfully accomplished in the period February 11 - 27, 2003. This new instrument is optimized to detect planets in orbit around other stars ("exoplanets") by means of accurate (radial) velocity measurements with an unequalled precision of 1 meter per second . This high sensitivity makes it possible to detect variations in the motion of a star at this level, caused by the gravitational pull of one or more orbiting planets, even relatively small ones. "First Light" occurred on February 11, 2003, during the first night of tests. The instrument worked flawlessly and was fine-tuned during subsequent nights, achieving the predicted performance already during this first test run. The measurement of accurate stellar radial velocities is a very efficient way to search for planets around other stars. More than one hundred extrasolar planets have so far been detected , providing an increasingly clear picture of a great diversity of exoplanetary systems . However, current technical limitations have so far prevented the discovery around solar-type stars of exoplanets that are much less massive than Saturn, the second-largest planet in the solar system. HARPS will break through this barrier and will carry this fundamental exploration towards detection of exoplanets with masses like Uranus and Neptune. Moreover, in the case of low-mass stars - like Proxima Centauri, cf. ESO PR 05/03 - HARPS will have the unique capability to detect big "telluric" planets with only a few times the mass of the Earth . The HARPS instrument is being offered to the research community in the ESO member countries, already from October 2003 . PR Photo 08a/03 : The large optical grating of the HARPS spectrograph . PR Photo 08b/03 : The HARPS spectrograph . PR Photo 08c/03 : HARPS spectrum of the star HD100623 ("raw"). PR Photo 08d/03 : Extracted spectral tracing of the star HD100623 . PR Photo 08e/03 : Measured stability of HARPS. The HARPS Spectrograph ESO PR Photo 08a/03 ESO PR Photo 08a/03 [Preview - JPEG: 449 x 400 pix - 58k [Normal - JPEG: 897 x 800 pix - 616k] [Full-Res - JPEG: 1374 x 1226 pix - 1.3M] ESO PR Photo 08b/03 ESO PR Photo 08b/03 [Preview - JPEG: 500 x 400 pix - 83k [Normal - JPEG: 999 x 800 pix - 727k] [Full-Res - JPEG: 1600 x 1281 pix - 1.3M] Captions : PR Photo 08a/03 and PR Photo 08b/03 show the HARPS spectrograph during laboratory tests. The vacuum tank is open so that some of the high-precision components inside can be seen. On PR Photo 08a/03 , the large optical grating by which the incoming stellar light is dispersed is visible on the top of the bench; it measures 200 x 800 mm. HARPS is a unique fiber-fed "echelle" spectrograph able to record at once the visible range of a stellar spectrum (wavelengths from 380 - 690 nm) with very high spectral resolving power (better than R = 100,000 ). Any light losses inside the instrument caused by reflections of the starlight in the various optical components (mirrors and gratings), have been minimised and HARPS therefore works very efficiently . 
First observations ESO PR Photo 08c/03 ESO PR Photo 08c/03 [Preview - JPEG: 400 x 490 pix - 52k] [Normal - JPEG: 800 x 980 pix - 362k] [Full-Res - JPEG: 1976 x 1195 pix - 354k] ESO PR Photo 08d/03 ESO PR Photo 08d/03 [Preview - JPEG: 485 x 400 pix - 53k] [Normal - JPEG: 969 x 800 pix - 160k] Captions: PR Photo 08c/03 displays a HARPS untreated ("raw") exposure of the star HD100623, of the comparatively cool stellar spectral type K0V. The frame shows the complete image as recorded with the 4000 x 4000 pixel CCD detector in the focal plane of the spectrograph. The horizontal white lines correspond to the stellar spectrum, divided into 70 adjacent spectral bands which together cover the entire visible wavelength range from 380 to 690 nm. Some of the stellar absorption lines are seen as dark horizontal features; they are the spectral signatures of various chemical elements in the star's upper layers ("atmosphere"). Bright emission lines from the heavy element thorium are visible between the bands - they are exposed by a lamp in the spectrograph to calibrate the wavelengths. This allows measuring any instrumental drift, thereby guaranteeing the exceedingly high precision that qualifies HARPS. PR Photo 08d/03 displays a small part of the spectrum of the star HD100623 following on-line data extraction (in astronomical terminology: "reduction") of the previous raw frame, shown in PR Photo 08c/03. Several deep absorption lines are clearly visible. During the first commissioning period in February 2003, the high efficiency of HARPS was clearly demonstrated by observations of a G6V-type star of magnitude 8. This star is similar to, but slightly less heavy than, our Sun and about 5 times fainter than the faintest stars visible with the unaided eye. During an exposure lasting only one minute, a signal-to-noise ratio (S/N) of 45 per pixel was achieved - this makes it possible to determine the star's radial velocity with an uncertainty of only ~1 m/s! For comparison, the velocity of a briskly walking person is about 2 m/s. A main performance goal of the HARPS instrument has therefore been reached, already at this early moment. This result also demonstrates an impressive gain in efficiency of no less than about 75 times as compared to that achievable with its predecessor CORALIE. That instrument has been operating very successfully at the 1.2-m Swiss Leonhard Euler telescope at La Silla and has discovered several exoplanets during the past years, see for instance ESO Press Releases (PR 18/98, PR 13/00 and PR 07/01). In practice, this means that this new planet searcher at La Silla can now investigate many more stars in a given observing time and consequently with much increased probability for success. Extraordinary stability ESO PR Photo 08e/03 ESO PR Photo 08e/03 [Preview - JPEG: 478 x 400 pix - 38k] [Normal - JPEG: 955 x 800 pix - 111k] Captions: PR Photo 08e/03 is a powerful demonstration of the extraordinary stability of the HARPS spectrograph. It plots the instrumentally induced velocity change, as measured during one night (9 consecutive hours) in the commissioning period. The drift of the instrument is determined by computing the exact position of the Thorium emission lines. As can be seen, the drift is of the order of 1 m/s during 9 hours and is measured with an accuracy of only 20 cm/s. The goal of measuring velocities of stars with an accuracy comparable to that of a pedestrian has required extraordinary efforts for the design and construction of this instrument. 
Indeed, HARPS is the most stable spectrograph ever built for astronomical applications . A crucial measure in this respect is the location of the HARPS spectrograph in a climatized room in the telescope building. The starlight captured by the 3.6-m telescope is guided to the instrument through a very efficient optical fibre from the telescope's Cassegrain focus. Moreover, the spectrograph is placed inside a vacuum tank to reduce to a minimum any movement of the sensitive optical elements because of changes in pressure and temperature. The temperature of the critical components of HARPS itself is kept very stable, with less than 0.005 degree variation and the spectrum therefore drifts by less than 2 m/s per night. This is a very small value - 1 m/s corresponds to a displacement of the stellar spectrum on the CCD detector by about 1/1000 the size of one CCD pixel, which is equivalent to 15 nm or only about 150 silicon atoms! This drift is continuously measured by means of a Thorium spectrum which is simultaneously recorded on the detector with an accuracy of only 20 cm/s. PR Photo 08e/03 illustrates two fundamental issues: HARPS performs with an overall stability never before reached by any other astronomical spectrograph , and it is possible to measure any nightly drift with an accuracy never achieved before [1]. During this first commissioning period in February 2003, all instrument functions were tested, as well as the complete data flow system hard- and software. Already during the second test night, the data-reduction pipeline was used to obtain the extracted and wavelength-calibrated spectra in a completely automatic way. The first spectra obtained with HARPS will now allow the construction of templates needed to compute the radial velocities of different types of stars with the best efficiency. The second commissioning period in June will then be used to achieve the optimal performance of this new, very powerful instrument. Astronomers in the ESO community will have the opportunity to observe with HARPS from October 1, 2003. Other research opportunities opening This superb radial velocity machine will also play an important role for the study of stellar interiors by asteroseismology. Oscillation modes were recently discovered in the nearby solar-type star Alpha Centauri A from precise radial velocity measurements carried out with CORALIE (see ESO PR 15/01 ). HARPS is able to carry out similar measurements on fainter stars, thus reaching a much wider range of masses, spectral characteristics and ages. Michel Mayor , Director of the Geneva Observatory and co-discoverer of the first known exoplanet, is confident: "With HARPS operating so well already during the first test nights, there is every reason to believe that we shall soon see some breakthroughs in this field also" . The HARPS Consortium HARPS has been designed and built by an international consortium of research institutes, led by the Observatoire de Genève (Switzerland) and including Observatoire de Haute-Provence (France), Physikalisches Institut der Universität Bern (Switzerland), the Service d'Aeronomie (CNRS, France), as well as ESO La Silla and ESO Garching . The HARPS consortium has been granted 100 observing nights per year during a 5-year period at the ESO 3.6-m telescope to perform what promises to be the most ambitious systematic search for exoplanets so far implemented worldwide . 
The project team is directed by Michel Mayor (Principal Investigator), Didier Queloz (Mission Scientist), Francesco Pepe (Project Managers Consortium) and Gero Rupprecht (ESO representative).

  16. Antarctica

    Atmospheric Science Data Center

    2013-04-16

    article title:  Twilight in Antarctica     View larger JPEG ... SpectroRadiometer (MISR) instrument on board Terra. The Ross Ice Shelf and Transantarctic Mountains are illuminated by low Sun. MISR was ...

  17. Coming Home at Paranal

    NASA Astrophysics Data System (ADS)

    2002-02-01

    Unique "Residencia" Opens at the VLT Observatory Summary The Paranal Residencia at the ESO VLT Observatory is now ready and the staff and visitors have moved into their new home. This major architectural project has the form of a unique subterranean construction with a facade opening towards the Pacific Ocean , far below at a distance of about 12 km. Natural daylight is brought into the building through a 35-m wide glass-covered dome, a rectangular courtyard roof and various skylight hatches. Located in the middle of the Atacama Desert, one of the driest areas on Earth, the Residencia incorporates a small garden and a swimming pool, allowing the inhabitants to retreat from time to time from the harsh outside environment. Returning from long shifts at the VLT and other installations on the mountain, here they can breathe moist air and receive invigorating sensory impressions. With great originality of the design, it has been possible to create an interior with a feeling of open space - this is a true "home in the desert" . Moreover, with strict ecological power, air and water management , the Paranal Residencia has already become a symbol of innovative architecture in its own right. Constructed with robust, but inexpensive materials, it is an impressively elegant and utilitarian counterpart to the VLT high-tech facilities poised some two hundred meters above, on the top of the mountain. PR Photo 05a/02 : Aerial view of the Paranal Observatory area. PR Photo 05b/02 : Aerial view of the Paranal Residencia . PR Photo 05c/02 : Outside view of the Paranal Residencia . PR Photo 05d/02 : The Entry Hall (fisheye view). PR Photo 05e/02 : The Entry Hall with garden and pool. PR Photo 05f/02 : The Reception Area . PR Photo 05g/02 : The Reception Area - decoration. PR Photo 05h/02 : The Reception Area - decoration. PR Photo 05i/02 : The Reception Area - decoration. PR Photo 05j/02 : View towards the Cantine . PR Photo 05k/02 : View towards the Kitchen . PR Photo 05l/02 : View of the Corridors . PR Photo 05m/02 : A Bedroom . PR Photo 05n/02 : The main facade in evening light . PR Photo 05o/02 : View from the Observing Platform towards the Residencia in evening light. The Paranal Residencia ESO PR Photo 05a/02 ESO PR Photo 05a/02 [Preview - JPEG: 611 x 400 pix - 73k] [Normal - JPEG: 1222 x 800 pix - 936k] [HiRes - JPEG: 3000 x 1964 pix - 4.6M] ESO PR Photo 05b/02 ESO PR Photo 05b/02 [Preview - JPEG: 619 x 400 pix - 92k] [Normal - JPEG: 1238 x 800 pix - 944k] [HiRes - JPEG: 3000 x 1938 pix - 3.1M] Caption : PR Photo 05a/02 shows an aerial view of the Paranal Observatory. Below the observing platform at the top of the mountain - at a distance of about 3 km - is the Base Camp with the technical area (to the right of the road) and the new Residencia building (left of the road). To the extreme left is a temporary container camp of the construction company. PR Photo 05b/02 shows the Base Camp in more detail. In the course of 2002, many of the containers on the right side will be removed. The square building in the foreground to the left of the entrance gate is the future "Visitors' Centre".- A dummy 8.2-m concrete mirror is also placed here. These photos were made by ESO engineer Gert Hüdepohl during the final construction phase in late 2001. Ever since the construction of the ESO Very Large Telescope (VLT) at Paranal began in 1991, staff and visitors have resided in cramped containers in the "Base Camp". 
This is one of the driest and most inhospitable areas in the Chilean Atacama Desert and eleven years is a long time to wait. However, there was never any doubt that the construction of the telescope itself must have absolute priority. Nevertheless, with the major technical installations in place, the time had come to develop a more comfortable and permanent base of living at Paranal, outside the telescope area. A unique architectural concept The concept for the Paranal Residencia emerged from a widely noted international architectural competition, won by Auer and Weber Freie Architekten from Munich (Germany), and with Dominik Schenkirz as principal designer. The interior furnishing and decoration was awarded to the Chilean architect Paula Gutierrez. The construction began in late 1998. Information about this work and several photos illustrating the progress have been published as PR Photos 31a-d/99, PR Photo 43h/99 and PR Photos 04b-d/01. Taking advantage of an existing depression in the ground, the architects created a unique subterranean construction with a single facade opening towards the Pacific Ocean, far below at a distance of about 12 km. It has the same colour as the desert and blends perfectly into the surroundings. The Paranal Residencia is elegant, with robust and inexpensive materials. Natural daylight is brought into the building through a 35-m wide glass-covered dome, a rectangular courtyard roof and various skylight hatches. The great originality of this design has made it possible to create an interior with a feeling of open space, despite the underground location. Some building characteristics are indicated below. Facilities at the Residencia To the visitor who arrives at the Paranal Residencia from the harsh natural environment, the welcoming feeling under the dome is unexpected and instantly pleasant. This is a true "oasis" within coloured concrete walls and the air is agreeably warm and moist. There is a strong sense of calm and serenity and, above all, a feeling of coming home. At night, the lighting below the roofing closure fabric is spectacular and the impression on the mind is overwhelming. The various facilities are integrated over four floors below ground level. They include small, but nice and simple bedrooms, offices, meeting points, a restaurant, a library, a reception area, a cinema and other recreational areas. The natural focal point is located next to the reception at the entrance. The dining room articulates the building at the -2 level and view points through the facade form bridges between the surrounding Paranal desert and the interior. Simple, but elegant furnishing and specially manufactured carpeting complement a strong design of perspectives. The Republic of Chile, the host state for the ESO Paranal Observatory, is present with its emblematic painter Roberto Matta. Additional space is also provided for a regional art and activity display. The staff moved out of the containers and into their new home in mid-January 2002. Today, the Paranal Residencia has already become a symbol of innovative architecture in its own right, an impressively elegant and utilitarian counterpart to the VLT high-tech facilities poised some two hundred meters above, on the top of the mountain. Some building characteristics:
* Construction initiated in 1998
* Area: 10000 m²
* Total cost: 12 Million Euro (less than 2% of the total cost of the VLT project), approx. 1200 Euro/m²
* 108 bedrooms, each with 16 m²
* Cantine capacity for 200 persons
* 22 offices
* 5 terraces/viewpoints
* 70-seat cinema room
* Multiple meeting areas
* Double room library
* Building management control for the environment and the lighting
* Swimming pool; water treatment and grey water recirculation
* Modular concept with potential for extension to 200 rooms
* Completely light-tight and with a high level of sound insulation
* Communication network with phone and TV-set in each room
* Main contractors: Vial y Vives, Petricio Industrial, Koch
The Paranal Residencia: A Photo Collection

  18. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    NASA Astrophysics Data System (ADS)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging, because they produce large amounts of data such as motion (cine) images of 30 frames per second, 640 x 480 in resolution, with 24-bit color. They also require sufficient image quality for clinical review. We have developed a PACS that is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and have investigated suitable compression methods and compression rates for clinical image review. Results show that clinicians require the capability for frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in only one frame. In order to satisfy this requirement, we chose motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between the original and 1:20 lossy compressed JPEG images, although the quality was still acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data amount and cost while maintaining quality for clinical review.
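    A minimal sketch of writing a cine loop as Motion JPEG with OpenCV, at the 640 x 480, 30 frames-per-second, 24-bit parameters quoted above; the frames here are synthetic, and the actual PACS capture chain and the 1:10-1:20 rate tuning are outside the sketch.

```python
import cv2
import numpy as np

width, height, fps = 640, 480, 30
fourcc = cv2.VideoWriter_fourcc(*"MJPG")   # Motion JPEG: each frame is JPEG-coded independently
writer = cv2.VideoWriter("cine_loop.avi", fourcc, fps, (width, height))

# Stand-in for captured ultrasound/endoscopy frames (24-bit colour as in the abstract).
rng = np.random.default_rng(0)
for i in range(fps * 2):                    # two seconds of synthetic frames
    frame = rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)
    cv2.putText(frame, f"frame {i}", (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (255, 255, 255), 2)
    writer.write(frame)
writer.release()

# Frame-by-frame review, as the clinicians in the study required.
cap = cv2.VideoCapture("cine_loop.avi")
n = 0
while cap.read()[0]:
    n += 1
print(f"wrote and re-read {n} frames")
```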

  19. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    NASA Astrophysics Data System (ADS)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transformation, has advantages compared to other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate the image quality by using a computer aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We have developed a prototype CAD system to classify these images into benign ones and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios, from lossless to lossy, then used the CAD system to classify the cases at each compression ratio and compared the resulting ROC curves. Support vector machine (SVM) and neural networks (NN) were used to classify the malignancy of input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.
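    A hedged sketch of the evaluation loop described above: classify cases at several compression ratios and compare the resulting AUC values. It uses scikit-learn's SVC and roc_auc_score on synthetic features whose class separation is artificially degraded as the ratio increases, since the CT cases and the actual CAD features are not available here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases, n_features = 77, 12                 # 77 nodule cases, as in the study
labels = rng.integers(0, 2, n_cases)         # 0 = benign, 1 = malignant (synthetic)

def features_at_ratio(ratio: float) -> np.ndarray:
    """Stand-in for nodule features extracted from images compressed at `ratio`:1.

    Class separation shrinks as the ratio grows, mimicking the reported drop in AUC.
    """
    separation = 1.5 / (1.0 + 0.05 * ratio)
    return rng.normal(0, 1, (n_cases, n_features)) + separation * labels[:, None]

for ratio in (1, 5, 10, 20, 40):
    X = features_at_ratio(ratio)
    scores = cross_val_predict(SVC(probability=True), X, labels,
                               cv=5, method="predict_proba")[:, 1]
    print(f"compression {ratio:>2}:1  AUC = {roc_auc_score(labels, scores):.3f}")
```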

  20. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    PubMed

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  1. Edge-Based Image Compression with Homogeneous Diffusion

    NASA Astrophysics Data System (ADS)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
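    A minimal sketch of the reconstruction step only (not the codec): grey values are kept on a mask of pixels around an edge, and the remaining pixels are filled by iterating toward the steady state of homogeneous diffusion, i.e. the discrete Laplace equation. The Marr-Hildreth edge extraction and the JBIG and PAQ coding stages are not reproduced, and the image and mask below are synthetic.

```python
import numpy as np

def homogeneous_diffusion_inpaint(values: np.ndarray, known: np.ndarray,
                                  n_iter: int = 5000) -> np.ndarray:
    """Fill unknown pixels by repeated 4-neighbour averaging (Jacobi iterations of
    the discrete Laplace equation), keeping the known pixels fixed."""
    u = np.where(known, values, float(values[known].mean()))
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")         # zero-flux (reflecting) boundaries
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        u = np.where(known, values, avg)
    return u

if __name__ == "__main__":
    # Cartoon-like test image: two flat regions separated by a vertical step edge.
    img = np.zeros((64, 64))
    img[:, 32:] = 200.0
    # "Stored" data: grey values on both sides of the edge only.
    known = np.zeros_like(img, dtype=bool)
    known[:, 30:34] = True
    rec = homogeneous_diffusion_inpaint(img, known)
    print("max reconstruction error:", float(np.abs(rec - img).max()))
```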

  2. ALMA On the Move - ESO Awards Important Contract for the ALMA Project

    NASA Astrophysics Data System (ADS)

    2005-12-01

    Only two weeks after awarding its largest-ever contract for the procurement of antennas for the Atacama Large Millimeter Array project (ALMA), ESO has signed a contract with Scheuerle Fahrzeugfabrik GmbH, a world-leader in the design and production of custom-built heavy-duty transporters, for the provision of two antenna transporting vehicles. These vehicles are of crucial importance for ALMA. ESO PR Photo 41a/05 ESO PR Photo 41a/05 The ALMA Transporter (Artist's Impression) [Preview - JPEG: 400 x 756 pix - 234k] [Normal - JPEG: 800 x 1512 pix - 700k] [Full Res - JPEG: 1768 x 3265 pix - 2.3M] Caption: Each of the ALMA transporters will be 10 m wide, 4.5 m high and 16 m long. "The timely awarding of this contract is most important to ensure that science operations can commence as planned," said ESO Director General Catherine Cesarsky. "This contract thus marks a further step towards the realization of the ALMA project." "These vehicles will operate in a most unusual environment and must live up to very strict demands regarding performance, reliability and safety. Meeting these requirements is a challenge for us, and we are proud to have been selected by ESO for this task," commented Hans-Jörg Habernegg, President of Scheuerle GmbH. ESO PR Photo 41b/05 ESO PR Photo 41b/05 Signing the Contract [Preview - JPEG: 400 x 572 pix - 234k] [Normal - JPEG: 800 x 1143 pix - 700k] [HiRes - JPEG: 4368 x 3056 pix - 2.3M] Caption: (left to right) Mr Thomas Riek, Vice-President of Scheuerle GmbH, Dr Catherine Cesarsky, ESO Director General and Mr Hans-Jörg Habernegg, President of Scheuerle GmbH. When completed on the high-altitude Chajnantor site in Chile, ALMA is expected to comprise more than 60 antennas, which can be placed in different locations on the plateau but which work together as one giant telescope. Changing the relative positions of the antennas and thus also the configuration of the array allows for different observing modes, comparable to using a zoom lens, offering different degrees of resolution and sky coverage as needed by the astronomers. The ALMA Antenna Transporters allow for moving the antennas between the different pre-defined antenna positions. They will also be used for transporting antennas between the maintenance area at 2900 m elevation and the "high site" at 5000 m above sea level, where the observations are carried out. Given their important functions, both for the scientific work and in transporting high-tech antennas with the required care, the vehicles must live up to very demanding operational requirements. Each transporter has a mass of 150 tonnes and is able to lift and transport antennas of 110 tonnes. They must be able to place the antennas on the docking pads with millimetric precision. At the same time, they must be powerful enough to climb 2000 m reliably and safely with their heavy and valuable load, putting extraordinary demands on the 500 kW diesel engines. This means negotiating a 28 km long high-altitude road with an average slope of 7 %. Finally, as they will be operated at an altitude with significantly reduced oxygen levels, a range of redundant safety devices protect both personnel and equipment from possible mishaps or accidents. The first transporter is scheduled to be delivered in the summer of 2007 to match the delivery of the first antennas to Chajnantor. The ESO contract has a value of approx. 5.5 m Euros.

  3. A Cosmic Baby-Boom

    NASA Astrophysics Data System (ADS)

    2005-09-01

    Large Population of Galaxies Found in the Young Universe with ESO's VLT The Universe was a more fertile place soon after it was formed than has previously been suspected. A team of French and Italian astronomers [1] indeed made the surprising discovery of a large and previously unknown population of distant galaxies, observed when the Universe was only 10 to 30% of its present age. ESO PR Photo 29a/05 ESO PR Photo 29a/05 New Population of Distant Galaxies [Preview - JPEG: 400 x 424 pix - 191k] [Normal - JPEG: 800 x 847 pix - 449k] [HiRes - JPEG: 2269 x 2402 pix - 2.0M] ESO PR Photo 29b/05 ESO PR Photo 29b/05 Average Spectra of Distant Galaxies [Preview - JPEG: 400 x 506 pix - 141k] [Normal - JPEG: 800 x 1012 pix - 320k] This breakthrough is based on observations made with the Visible Multi-Object Spectrograph (VIMOS) as part of the VIMOS VLT Deep Survey (VVDS). The VVDS started in early 2002 on Melipal, one of the 8.2-m telescopes of ESO's Very Large Telescope Array [2]. In a total sample of about 8,000 galaxies selected only on the basis of their observed brightness in red light, almost 1,000 bright and vigorously star-forming galaxies were discovered that were formed between 9 and 12 billion years ago (i.e. about 1,500 to 4,500 million years after the Big Bang). "To our surprise," says Olivier Le Fèvre from the Laboratoire d'Astrophysique de Marseille (France) and co-leader of the VVDS project, "this is two to six times higher than had been found previously. These galaxies had been missed because previous surveys had selected objects in a much more restrictive manner than we did. And they did so to accommodate the much lower efficiency of the previous generation of instruments." While observations and models have consistently indicated that the Universe had not yet formed many stars in the first billion years of cosmic time, the discovery announced today by scientists calls for a significant change in this picture. The astronomers indeed find that stars formed two to three times faster than previously estimated. "These observations will demand a profound reassessment of our theories of the formation and evolution of galaxies in a changing Universe", says Gianpaolo Vettolani, the other co-leader of the VVDS project, working at INAF-IRA in Bologna (Italy). These results are reported in the September 22 issue of the journal Nature (Le Fèvre et al., "A large population of galaxies 9 to 12 billion years back in the life of the Universe").

  4. Artifacts in slab average-intensity-projection images reformatted from JPEG 2000 compressed thin-section abdominal CT data sets.

    PubMed

    Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon

    2008-06-01

    The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. Visually lossless threshold for the compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
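    The HDR-VDP metric and the CT data cannot be reproduced here, but the other quantitative measure used above, PSNR, is straightforward; the sketch below assumes 12-bit CT values stored in 16-bit arrays and uses a synthetic "compressed" copy for illustration.

```python
import numpy as np

def psnr(original: np.ndarray, distorted: np.ndarray, peak: float) -> float:
    """Peak signal-to-noise ratio in dB; `peak` is the maximum possible pixel value."""
    mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a 12-bit CT section and a lossy-compressed copy of it.
    original = rng.integers(0, 4096, size=(512, 512)).astype(np.uint16)
    noise = rng.normal(0, 5, original.shape).astype(np.int32)
    compressed = (original.astype(np.int32) + noise).clip(0, 4095).astype(np.uint16)
    print(f"PSNR = {psnr(original, compressed, peak=4095):.2f} dB")
```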

  5. New Mexico: Los Alamos

    Atmospheric Science Data Center

    2014-05-15

    article title:  Los Alamos, New Mexico     View Larger JPEG image ... kb) Multi-angle views of the Fire in Los Alamos, New Mexico, May 9, 2000. These true-color images covering north-central New Mexico ...

  6. An RBF-based compression method for image-based relighting.

    PubMed

    Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung

    2006-04-01

    In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.

  7. A software platform for the analysis of dermatology images

    NASA Astrophysics Data System (ADS)

    Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon

    2017-11-01

    The purpose of this paper is to present a software platform developed in Python programming environment that can be used for the processing and analysis of dermatology images. The platform provides the capability for reading a file that contains a dermatology image. The platform supports image formats such as Windows bitmaps, JPEG, JPEG2000, portable network graphics, TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool to a dermatologist. Furthermore, it could be used to classify images including from other anatomical parts such as breast or lung, after proper re-training of the classification algorithms.
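    A small sketch of the automated ROI step described above (smoothing followed by thresholding), using OpenCV with Otsu's threshold and largest-connected-component selection as one concrete choice; the file name is hypothetical and the platform's actual implementation may differ.

```python
import cv2

# Hypothetical dermatology image; the platform accepts BMP, JPEG, JPEG 2000, PNG and TIFF.
img = cv2.imread("lesion.jpg", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise SystemExit("could not read lesion.jpg")

# 1. Smooth to suppress hair and noise before segmentation.
smoothed = cv2.GaussianBlur(img, (7, 7), 0)

# 2. Threshold; Otsu picks the threshold automatically (lesions are usually darker
#    than the surrounding skin, hence THRESH_BINARY_INV makes the lesion foreground).
_, mask = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# 3. Keep the largest connected component as the region of interest.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
if n > 1:
    largest = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()
    roi_mask = (labels == largest).astype("uint8") * 255
    cv2.imwrite("roi_mask.png", roi_mask)
    print("ROI area (pixels):", int(stats[largest, cv2.CC_STAT_AREA]))
```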

  8. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method by combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
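    A minimal illustration of the least-square prediction idea only, without the paper's geometry-adaptive and quadtree segmentation: one global LS predictor is fitted from three causal neighbours over a synthetic image, and the spread of the residuals is compared with the spread of the raw pixels.

```python
import numpy as np

def ls_predictor_residuals(img: np.ndarray):
    """Fit one least-squares predictor x(i,j) ~ w0*W + w1*N + w2*NW and return residuals."""
    x = img.astype(np.float64)
    target = x[1:, 1:].ravel()
    neigh = np.stack([x[1:, :-1].ravel(),    # west neighbour
                      x[:-1, 1:].ravel(),    # north neighbour
                      x[:-1, :-1].ravel()],  # north-west neighbour
                     axis=1)
    w, *_ = np.linalg.lstsq(neigh, target, rcond=None)
    residuals = target - neigh @ w
    return w, residuals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Smooth synthetic "medical" image: strong pixel correlation favours the predictor.
    yy, xx = np.mgrid[0:256, 0:256]
    img = 1000 + 300 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / 4000.0) \
          + rng.normal(0, 3, (256, 256))
    w, res = ls_predictor_residuals(img)
    print("weights:", np.round(w, 3))
    print("pixel std:", round(float(img.std()), 2), " residual std:", round(float(res.std()), 2))
```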

  9. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for the evaluation of quality, file formats, and compression, as well as by a large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the wide spread of HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of these subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
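    A sketch of one common global tone-mapping operator (a Reinhard-style curve) that could produce the LDR base layer a backward-compatible JPEG file would carry; the paper's subjective study and its actual file-format extension are not reproduced, and the luminance weights, key value and gamma below are assumptions chosen for illustration.

```python
import numpy as np

def reinhard_tonemap(hdr: np.ndarray, key: float = 0.18) -> np.ndarray:
    """Map linear HDR RGB (float, arbitrary range) to 8-bit LDR with a global operator."""
    # Luminance (Rec. 709 weights) and its log-average, used to normalise exposure.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))
    scaled = key / log_avg * lum
    mapped = scaled / (1.0 + scaled)                  # compress highlights smoothly
    ratio = np.where(lum > 0, mapped / (lum + 1e-12), 0.0)
    ldr = np.clip(hdr * ratio[..., None], 0.0, 1.0)
    return (255.0 * ldr ** (1.0 / 2.2)).astype(np.uint8)   # display gamma before JPEG coding

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hdr = rng.gamma(shape=1.5, scale=2.0, size=(64, 64, 3))  # synthetic HDR radiance data
    ldr = reinhard_tonemap(hdr)
    print("LDR range:", int(ldr.min()), "-", int(ldr.max()))
```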

  10. JPEG XS, a new standard for visually lossless low-latency lightweight image compression

    NASA Astrophysics Data System (ADS)

    Descampe, Antonin; Keinert, Joachim; Richter, Thomas; Fößel, Siegfried; Rouvroy, Gaël.

    2017-09-01

    JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to Xilinx Artix 7 or 25% of an FPGA similar to Altera Cyclone 5. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.

  11. Parallel efficient rate control methods for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split into code blocks, and subsequently, optimally truncate the set of generated bit streams according to the maximum target bit rate constraint. The literature proposes various strategies on how to estimate ahead of time where a block will get truncated in order to stop the execution prematurely and save time. However, none of them have been defined bearing in mind a parallel implementation. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed in GPUs. In order to do that, the design of our GPU-based codec is extended, allowing the process to be stopped at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to a speedup of up to 40% with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out, and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.

  12. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    PubMed

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu

    2016-12-20

    In this paper, we evaluate lossless and lossy compression techniques to compress quantitative phase images of red blood cells (RBCs) obtained by an off-axis digital holographic microscope (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the case of lossless compression, predictive coding of JPEG lossless (JPEG-LS), JPEG2000 (JP2k), and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JP2k outperforms the other methods by achieving the best CR. In the lossy case, JP2k and JP3D with different CRs are examined. Because some data is lost in lossy compression, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2k outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time. In addition, our compression results with both algorithms demonstrate that with high CR values the three-dimensional profile of the RBC can be preserved and the morphological and biochemical parameters can still be within the range of reported values.

  13. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.

  14. Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.

    PubMed

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya

    2013-03-01

    We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.

  15. Baseline coastal oblique aerial photographs collected from Calcasieu Lake, Louisiana, to Brownsville, Texas, September 9-10, 2008

    USGS Publications Warehouse

    Morgan, Karen L.M.; Westphal, Karen A.

    2016-04-28

    The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 9-10, 2008, the USGS conducted an oblique aerial photographic survey from Calcasieu Lake, Louisiana, to Brownsville, Texas, aboard a Cessna C-210 (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes of the beach and nearshore area, and the data can be used in the assessment of future coastal change.The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet.In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. The KML file can be found in the kml folder.
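    The report states that ExifTool was used to write the acquisition time, GPS position, photographer and rights information into each JPEG header. A hedged sketch of doing the same from Python is shown below; it assumes ExifTool is installed on the system, and the tag values and file name are purely illustrative, not the actual USGS metadata or workflow.

```python
import subprocess

def tag_photo(path, lat, lon, when, photographer, keywords):
    """Write basic header metadata into a JPEG with ExifTool (must be installed).

    Standard EXIF/IPTC tag names are used; the values are illustrative only.
    """
    cmd = [
        "exiftool",
        f"-DateTimeOriginal={when}",
        f"-GPSLatitude={abs(lat)}", f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}", f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
        f"-Artist={photographer}",
        "-Credit=U.S. Geological Survey",
        "-Caption-Abstract=Oblique aerial photograph of the coast",
        "-overwrite_original",
    ] + [f"-Keywords={k}" for k in keywords]
    cmd.append(path)
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical file name and position; the real surveys derive these from navigation files.
    tag_photo("photo_0001.jpg", lat=29.55, lon=-93.30, when="2008:09:09 15:30:00",
              photographer="K.A. Westphal", keywords=["baseline", "coastal", "Louisiana"])
```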

  16. Post-Hurricane Sandy coastal oblique aerial photographs collected from Cape Lookout, North Carolina, to Montauk, New York, November 4-6, 2012

    USGS Publications Warehouse

    Morgan, Karen L.M.; Krohn, M. Dennis

    2014-01-01

    The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On November 4-6, 2012, approximately one week after the landfall of Hurricane Sandy, the USGS conducted an oblique aerial photographic survey from Cape Lookout, N.C., to Montauk, N.Y., aboard a Piper Navajo Chieftain (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect post-Hurricane Sandy data for assessing incremental changes in the beach and nearshore area since the last survey in 2009. The data can be used in the assessment of future coastal change. The photographs provided here are Joint Photographic Experts Group (JPEG) images. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of the feature in the images. These photos document the configuration of the barrier islands and other coastal features at the time of the survey. Exiftool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, image name, date, and time each of the 9,481 photographs were taken, along with links to each photograph. The photographs are organized in segments, also referred to as contact sheets, and represent approximately 5 minutes of flight time. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.

  17. TreeRipper web application: towards a fully automated optical tree recognition software.

    PubMed

    Hughes, Joseph

    2011-05-20

    Relationships between species, genes and genomes have been printed as trees for over a century. Whilst this may have been the best format for exchanging and sharing phylogenetic hypotheses during the 20th century, the worldwide web now provides faster and automated ways of transferring and sharing phylogenetic knowledge. However, novel software is needed to defrost these published phylogenies for the 21st century. TreeRipper is a simple website for the fully-automated recognition of multifurcating phylogenetic trees (http://linnaeus.zoology.gla.ac.uk/~jhughes/treeripper/). The program accepts a range of input image formats (PNG, JPG/JPEG or GIF). The underlying command-line C++ program follows a number of cleaning steps to detect lines, remove node labels, patch up broken lines and corners, and detect line edges. The edge contour is then determined to detect the branch lengths, tip label positions and the topology of the tree. Optical Character Recognition (OCR) is used to convert the tip labels into text with the freely available tesseract-ocr software. 32% of the images meeting the prerequisites for TreeRipper were successfully recognised; the largest tree had 115 leaves. Despite the diversity of ways phylogenies have been illustrated, which makes the design of fully automated tree recognition software difficult, TreeRipper is a step towards automating the digitization of past phylogenies. We also provide a dataset of 100 tree images and associated tree files for training and/or benchmarking future software. TreeRipper is an open source project licensed under the GNU General Public Licence v3.
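    TreeRipper's C++ pipeline is not reproduced here; the sketch below covers only the final step named above, converting a cropped tip-label image to text with the tesseract-ocr engine, accessed through the pytesseract wrapper (an assumption for this sketch; TreeRipper invokes tesseract directly). The file name is hypothetical.

```python
from PIL import Image
import pytesseract  # thin wrapper around the tesseract-ocr engine named in the abstract

# Hypothetical crop containing one tip label, produced by the earlier line-detection steps.
label_img = Image.open("tip_label_crop.png").convert("L")

# Upscale and binarise a little before OCR; small antialiased labels OCR poorly otherwise.
label_img = label_img.resize((label_img.width * 3, label_img.height * 3))
label_img = label_img.point(lambda v: 255 if v > 160 else 0)

text = pytesseract.image_to_string(label_img, config="--psm 7")  # psm 7: single text line
print("recognised tip label:", text.strip())
```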

  18. Fourth Light at Paranal!

    NASA Astrophysics Data System (ADS)

    2000-09-01

    VLT YEPUN Joins ANTU, KUEYEN and MELIPAL It was a historical moment last night (September 3 - 4, 2000) in the VLT Control Room at the Paranal Observatory , after nearly 15 years of hard work. Finally, four teams of astronomers and engineers were sitting at the terminals - and each team with access to an 8.2-m telescope! From now on, the powerful "Paranal Quartet" will be observing night after night, with a combined mirror surface of more than 210 m 2. And beginning next year, some of them will be linked to form part of the unique VLT Interferometer with unparalleled sensitivity and image sharpness. YEPUN "First Light" Early in the evening, the fourth 8.2-m Unit Telescope, YEPUN , was pointed to the sky for the first time and successfully achieved "First Light". Following a few technical exposures, a series of "first light" photos was made of several astronomical objects with the VLT Test Camera. This instrument was also used for the three previous "First Light" events for ANTU ( May 1998 ), KUEYEN ( March 1999 ) and MELIPAL ( January 2000 ). These images served to evaluate provisionally the performance of the new telescope, mainly in terms of mechanical and optical quality. The ESO staff were very pleased with the results and pronounced YEPUN fit for the subsequent commissioning phase. When the name YEPUN was first given to the fourth VLT Unit Telescope, it was supposed to mean "Sirius" in the Mapuche language. However, doubts have since arisen about this translation and a detailed investigation now indicates that the correct meaning is "Venus" (as the Evening Star). For a detailed explanation, please consult the essay On the Meaning of "YEPUN" , now available at the ESO website. The first images At 21:39 hrs local time (01:39 UT), YEPUN was turned to point in the direction of a dense Milky Way field, near the border between the constellations Sagitta (The Arrow) and Aquila (The Eagle). A guide star was acquired and the active optics system quickly optimized the mirror system. At 21:44 hrs (01:44 UT), the Test Camera at the Cassegrain focus within the M1 mirror cell was opened for 30 seconds, with the planetary nebula Hen 2-428 in the field. The resulting "First Light" image was immediately read out and appeared on the computer screen at 21:45:53 hrs (01:45:53 UT). "Not bad! - "Very nice!" were the first, "business-as-usual"-like comments in the room. The zenith distance during this observation was 44° and the image quality was measured as 0.9 arcsec, exactly the same as that registered by the Seeing Monitoring Telescope outside the telescope building. There was some wind. ESO PR Photo 22a/00 ESO PR Photo 22a/00 [Preview - JPEG: 374 x 400 pix - 128k] [Normal - JPEG: 978 x 1046 pix - 728k] Caption : ESO PR Photo 22a/00 shows a colour composite of some of the first astronomical exposures obtained by YEPUN . The object is the planetary nebula Hen 2-428 that is located at a distance of 6,000-8,000 light-years and seen in a dense sky field, only 2° from the main plane of the Milky Way. As other planetary nebulae, it is caused by a dying star (the bluish object at the centre) that shreds its outer layers. The image is based on exposures through three optical filtres: B(lue) (10 min exposure, seeing 0.9 arcsec; here rendered as blue), V(isual) (5 min; 0.9 arcsec; green) and R(ed) (3 min; 0.9 arcsec; red). The field measures 88 x 78 arcsec 2 (1 pixel = 0.09 arcsec). North is to the lower right and East is to the lower left. 
The 5-day old Moon was about 90° away in the sky that was accordingly bright. The zenith angle was 44°. The ESO staff then proceeded to take a series of three photos with longer exposures through three different optical filtres. They have been combined to produce the image shown in ESO PR Photo 22a/00 . More astronomical images were obtained in sequence, first of the dwarf galaxy NGC 6822 in the Local Group (see PR Photo 22f/00 below) and then of the spiral galaxy NGC 7793 . All 8.2-m telescopes now in operation at Paranal The ESO Director General, Catherine Cesarsky , who was present on Paranal during this event, congratulated the ESO staff to the great achievement, herewith bringing a major phase of the VLT project to a successful end. She was particularly impressed by the excellent optical quality that was achieved at this early moment of the commissioning tests. A measurement showed that already now, 80% of the light is concentrated within 0.22 arcsec. The manager of the VLT project, Massimo Tarenghi , was very happy to reach this crucial project milestone, after nearly fifteen years of hard work. He also remarked that with the M2 mirror already now "in the active optics loop", the telescope was correctly compensating for the somewhat mediocre atmospheric conditions on this night. The next major step will be the "first light" for the VLT Interferometer (VLTI) , when the light from two Unit Telescopes is combined. This event is expected in the middle of next year. Impressions from the YEPUN "First Light" event First Light for YEPUN - ESO PR VC 06/00 ESO PR Video Clip 06/00 "First Light for YEPUN" (5650 frames/3:46 min) [MPEG Video+Audio; 160x120 pix; 7.7Mb] [MPEG Video+Audio; 320x240 pix; 25.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 06/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV-camera in the evening of September 3 at about 23:00 hrs local time (03:00 UT), i.e., soon after the moment of "First Light" for YEPUN . The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the clip. It begins at the moment a guide star is acquired to perform an automatic "active optics" correction of the mirrors; the associated explanation is given by Massimo Tarenghi (VLT Project Manager). The first astronomical observation is performed and the first image of the planetary nebula Hen 2-428 is discussed by the ESO Director General, Catherine Cesarsky . The next image, of the nearby dwarf galaxy NGC 6822 , arrives and is shown and commented on by the ESO Director General. Finally, Massimo Tarenghi talks about the next major step of the VLT Project. The combination of the lightbeams from two 8.2-m Unit Telescopes, planned for the summer of 2001, will mark the beginning of the VLT Interferometer. ESO Press Photo 22b/00 ESO Press Photo 22b/00 [Preview; JPEG: 400 x 300; 88k] [Full size; JPEG: 1600 x 1200; 408k] The enclosure for the fourth VLT 8.2-m Unit Telescope, YEPUN , photographed at sunset on September 3, 2000, immediately before "First Light" was successfully achieved. The upper part of the mostly subterranean Interferometric Laboratory for the VLTI is seen in front. (Digital Photo). 
ESO Press Photo 22c/00 ESO Press Photo 22c/00 [Preview; JPEG: 400 x 300; 112k] [Full size; JPEG: 1280 x 960; 184k] The initial tuning of the YEPUN optical system took place in the early evening of September 3, 2000, from the "observing hut" on the floor of the telescope enclosure. From left to right: Krister Wirenstrand who is responsible for the VLT Control Software, Jason Spyromilio - Head of the Commissioning Team, and Massimo Tarenghi , VLT Manager. (Digital Photo). ESO Press Photo 22d/00 ESO Press Photo 22d/00 [Preview; JPEG: 400 x 300; 112k] [Full size; JPEG: 1280 x 960; 184k] "Mission Accomplished" - The ESO Director General, Catherine Cesarsky , and the Paranal Director, Roberto Gilmozzi , face the VLT Manager, Massimo Tarenghi at the YEPUN Control Station, right after successful "First Light" for this telescope. (Digital Photo). An aerial image of YEPUN in its enclosure is available as ESO PR Photo 43a/99. The mechanical structure of YEPUN was first pre-assembled at the Ansaldo factory in Milan (Italy) where it served for tests while the other telescopes were erected at Paranal. An early photo ( ESO PR Photo 37/95 ) is available that was obtained during the visit of the ESO Council to Milan in December 1995, cf. ESO PR 18/95. Paranal at sunset ESO Press Photo 22e/00 ESO Press Photo 22e/00 [Preview; JPEG: 400 x 200; 14kb] [Normal; JPEG: 800 x 400; 84kb] [High-Res; JPEG: 4000 x 2000; 4.0Mb] Wide-angle view of the Paranal Observatory at sunset. The last rays of the sun illuminate the telescope enclosures at the top of the mountain and some of the buildings at the Base Camp. The new "residencia" that will provide living space for the Paranal staff and visitors from next year is being constructed to the left. The "First Light" observations with YEPUN began soon after sunset. This photo was obtained in March 2000. Additional photos (September 6, 2000) ESO PR Photo 22f/00 ESO PR Photo 22f/00 [Preview - JPEG: 400 x 487 pix - 224k] [Normal - JPEG: 992 x 1208 pix - 1.3Mb] Caption : ESO PR Photo 22f/00 shows a colour composite of three exposures of a field in the dwarf galaxy NGC 6822 , a member of the Local Group of Galaxies at a distance of about 2 million light-years. They were obtained by YEPUN and the VLT Test Camera at about 23:00 hrs local time on September 3 (03:00 UT on September 4), 2000. The image is based on exposures through three optical filtres: B(lue) (10 min exposure; here rendered as blue), V(isual) (5 min; green) and R(ed) (5 min; red); the seeing was 0.9 - 1.0 arcsec. Individual stars of many different colours (temperatures) are seen. The field measures about 1.5 x 1.5 arcmin 2. Another image of this galaxy was obtained earlier with ANTU and FORS1 , cf. PR Photo 10b/99. ESO Press Photo 22g/00 ESO Press Photo 22g/00 [Preview; JPEG: 400 x 300; 136k] [Full size; JPEG: 1280 x 960; 224k] Most of the crew that put together YEPUN is here photographed after the installation of the M1 mirror cell at the bottom of the mechanical structure (on July 30, 2000). Back row (left to right): Erich Bugueno (Mechanical Supervisor), Erito Flores (Maintenance Technician); front row (left to right) Peter Gray (Mechanical Engineer), German Ehrenfeld (Mechanical Engineer), Mario Tapia (Mechanical Engineer), Christian Juica (kneeling - Mechanical Technician), Nelson Montano (Maintenance Engineer), Hansel Sepulveda (Mechanical Technican) and Roberto Tamai (Mechanical Engineer). (Digital Photo). ESO PR Photos may be reproduced, if credit is given to the European Southern Observatory. 
The ESO PR Video Clips service provides visitors to the ESO website with "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was ESO PR Video Clip 05/00, "Portugal to Accede to ESO" (27 June 2000). Information about other ESO videos is also available on the web.

  19. Interactive Courseware Standards

    DTIC Science & Technology

    1992-07-01

    music industry standard provides data formats and transmission specifications for musical notation. Joint Photographic Experts Group (JPEG). This...has been used in the music industry for several years, especially for electronically programmable keyboards and 16 instruments. The video compression

  20. Web surveillance system using platform-based design

    NASA Astrophysics Data System (ADS)

    Lin, Shin-Yo; Tsai, Tsung-Han

    2004-04-01

    We develop an SOPC platform-based design environment for multimedia communications. A softcore processor embedded in an FPGA performs the image compression, and an Ethernet daughter board is plugged into the SOPC development platform. On this basis, a web surveillance platform system is presented. The web surveillance system consists of three parts: image capture, a web server, and JPEG compression. In this architecture, the user can control the surveillance system remotely: once an IP address is assigned to the Ethernet daughter board, the user can access the system through a browser. When the user accesses the system, the CMOS sensor captures the remote image and feeds it to the embedded processor, which immediately performs the JPEG compression; the user then receives the compressed data via Ethernet. The complete system is implemented on an APEX20K200E484-2X device.
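
    As a purely conceptual illustration of this capture-compress-serve flow, the sketch below implements the same loop on a desktop machine with OpenCV and Python's standard HTTP server; it is not the paper's FPGA/softcore implementation, and the camera index, JPEG quality and port are arbitrary choices.

```python
# Conceptual sketch of the described surveillance flow: browser request ->
# capture a frame -> JPEG-compress -> return the bytes over the network.
from http.server import BaseHTTPRequestHandler, HTTPServer
import cv2

camera = cv2.VideoCapture(0)  # stand-in for the CMOS sensor

class SnapshotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ok, frame = camera.read()
        if not ok:
            self.send_error(500, "capture failed")
            return
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.send_header("Content-Length", str(len(jpeg)))
        self.end_headers()
        self.wfile.write(jpeg.tobytes())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SnapshotHandler).serve_forever()
```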

  1. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still-image coding techniques such as JPEG have traditionally been applied to individual image planes, and coding fidelity is the usual measure of the performance of such intra-plane coding methods. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes to further compress the spectral planes. Moreover, a mechanism for incorporating the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system, and achieves a high degree of compactness in the data representation and compression.
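
    The idea of decorrelating spectral planes before per-plane coding can be sketched as follows; a plain PCA across planes is used here as a stand-in for the paper's HVS-guided transformation, and the subsequent per-plane JPEG coding step is omitted.

```python
# Sketch of inter-plane decorrelation: stack the spectral planes, apply a PCA
# across planes, and hand each decorrelated plane to a standard coder.
import numpy as np

def decorrelate_planes(cube):
    """cube: (H, W, P) array of P spectral planes."""
    h, w, p = cube.shape
    x = cube.reshape(-1, p).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = xc.T @ xc / xc.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = eigvecs[:, np.argsort(eigvals)[::-1]]
    y = xc @ basis                           # decorrelated planes
    return y.reshape(h, w, p), mean, basis

def reconstruct_planes(y, mean, basis):
    h, w, p = y.shape
    x = y.reshape(-1, p) @ basis.T + mean
    return x.reshape(h, w, p)
```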

  2. Privacy enabling technology for video surveillance

    NASA Astrophysics Data System (ADS)

    Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj

    2006-05-01

    In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. We specifically address the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows the amount of distortion introduced to be adjusted. This is achieved with a small impact on coding performance and a negligible increase in computational complexity. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or a 2G/3G mobile phone network. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video to the usage environment of the client.
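
    The sign-flipping step can be illustrated on an ordinary 8x8 DCT block, as in the sketch below; the paper applies it to Motion JPEG 2000 transform coefficients during encoding, and the keyed pseudo-random sign pattern is simply an illustrative way of making the scrambling reversible for holders of the key.

```python
# Minimal sketch of sign scrambling: pseudo-randomly flip the signs of
# transform coefficients inside a region of interest, keyed by a secret seed.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def scramble_block(block, seed, keep_dc=True):
    coeffs = dct2(block.astype(np.float64))
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=coeffs.shape)
    if keep_dc:
        signs[0, 0] = 1.0            # leave the DC coefficient intact
    return idct2(coeffs * signs)

# Flipping is an involution: applying scramble_block twice with the same seed
# (and hence the same sign pattern) restores the original block.
```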

  3. First Digit Law and Its Application to Digital Forensics

    NASA Astrophysics Data System (ADS)

    Shi, Yun Q.

    Digital data forensics, which gathers evidence of data composition, origin, and history, is crucial in our digital world. Although this new research field is still in its infancy, it has started to attract increasing attention from the multimedia-security research community. This lecture addresses the first digit law and its applications to digital forensics. First, the Benford and generalized Benford laws, referred to as the first digit law, are introduced. Then, applications of the first digit law to detecting the JPEG compression history of a given BMP image and to detecting double JPEG compression are presented. Finally, applying the first digit law to the detection of double MPEG video compression is discussed. The first digit law is expected to play an active role in other tasks of digital forensics as well. The lesson learned is that statistical models play an important role in digital forensics, and for a specific forensic task different models may provide different performance.
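
    A first-digit check of this kind can be sketched as follows: collect the leading digits of block-DCT coefficient magnitudes and compare their empirical distribution with the Benford probabilities. The generalized Benford model with fitted parameters used in this line of work is only noted in a comment, not implemented.

```python
# Sketch of a first-digit check on block-DCT coefficients.
import numpy as np

def first_digits(values):
    v = np.abs(values)
    v = v[v >= 1]                               # ignore zeros / sub-unity magnitudes
    exponents = np.floor(np.log10(v))
    return (v / 10 ** exponents).astype(int)    # leading digit, 1..9

def benford_pmf():
    d = np.arange(1, 10)
    return np.log10(1 + 1 / d)                  # classical Benford probabilities

# The generalized Benford model adds fitted parameters, roughly
# p(d) = N * log10(1 + 1 / (s + d**q)); fitting them is omitted here.

# Example: compare empirical first-digit frequencies of a coefficient array
# `coeffs` against the classical Benford probabilities.
# emp = np.bincount(first_digits(coeffs), minlength=10)[1:]
# emp = emp / emp.sum()
# print(np.abs(emp - benford_pmf()).sum())
```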

  4. Deepest Wide-Field Colour Image in the Southern Sky

    NASA Astrophysics Data System (ADS)

    2003-01-01

    LA SILLA CAMERA OBSERVES CHANDRA DEEP FIELD SOUTH ESO PR Photo 02a/03 ESO PR Photo 02a/03 [Preview - JPEG: 400 x 437 pix - 95k] [Normal - JPEG: 800 x 873 pix - 904k] [HiRes - JPEG: 4000 x 4366 pix - 23.1M] Caption : PR Photo 02a/03 shows a three-colour composite image of the Chandra Deep Field South (CDF-S) , obtained with the Wide Field Imager (WFI) camera on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile). It was produced by the combination of about 450 images with a total exposure time of nearly 50 hours. The field measures 36 x 34 arcmin 2 ; North is up and East is left. Technical information is available below. The combined efforts of three European teams of astronomers, targeting the same sky field in the southern constellation Fornax (The Oven) have enabled them to construct a very deep, true-colour image - opening an exceptionally clear view towards the distant universe . The image ( PR Photo 02a/03 ) covers an area somewhat larger than the full moon. It displays more than 100,000 galaxies, several thousand stars and hundreds of quasars. It is based on images with a total exposure time of nearly 50 hours, collected under good observing conditions with the Wide Field Imager (WFI) on the MPG/ESO 2.2m telescope at the ESO La Silla Observatory (Chile) - many of them extracted from the ESO Science Data Archive . The position of this southern sky field was chosen by Riccardo Giacconi (Nobel Laureate in Physics 2002) at a time when he was Director General of ESO, together with Piero Rosati (ESO). It was selected as a sky region towards which the NASA Chandra X-ray satellite observatory , launched in July 1999, would be pointed while carrying out a very long exposure (lasting a total of 1 million seconds, or 278 hours) in order to detect the faintest possible X-ray sources. The field is now known as the Chandra Deep Field South (CDF-S) . The new WFI photo of CDF-S does not reach quite as deep as the available images of the "Hubble Deep Fields" (HDF-N in the northern and HDF-S in the southern sky, cf. e.g. ESO PR Photo 35a/98 ), but the field-of-view is about 200 times larger. The present image displays about 50 times more galaxies than the HDF images, and therefore provides a more representative view of the universe . The WFI CDF-S image will now form a most useful basis for the very extensive and systematic census of the population of distant galaxies and quasars, allowing at once a detailed study of all evolutionary stages of the universe since it was about 2 billion years old . These investigations have started and are expected to provide information about the evolution of galaxies in unprecedented detail. They will offer insights into the history of star formation and how the internal structure of galaxies changes with time and, not least, throw light on how these two evolutionary aspects are interconnected. GALAXIES IN THE WFI IMAGE ESO PR Photo 02b/03 ESO PR Photo 02b/03 [Preview - JPEG: 488 x 400 pix - 112k] [Normal - JPEG: 896 x 800 pix - 1.0M] [Full-Res - JPEG: 2591 x 2313 pix - 8.6M] Caption : PR Photo 02b/03 contains a collection of twelve subfields from the full WFI Chandra Deep Field South (WFI CDF-S), centred on (pairs or groups of) galaxies. Each of the subfields measures 2.5 x 2.5 arcmin 2 (635 x 658 pix 2 ; 1 pixel = 0.238 arcsec). North is up and East is left. Technical information is available below. 
The WFI CDF-S colour image - of which the full field is shown in PR Photo 02a/03 - was constructed from all available observations in the optical B- ,V- and R-bands obtained under good conditions with the Wide Field Imager (WFI) on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile), and now stored in the ESO Science Data Archive. It is the "deepest" image ever taken with this instrument. It covers a sky field measuring 36 x 34 arcmin 2 , i.e., an area somewhat larger than that of the full moon. The observations were collected during a period of nearly four years, beginning in January 1999 when the WFI instrument was first installed (cf. ESO PR 02/99 ) and ending in October 2002. Altogether, nearly 50 hours of exposure were collected in the three filters combined here, cf. the technical information below. Although it is possible to identify more than 100,000 galaxies in the image - some of which are shown in PR Photo 02b/03 - it is still remarkably "empty" by astronomical standards. Even the brightest stars in the field (of visual magnitude 9) can hardly be seen by human observers with binoculars. In fact, the area density of bright, nearby galaxies is only half of what it is in "normal" sky fields. Comparatively empty fields like this one provide an unsually clear view towards the distant regions in the universe and thus open a window towards the earliest cosmic times . Research projects in the Chandra Deep Field South ESO PR Photo 02c/03 ESO PR Photo 02c/03 [Preview - JPEG: 400 x 513 pix - 112k] [Normal - JPEG: 800 x 1026 pix - 1.2M] [Full-Res - JPEG: 1717 x 2201 pix - 5.5M] ESO PR Photo 02d/03 ESO PR Photo 02d/03 [Preview - JPEG: 400 x 469 pix - 112k] [Normal - JPEG: 800 x 937 pix - 1.0M] [Full-Res - JPEG: 2545 x 2980 pix - 10.7M] Caption : PR Photo 02c-d/03 shows two sky fields within the WFI image of CDF-S, reproduced at full (pixel) size to illustrate the exceptional information richness of these data. The subfields measure 6.8 x 7.8 arcmin 2 (1717 x 1975 pixels) and 10.1 x 10.5 arcmin 2 (2545 x 2635 pixels), respectively. North is up and East is left. Technical information is available below. Astronomers from different teams and disciplines have been quick to join forces in a world-wide co-ordinated effort around the Chandra Deep Field South. Observations of this area are now being performed by some of the most powerful astronomical facilities and instruments. They include space-based X-ray and infrared observations by the ESA XMM-Newton , the NASA CHANDRA , Hubble Space Telescope (HST) and soon SIRTF (scheduled for launch in a few months), as well as imaging and spectroscopical observations in the infrared and optical part of the spectrum by telescopes at the ground-based observatories of ESO (La Silla and Paranal) and NOAO (Kitt Peak and Tololo). A huge database is currently being created that will help to analyse the evolution of galaxies in all currently feasible respects. All participating teams have agreed to make their data on this field publicly available, thus providing the world-wide astronomical community with a unique opportunity to perform competitive research, joining forces within this vast scientific project. Concerted observations The optical true-colour WFI image presented here forms an important part of this broad, concerted approach. It combines observations of three scientific teams that have engaged in complementary scientific projects, thereby capitalizing on this very powerful combination of their individual observations. 
The following teams are involved in this work: * COMBO-17 (Classifying Objects by Medium-Band Observations in 17 filters) : an international collaboration led by Christian Wolf and other scientists at the Max-Planck-Institut für Astronomie (MPIA, Heidelberg, Germany). This team used 51 hours of WFI observing time to obtain images through five broad-band and twelve medium-band optical filters in the visual spectral region in order to measure the distances (by means of "photometric redshifts") and star-formation rates of about 10,000 galaxies, thereby also revealing their evolutionary status. * EIS (ESO Imaging Survey) : a team of visiting astronomers from the ESO community and beyond, led by Luiz da Costa (ESO). They observed the CDF-S for 44 hours in six optical bands with the WFI camera on the MPG/ESO 2.2-m telescope and 28 hours in two near-infrared bands with the SOFI instrument at the ESO 3.5-m New Technology Telescope (NTT) , both at La Silla. These observations form part of the Deep Public Imaging Survey that covers a total sky area of 3 square degrees. * GOODS (The Great Observatories Origins Deep Survey) : another international team (on the ESO side, led by Catherine Cesarsky ) that focusses on the coordination of deep space- and ground-based observations on a smaller, central area of the CDF-S in order to image the galaxies in many differerent spectral wavebands, from X-rays to radio. GOODS has contributed with 40 hours of WFI time for observations in three broad-band filters that were designed for the selection of targets to be spectroscopically observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory (Chile), for which over 200 hours of observations are planned. About 10,000 galaxies will be spectroscopically observed in order to determine their redshift (distance), star formation rate, etc. Another important contribution to this large research undertaking will come from the GEMS project. This is a "HST treasury programme" (with Hans-Walter Rix from MPIA as Principal Investigator) which observes the 10,000 galaxies identified in COMBO-17 - and eventually the entire WFI-field with HST - to show the evolution of their shapes with time. Great questions With the combination of data from many wavelength ranges now at hand, the astronomers are embarking upon studies of the many different processes in the universe. They expect to shed more light on several important cosmological questions, such as: * How and when was the first generation of stars born? * When exactly was the neutral hydrogen in the universe ionized the first time by powerful radiation emitted from the first stars and active galactic nuclei? * How did galaxies and groups of galaxies evolve during the past 13 billion years? * What is the true nature of those elusive objects that are only seen at the infrared and submillimetre wavelengths (cf. ESO PR 23/02 )? * Which fraction of galaxies had an "active" nucleus (probably with a black hole at the centre) in their past, and how long did this phase last? Moreover, since these extensive optical observations were obtained in the course of a dozen observing periods during several years, it is also possible to perform studies of certain variable phenomena: * How many variable sources are seen and what are their types and properties? * How many supernovae are detected per time interval, i.e. what is the supernovae frequency at different cosmic epochs? * How do those processes depend on each other? 
This is just a short and very incomplete list of questions astronomers world-wide will address using all the complementary observations. No doubt that the coming studies of the Chandra Deep Field South - with this and other data - will be most exciting and instructive! Other wide-field images Other wide-field images from the WFI have been published in various ESO press releases during the past four years - they are also available at the WFI Photo Gallery . A collection of full-resolution files (TIFF-format) is available on a WFI CD-ROM . Technical Information The very extensive data reduction and colour image processing needed to produce these images were performed by Mischa Schirmer and Thomas Erben at the "Wide Field Expertise Center" of the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) in Germany. It was done by means of a software pipeline specialised for reduction of multiple CCD wide-field imaging camera data. This pipeline is mainly based on publicly available software modules and algorithms ( EIS , FLIPS , LDAC , Terapix , Wifix ). The image was constructed from about 150 exposures in each of the following wavebands: B-band (centred at wavelength 456 nm; here rendered as blue, 15.8 hours total exposure time), V-band (540 nm; green, 15.6 hours) and R-band (652 nm; red, 17.8 hours). Only images taken under sufficiently good observing conditions (defined as seeing less than 1.1 arcsec) were included. In total, 450 images were assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). More than 2 Terabyte (TB) of temporary files were produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation and a 1.8 GHz dual processor Linux PC. The final colour image was assembled in Adobe Photoshop. The observations were performed by ESO (GOODS, EIS) and the COMBO-17 collaboration in the period 1/1999-10/2002.

  5. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Reports from the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computation demands, whereas the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face match is approximately the time needed to compress the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression; on the same dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
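
    The matching step can be sketched with a generic compressor, as below; zlib stands in for the JPEG coder used in the paper, side-by-side concatenation is an assumed way of forming the mixed image, and the simple ratio score is only an illustrative variant of the paper's composite compression ratio.

```python
# Sketch of compression-based matching: compress the probe, each gallery image,
# and a probe+gallery mixture, then score by how well the mixture compresses
# relative to the parts.
import zlib
import numpy as np

def compressed_size(img):
    return len(zlib.compress(np.ascontiguousarray(img).tobytes(), level=9))

def ccr_score(probe, gallery):
    mixed = np.hstack([probe, gallery])
    c_p, c_g, c_m = map(compressed_size, (probe, gallery, mixed))
    return (c_p + c_g) / c_m       # intended to be larger when the faces share structure

def identify(probe, gallery_images):
    scores = [ccr_score(probe, g) for g in gallery_images]
    return int(np.argmax(scores))  # index of the best-matching gallery face
```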

  6. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as satellite monitoring of the Earth's environment and infrastructure inspection by unmanned airborne vehicles, will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with the JPEG2000, JPEG-XT and H.265/HEVC codecs, which support direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to those of a 16-bit HEVC codec.
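
    The MSB/LSB mapping itself is simple to state; the sketch below shows the split and the lossless recombination, leaving out the codec calls and the rate-allocation analysis discussed in the paper.

```python
# Minimal sketch of the described mapping: split a 16-bit image into an 8-bit
# MSB image and an 8-bit LSB image, each of which can be fed to an ordinary
# 8-bit codec, then recombine after decoding.
import numpy as np

def split_msb_lsb(img16):
    img16 = img16.astype(np.uint16)
    msb = (img16 >> 8).astype(np.uint8)     # most significant bytes
    lsb = (img16 & 0xFF).astype(np.uint8)   # least significant bytes
    return msb, lsb

def merge_msb_lsb(msb, lsb):
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

img = np.random.randint(0, 2 ** 16, size=(64, 64), dtype=np.uint16)
msb, lsb = split_msb_lsb(img)
assert np.array_equal(merge_msb_lsb(msb, lsb), img)   # split/merge is lossless
```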

  7. Efficient transmission of compressed data for remote volume visualization.

    PubMed

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, insofar as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.

  8. Pine Island Glacier, Antarctica, MISR Multi-angle Composite

    Atmospheric Science Data Center

    2013-12-17

    ...     View Larger Image (JPEG) A large iceberg has finally separated from the calving front ... next due to stereo parallax. This parallax is used in MISR processing to retrieve cloud heights over snow and ice. Additionally, a plume ...

  9. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating in scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
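
    One reading of the MSB-truncation SAD idea is sketched below: since most absolute differences are small, each difference is kept to a few low-order bits and larger values saturate, which narrows the accumulator. The chosen bit width is illustrative, and the dissertation's exact scheme may differ.

```python
# Sketch of a truncated SAD: clamp each absolute difference to a small number
# of bits so the adder tree and accumulator can be narrower.
import numpy as np

def sad_truncated(block_a, block_b, kept_bits=4):
    ad = np.abs(block_a.astype(np.int32) - block_b.astype(np.int32))
    cap = (1 << kept_bits) - 1
    return int(np.minimum(ad, cap).sum())   # differences above the cap saturate

def sad_exact(block_a, block_b):
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

# For candidate blocks that genuinely match, differences rarely hit the cap,
# so the truncated SAD tends to rank the best motion-vector candidate the same
# way as the exact SAD.
```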

  10. Compression strategies for LiDAR waveform cube

    NASA Astrophysics Data System (ADS)

    Jóźków, Grzegorz; Toth, Charles; Quirk, Mihaela; Grejner-Brzezinska, Dorota

    2015-01-01

    Full-waveform LiDAR data (FWD) provide a wealth of information about the shape and materials of the surveyed areas. Unlike discrete data that retains only a few strong returns, FWD generally keeps the whole signal, at all times, regardless of the signal intensity. Hence, FWD will have an increasingly well-deserved role in mapping and beyond, in the much desired classification in the raw data format. Full-waveform systems currently perform only the recording of the waveform data at the acquisition stage; the return extraction is mostly deferred to post-processing. Although the full waveform preserves most of the details of the real data, it presents a serious practical challenge for a wide use: much larger datasets compared to those from the classical discrete return systems. Atop the need for more storage space, the acquisition speed of the FWD may also limit the pulse rate on most systems that cannot store data fast enough, and thus, reduces the perceived system performance. This work introduces a waveform cube model to compress waveforms in selected subsets of the cube, aimed at achieving decreased storage while maintaining the maximum pulse rate of FWD systems. In our experiments, the waveform cube is compressed using classical methods for 2D imagery that are further tested to assess the feasibility of the proposed solution. The spatial distribution of airborne waveform data is irregular; however, the manner of the FWD acquisition allows the organization of the waveforms in a regular 3D structure similar to familiar multi-component imagery, as those of hyper-spectral cubes or 3D volumetric tomography scans. This study presents the performance analysis of several lossy compression methods applied to the LiDAR waveform cube, including JPEG-1, JPEG-2000, and PCA-based techniques. Wide ranges of tests performed on real airborne datasets have demonstrated the benefits of the JPEG-2000 Standard where high compression rates incur fairly small data degradation. In addition, the JPEG-2000 Standard-compliant compression implementation can be fast and, thus, used in real-time systems, as compressed data sequences can be formed progressively during the waveform data collection. We conclude from our experiments that 2D image compression strategies are feasible and efficient approaches, thus they might be applied during the acquisition of the FWD sensors.
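
    A sketch of the cube-plus-2D-codec idea follows, with Pillow's JPEG encoder standing in for the JPEG-1 and JPEG 2000 coders evaluated in the paper; the cube dimensions, 8-bit sample depth and quality setting are illustrative.

```python
# Sketch of the waveform cube idea: waveforms recorded scan line by scan line
# form a regular 3D array (scan line x pulse x time sample), and each 2D slice
# can be handed to an ordinary image codec.
import io
import numpy as np
from PIL import Image

def compress_cube_slices(cube, quality=85):
    """cube: (lines, pulses, samples) uint8 waveform cube -> list of JPEG byte strings."""
    encoded = []
    for i in range(cube.shape[0]):
        img = Image.fromarray(cube[i])       # one 2D slice of the cube
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        encoded.append(buf.getvalue())
    return encoded

cube = np.random.randint(0, 256, size=(16, 128, 256), dtype=np.uint8)
jpegs = compress_cube_slices(cube)
print(sum(len(b) for b in jpegs), "bytes after per-slice JPEG coding")
```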

  11. First-Ever Census of Variable Mira-Type Stars in Galaxy Outside the Local Group

    NASA Astrophysics Data System (ADS)

    2003-05-01

    First-Ever Census of Variable Mira-Type Stars in Galaxy Outsidethe Local Group Summary An international team led by ESO astronomer Marina Rejkuba [1] has discovered more than 1000 luminous red variable stars in the nearby elliptical galaxy Centaurus A (NGC 5128) . Brightness changes and periods of these stars were measured accurately and reveal that they are mostly cool long-period variable stars of the so-called "Mira-type" . The observed variability is caused by stellar pulsation. This is the first time a detailed census of variable stars has been accomplished for a galaxy outside the Local Group of Galaxies (of which the Milky Way galaxy in which we live is a member). It also opens an entirely new window towards the detailed study of stellar content and evolution of giant elliptical galaxies . These massive objects are presumed to play a major role in the gravitational assembly of galaxy clusters in the Universe (especially during the early phases). This unprecedented research project is based on near-infrared observations obtained over more than three years with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal Observatory . PR Photo 14a/03 : Colour image of the peculiar galaxy Centaurus A . PR Photo 14b/03 : Location of the fields in Centaurus A, now studied. PR Photo 14c/03 : "Field 1" in Centaurus A (visual light; FORS1). PR Photo 14d/03 : "Field 2" in Centaurus A (visual light; FORS1). PR Photo 14e/03 : "Field 1" in Centaurus A (near-infrared; ISAAC). PR Photo 14f/03 : "Field 2" in Centaurus A (near-infrared; ISAAC). PR Photo 14g/03 : Light variation of six variable stars in Centaurus A PR Photo 14h/03 : Light variation of stars in Centaurus A (Animated GIF) PR Photo 14i/03 : Light curves of four variable stars in Centaurus A. Mira-type variable stars Among the stars that are visible in the sky to the unaided eye, roughly one out of three hundred (0.3%) displays brightness variations and is referred to by astronomers as a "variable star". The percentage is much higher among large, cool stars ("red giants") - in fact, almost all luminous stars of that type are variable. Such stars are known as Mira-variables ; the name comes from the most prominent member of this class, Omicron Ceti in the constellation Cetus (The Whale), also known as "Stella Mira" (The Wonderful Star). Its brightness changes with a period of 332 days and it is about 1500 times brighter at maximum (visible magnitude 2 and one of the fifty brightest stars in the sky) than at minimum (magnitude 10 and only visible in small telescopes) [2]. Stars like Omicron Ceti are nearing the end of their life. They are very large and have sizes from a few hundred to about a thousand times that of the Sun. The brightness variation is due to pulsations during which the star's temperature and size change dramatically. In the following evolutionary phase, Mira-variables will shed their outer layers into surrounding space and become visible as planetary nebulae with a hot and compact star (a "white dwarf") at the middle of a nebula of gas and dust (cf. the "Dumbbell Nebula" - ESO PR Photo 38a-b/98 ). Several thousand Mira-type stars are currently known in the Milky Way galaxy and a few hundred have been found in other nearby galaxies, including the Magellanic Clouds. 
The peculiar galaxy Centaurus A ESO PR Photo 14a/03 ESO PR Photo 14a/03 [Preview - JPEG: 400 x 451 pix - 53k [Normal - JPEG: 800 x 903 pix - 528k] [Hi-Res - JPEG: 3612 x 4075 pix - 8.4M] ESO PR Photo 14b/03 ESO PR Photo 14b/03 [Preview - JPEG: 570 x 400 pix - 52k [Normal - JPEG: 1140 x 800 pix - 392k] ESO PR Photo 14c/03 ESO PR Photo 14c/03 [Preview - JPEG: 400 x 451 pix - 61k [Normal - JPEG: 800 x 903 pix - 768k] ESO PR Photo 14d/03 ESO PR Photo 14d/03 [Preview - JPEG: 400 x 451 pix - 56k [Normal - JPEG: 800 x 903 pix - 760k] Captions : PR Photo 14a/03 is a colour composite photo of the peculiar galaxy Centaurus A (NGC 5128) , obtained with the Wide-Field Imager (WFI) camera at the ESO/MPG 2.2-m telescope on La Silla. It is based on a total of nine 3-min exposures made on March 25, 1999, through different broad-band optical filters (B(lue) - total exposure time 9 min - central wavelength 456 nm - here rendered as blue; V(isual) - 540 nm - 9 min - green; I(nfrared) - 784 nm - 9 min - red); it was prepared from files in the ESO Science Data Archive by ESO-astronomer Benoît Vandame . The elliptical shape and the central dust band, the imprint of a galaxy collision, are well visible. PR Photo 14b/03 identifies the two regions of Centaurus A (the rectangles in the upper left and lower right inserts) in which a search for variable stars was made during the present research project: "Field 1" is located in an area north-east of the center in which many young stars are present. This is also the direction in which an outflow ("jet") is seen on deep optical and radio images. "Field 2" is positioned in the galaxy's halo, south of the centre. High-resolution, very deep colour photos of these two fields and their immediate surroundings are shown in PR Photos 14c-d/03 . They were produced by means of CCD-frames obtained in July 1999 through U- and V-band optical filters with the VLT FORS1 multi-mode instrument at the 8.2-m VLT ANTU telescope on Paranal. Note the great variety of object types and colours, including many background galaxies which are seen through these less dense regions of Centaurus A . The total exposure time was 30 min in each filter and the seeing was excellent, 0.5 arcsec. The original pixel size is 0.196 arcsec and the fields measure 6.7 x 6.7 arcmin 2 (2048 x 2048 pix 2 ). North is up and East is left on all photos. Centaurus A (NGC 5128) is the nearest giant galaxy, at a distance of about 13 million light-years. It is located outside the Local Group of Galaxies to which our own galaxy, the Milky Way, and its satellite galaxies, the Magellanic Clouds, belong. Centaurus A is seen in the direction of the southern constellation Centaurus. It is of elliptical shape and is currently merging with a companion galaxy, making it one of the most spectacular objects in the sky, cf. PR Photo 14a/03 . It possesses a very heavy black hole at its centre (see ESO PR 04/01 ) and is a source of strong radio and X-ray emission. During the present research programme, two regions in Centaurus A were searched for stars of variable brightness; they are located in the periphery of this peculiar galaxy, cf. PR Photos 14b-d/03 . An outer field ("Field 1") coincides with a stellar shell with many blue and luminous stars produced by the on-going galaxy merger; it lies at a distance of 57,000 light-years from the centre. The inner field ("Field 2") is more crowded and is situated at a projected distance of about 30,000 light-years from the centre.. 
Three years of VLT observations ESO PR Photo 14e/03 ESO PR Photo 14e/03 [Preview - JPEG: 400 x 447 pix - 120k [Normal - JPEG: 800 x 894 pix - 992k] ESO PR Photo 14f/03 ESO PR Photo 14f/03 [Preview - JPEG: 400 x 450 pix - 96k [Normal - JPEG: 800 x 899 pix - 912k] Caption : PR Photos 14e-f/03 are colour composites of two small fields ("Field 1" and "Field 2") in the peculiar galaxy Centaurus A (NGC 5128) , based on exposures through three near-infrared filters (the J-, H- and K-bands at wavelengths 1.2, 1.6 and 2.2 µm, respectively) with the ISAAC multi-mode instrument at the 8.2-m VLT ANTU telescope at the ESO Paranal observatory. The corresponding areas are outlined within the two inserts in PR Photo 14b/03 and may be compared with the visual images from FORS1 ( PR Photos 14c-d/03 ). These ISAAC photos are the deepest near-infrared images ever obtained in this galaxy and show thousands of its stars of different colours. In the present colour-coding, the redder an image, the cooler is the star. The original pixel size is 0.15 arcsec and both fields measure 2.5 x 2.5 arcmin 2. North is up and East is left. Under normal circumstances, any team of professional astronomers will have access to the largest telescopes in the world for only a very limited number of consecutive nights each year. However, extensive searches for variable stars like the present require repeated observations lasting minutes-to-hours over periods of months-to-years. It is thus not feasible to perform such observations in the classical way in which the astronomers travel to the telescope each time. Fortunately, the operational system of the VLT at the ESO Paranal Observatory (Chile) is also geared to encompass this kind of long-term programme. Between April 1999 and July 2002, the 8.2-m VLT ANTU telescope on Cerro Paranal in Chile) was operated in service mode on many occasions to obtain K-band images of the two fields in Centaurus A by means of the near-infrared ISAAC multi-mode instrument. Each field was observed over 20 times in the course of this three-year period ; some of the images were obtained during exceptional seeing conditions of 0.30 arcsec. One set of complementary optical images was obtained with the FORS1 multi-mode instrument (also on VLT ANTU) in July 1999. Each image from the ISAAC instrument covers a sky field measuring 2.5 x 2.5 arcmin 2. The combined images, encompassing a total exposure of 20 hours are indeed the deepest infrared images ever made of the halo of any galaxy as distant as Centaurus A , about 13 million light-years. Discovering one thousand Mira variables ESO PR Photo 14g/03 ESO PR Photo 14g/03 [Preview - JPEG: 400 x 480 pix - 61k [Normal - JPEG: 800 x 961 pix - 808k] ESO PR Photo 14h/03 ESO PR Photo 14h/03 [Animated GIF: 263 x 267 pix - 56k ESO PR Photo 14i/03 ESO PR Photo 14i/03 [Preview - JPEG: 480 x 400 pix - 33k [Normal - JPEG: 959 x 800 pix - 152k] Captions : PR Photo 14g/03 shows a zoomed-in area within "Field 2" in Centaurus A , from the ISAAC colour image shown in PR Photo 14e/03 . Nearly all red stars in this area are of the variable Mira-type. The brightness variation of some stars (labelled A-D) is demonstrated in the animated-GIF image PR Photo 14h/03 . The corresponding light curves (brightness over the pulsation period) are shown in PR Photo 14i/03 . Here the abscissa indicates the pulsation phase (one full period corresponds to the interval from 0 to 1) and the ordinate unit is near-infrared K s -magnitude. 
One magnitude corresponds to a difference in brightness of a factor 2.5. Once the lengthy observations were completed, two further steps were needed to identify the variable stars in Centaurus A . First, each ISAAC frame was individually processed to identify the thousands and thousands of faint point-like images (stars) visible in these fields. Next, all images were compared using a special software package ("DAOPHOT") to measure the brightness of all these stars in the different frames, i.e., as a function of time. While most stars in these fields as expected were found to have constant brightness, more than 1000 stars displayed variations in brightness with time; this is by far the largest number of variable stars ever discovered in a galaxy outside the Local Group of Galaxies. The detailed analysis of this enormous dataset took more than a year. Most of the variable stars were found to be of the Mira-type and their light curves (brightness over the pulsation period) were measured, cf. PR Photo 14i/03 . For each of them, values of the characterising parameters, the period (days) and brightness amplitude (magnitudes) were determined. A catalogue of the newly discovered variable stars in Centaurus A has now been made available to the astronomical community via the European research journal Astronomy & Astrophysics. Marina Rejkuba is pleased and thankful: "We are really very fortunate to have carried out this ambitious project so successfully. It all depended critically on different factors: the repeated granting of crucial observing time by the ESO Observing Programmes Committee over different observing periods in the face of rigorous international competition, the stability and reliability of the telescope and the ISAAC instrument over a period of more than three years and, not least, the excellent quality of the service mode observations, so efficiently performed by the staff at the Paranal Observatory." What have we learned about Centaurus A? The present study of variable stars in this giant elliptical galaxy is the first-ever of its kind. Although the evaluation of the very large observational data material is still not finished, it has already led to a number of very useful scientific results. Confirmation of the presence of an intermediate-age population Based on earlier research (optical and near-IR colour-magnitude diagrams of the stars in the fields), the present team of astronomers had previously detected the presence of intermediate-age and young stellar populations in the halo of this galaxy. The youngest stars appear to be aligned with the powerful jet produced by the massive black hole at the centre. Some of the very luminous red variable stars now discovered confirm the presence of a population of intermediate-age stars in the halo of this galaxy. It also contributes to our understanding of how giant elliptical galaxies form. New measurement of the distance to Centaurus A The pulsation of Mira-type variable stars obeys a period-luminosity relation. The longer its period, the more luminous is a Mira-type star. This fact makes it possible to use Mira-type stars as "standard candles" (objects of known intrinsic luminosity) for distance determinations. They have in fact often been used in this way to measure accurate distances to more nearby objects, e.g., to individual clusters of stars and to the center in our Milky Way galaxy, and also to galaxies in the Local Group, in particular the Magellanic Clouds. 
    This method works particularly well with infrared measurements and the astronomers were now able to measure the distance to Centaurus A in this new way. They found 13.7 ± 1.9 million light-years, in general agreement with and thus confirming other methods. Study of stellar population gradients in the halo of a giant elliptical galaxy The two fields here studied contain different populations of stars. A clear dependence on the location (a "gradient") within the galaxy is observed, which can be due to differences in chemical composition or age, or to a combination of both. Understanding the cause of this gradient will provide additional clues to how Centaurus A - and indeed all giant elliptical galaxies - was formed and has since evolved. Comparison with other well-known nearby galaxies Past searches have discovered Mira-type variable stars throughout the Milky Way, our home galaxy, and in other nearby galaxies in the Local Group. However, there are no giant elliptical galaxies like Centaurus A in the Local Group and this is the first time it has been possible to identify this kind of star in that type of galaxy. The present investigation now opens a new window towards studies of the stellar constituents of such galaxies.

  12. A study on multiresolution lossless video coding using inter/intra frame adaptive prediction

    NASA Astrophysics Data System (ADS)

    Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro

    2003-06-01

    Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
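
    The adaptive choice can be sketched per subband as below, using PyWavelets; the left-neighbour intra predictor and the absolute-sum selection cost are simple stand-ins for the JPEG-LS-style prediction and the image statistics used in the paper.

```python
# Sketch of inter/intra adaptive prediction in the wavelet domain: for each
# subband, compare a temporal (inter) residual against the co-located subband
# of the previous frame with a simple spatial (intra) residual, and keep
# whichever is cheaper.
import numpy as np
import pywt

def subbands(frame):
    ll, (lh, hl, hh) = pywt.dwt2(frame.astype(np.float64), "haar")
    return {"LL": ll, "LH": lh, "HL": hl, "HH": hh}

def residuals(curr, prev):
    curr_b, prev_b = subbands(curr), subbands(prev)
    out = {}
    for name, band in curr_b.items():
        inter = band - prev_b[name]          # temporal prediction
        intra = band.copy()
        intra[:, 1:] -= band[:, :-1]         # left-neighbour spatial prediction
        use_inter = np.abs(inter).sum() < np.abs(intra).sum()
        out[name] = ("inter" if use_inter else "intra",
                     inter if use_inter else intra)
    return out
```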

  13. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of Joint Photographic Experts Group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
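
    The flavour of a graph-based smoothness prior can be sketched on a single patch as below: build a 4-neighbour pixel graph with Gaussian intensity weights, form the random walk Laplacian L_rw = I - D^{-1}W, and score smoothness with the quadratic form x^T L_rw x. The weights and neighbourhood are illustrative, and the paper's LERaG construction from left eigenvectors is not reproduced here.

```python
# Sketch of a graph-smoothness score for an image patch: small values indicate
# a patch that is smooth with respect to the pixel-similarity graph.
import numpy as np

def random_walk_laplacian(patch, sigma=10.0):
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):          # 4-neighbour grid edges
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    wgt = np.exp(-((patch[r, c] - patch[rr, cc]) ** 2) / (2 * sigma ** 2))
                    W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = wgt
    D = W.sum(axis=1)
    return np.eye(n) - W / D[:, None]                # L_rw = I - D^{-1} W

patch = np.random.rand(8, 8) * 255
L = random_walk_laplacian(patch)
x = patch.reshape(-1)
print("smoothness score:", float(x @ L @ x))
```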

  14. Effects of Digitization and JPEG Compression on Land Cover Classification Using Astronaut-Acquired Orbital Photographs

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Webb, Edward L.; Evangelista, Arlene

    2000-01-01

    Studies that utilize astronaut-acquired orbital photographs for visual or digital classification require high-quality data to ensure accuracy. The majority of images available must be digitized from film and electronically transferred to scientific users. This study examined the effect of scanning spatial resolution (1200, 2400 pixels per inch [21.2 and 10.6 microns/pixel]), scanning density range option (Auto, Full) and compression ratio (non-lossy [TIFF], and lossy JPEG 10:1, 46:1, 83:1) on digital classification results of an orbital photograph from the NASA - Johnson Space Center archive. Qualitative results suggested that 1200 ppi was acceptable for visual interpretive uses for major land cover types. Moreover, Auto scanning density range was superior to Full density range. Quantitative assessment of the processing steps indicated that, while 2400 ppi scanning spatial resolution resulted in more classified polygons as well as a substantially greater proportion of polygons < 0.2 ha, overall agreement between 1200 ppi and 2400 ppi was quite high. JPEG compression up to approximately 46:1 also did not appear to have a major impact on quantitative classification characteristics. We conclude that both 1200 and 2400 ppi scanning resolutions are acceptable options for this level of land cover classification, as well as a compression ratio at or below approximately 46:1. Auto range density should always be used during scanning because it acquires more of the information from the film. The particular combination of scanning spatial resolution and compression level will require a case-by-case decision and will depend upon memory capabilities, analytical objectives and the spatial properties of the objects in the image.
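
    Pillow (assumed here) exposes a JPEG quality setting rather than a target ratio, so a practical way to approximate ratios such as 10:1 or 46:1 is to sweep the quality setting and measure the ratio actually achieved; the synthetic image below is only a stand-in for a scanned photograph.

    ```python
    # Sweep JPEG quality and report the compression ratio actually achieved.
    import io
    import numpy as np
    from PIL import Image  # pip install Pillow (assumed)

    def achieved_ratios(img, qualities=(95, 75, 50, 25, 10)):
        raw_bytes = img.width * img.height * len(img.getbands())  # uncompressed size
        out = {}
        for q in qualities:
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=q)
            out[q] = raw_bytes / buf.tell()
        return out

    # Synthetic stand-in for a scanned frame: smooth colour gradients plus mild noise.
    y, x = np.mgrid[0:512, 0:512]
    rng = np.random.default_rng(0)
    arr = np.stack([x // 2, y // 2, (x + y) // 4], axis=-1) + rng.normal(0, 2, (512, 512, 3))
    img = Image.fromarray(arr.clip(0, 255).astype(np.uint8), mode="RGB")

    for q, r in achieved_ratios(img).items():
        print(f"quality={q:3d} -> approx {r:5.1f}:1")
    ```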

  15. A multicenter observer performance study of 3D JPEG2000 compression of thin-slice CT.

    PubMed

    Erickson, Bradley J; Krupinski, Elizabeth; Andriole, Katherine P

    2010-10-01

    The goal of this study was to determine the compression level at which 3D JPEG2000 compression of thin-slice CTs of the chest and abdomen-pelvis becomes visually perceptible. A secondary goal was to determine if residents in training and non-physicians are substantially different from experienced radiologists in their perception of compression-related changes. This study used multidetector computed tomography 3D datasets with 0.625-1-mm thickness slices of standard chest, abdomen, or pelvis, clipped to 12 bits. The Kakadu v5.2 JPEG2000 compression algorithm was used to compress and decompress the 80 examinations, creating four sets of images: lossless, 1.5 bpp (8:1), 1 bpp (12:1), and 0.75 bpp (16:1). Two randomly selected slices from each examination were shown to observers using a flicker mode paradigm in which observers rapidly toggled between two images, the original and a compressed version, with the task of deciding whether differences between them could be detected. Six staff radiologists, four residents, and six PhDs experienced in medical imaging (from three institutions) served as observers. Overall, 77.46% of observers detected differences at 8:1, 94.75% at 12:1, and 98.59% at 16:1 compression levels. Across all compression levels, the staff radiologists noted differences 64.70% of the time, the residents detected differences 71.91% of the time, and the PhDs detected differences 69.95% of the time. Even mild compression is perceptible with current technology. The ability to detect differences does not equate to diagnostic differences, although perception of compression artifacts could affect diagnostic decision making and diagnostic workflow.
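
    The quoted ratios follow directly from the 12-bit pixel depth (ratio = stored bits per pixel divided by compressed bits per pixel), as this short check illustrates:

    ```python
    # Bits-per-pixel to compression-ratio arithmetic for 12-bit CT data.
    bit_depth = 12  # CT data clipped to 12 bits, as in the study
    for bpp in (1.5, 1.0, 0.75):
        print(f"{bpp} bpp -> {bit_depth / bpp:.0f}:1")   # 8:1, 12:1, 16:1
    ```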

  16. Roundness variation in JPEG images affects the automated process of nuclear immunohistochemical quantification: correction with a linear regression model.

    PubMed

    López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón

    2009-10-01

    The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
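
    A hedged sketch of the correction idea, with synthetic numbers rather than the paper's data: fit a linear model mapping roundness measured on compressed images back to the TIFF reference, then apply it to new measurements. One such model would be fitted per compression level.

    ```python
    # Linear-regression correction of a compression-induced bias in roundness.
    import numpy as np

    rng = np.random.default_rng(0)
    roundness_tiff = rng.uniform(0.6, 1.0, 65)                                # 65 reference objects
    roundness_jpeg = 0.92 * roundness_tiff + 0.03 + rng.normal(0, 0.01, 65)   # simulated bias

    slope, intercept = np.polyfit(roundness_jpeg, roundness_tiff, 1)          # correction model
    corrected = slope * roundness_jpeg + intercept

    print("mean abs error before correction:", np.abs(roundness_jpeg - roundness_tiff).mean())
    print("mean abs error after correction :", np.abs(corrected - roundness_tiff).mean())
    ```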

  17. VIMOS - a Cosmology Machine for the VLT

    NASA Astrophysics Data System (ADS)

    2002-03-01

    Successful Test Observations With Powerful New Instrument at Paranal [1] Summary One of the most fundamental tasks of modern astrophysics is the study of the evolution of the Universe . This is a daunting undertaking that requires extensive observations of large samples of objects in order to produce reasonably detailed maps of the distribution of galaxies in the Universe and to perform statistical analysis. Much effort is now being put into mapping the relatively nearby space and thereby to learn how the Universe looks today . But to study its evolution, we must compare this with how it looked when it still was young . This is possible, because astronomers can "look back in time" by studying remote objects - the larger their distance, the longer the light we now observe has been underway to us, and the longer is thus the corresponding "look-back time". This may sound easy, but it is not. Very distant objects are very dim and can only be observed with large telescopes. Looking at one object at a time would make such a study extremely time-consuming and, in practical terms, impossible. To do it anyhow, we need the largest possible telescope with a highly specialised, exceedingly sensitive instrument that is able to observe a very large number of (faint) objects in the remote universe simultaneously . The VLT VIsible Multi-Object Spectrograph (VIMOS) is such an instrument. It can obtain many hundreds of spectra of individual galaxies in the shortest possible time; in fact, in one special observing mode, up to 6400 spectra of the galaxies in a remote cluster during a single exposure, augmenting the data gathering power of the telescope by the same proportion. This marvellous science machine has just been installed at the 8.2-m MELIPAL telescope, the third unit of the Very Large Telescope (VLT) at the ESO Paranal Observatory. A main task will be to carry out 3-dimensional mapping of the distant Universe from which we can learn its large-scale structure . "First light" was achieved on February 26, 2002, and a first series of test observations has successfully demonstrated the huge potential of this amazing facility. Much work on VIMOS is still ahead during the coming months in order to put into full operation and fine-tune the most efficient "galaxy cruncher" in the world. VIMOS is the outcome of a fruitful collaboration between ESO and several research institutes in France and Italy, under the responsibility of the Laboratoire d'Astrophysique de Marseille (CNRS, France). The other partners in the "VIRMOS Consortium" are the Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées, and Observatoire de Haute-Provence in France, and Istituto di Radioastronomia (Bologna), Istituto di Fisica Cosmica e Tecnologie Relative (Milano), Osservatorio Astronomico di Bologna, Osservatorio Astronomico di Brera (Milano) and Osservatorio Astronomico di Capodimonte (Naples) in Italy. PR Photo 09a/02 : VIMOS image of the Antennae Galaxies (centre). PR Photo 09b/02 : First VIMOS Multi-Object Spectrum (full field) PR Photo 09c/02 : The VIMOS instrument on VLT MELIPAL PR Photo 09d/02 : The VIMOS team at "First Light". 
PR Photo 09e/02 : "First Light" image of NGC 5364 PR Photo 09f/02 : Image of the Crab Nebula PR Photo 09g/02 : Image of spiral galaxy NGC 2613 PR Photo 09h/02 : Image of spiral galaxy Messier 100 PR Photo 09i/02 : Image of cluster of galaxies ACO 3341 PR Photo 09j/02 : Image of cluster of galaxies MS 1008.1-1224 PR Photo 09k/02 : Mask design for MOS exposure PR Photo 09l/02 : First VIMOS Multi-Object Spectrum (detail) PR Photo 09m/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" PR Photo 09n/02 : Integrated Field Spectroscopy of central area of the "Antennae Galaxies" (detail) Science with VIMOS ESO PR Photo 09a/02 ESO PR Photo 09a/02 [Preview - JPEG: 400 x 469 pix - 152k] [Normal - JPEG: 800 x 938 pix - 408k] ESO PR Photo 09b/02 ESO PR Photo 09b/02 [Preview - JPEG: 400 x 511 pix - 304k] [Normal - JPEG: 800 x 1022 pix - 728k] Caption : PR Photo 09a/02 : One of the first images from the new VIMOS facility, obtained right after the moment of "first light" on February 26, 2002. It shows the famous "Antennae Galaxies" (NGC 4038/39), the result of a recent collision between two galaxies. As an immediate outcome of this dramatic event, stars are born within massive complexes that appear blue in this composite photo, based on exposures through green, orange and red optical filters. PR Photo 09b/02 : Some of the first spectra of distant galaxies obtained with VIMOS in Multi-Object-Spectroscopy (MOS) mode. More than 220 galaxies were observed simultaneously, an unprecedented efficiency for such a "deep" exposure, reaching so far out in space. These spectra make it possible to obtain the redshift, a measure of distance, as well as to assess the physical status of the gas and stars in each of these galaxies. A part of this photo is enlarged as PR Photo 09l/02. Technical information about these photos is available below. Other "First Light" images from VIMOS are shown in the photo gallery below. The next in the long series of front-line instruments to be installed on the ESO Very Large Telescope (VLT), VIMOS (and its complementary, infrared-sensitive counterpart NIRMOS, now in the design stage) will allow mapping of the distribution of galaxies, clusters, and quasars during a time interval spanning more than 90% of the age of the universe. It will let us look back in time to a moment only ~1.5 billion years after the Big Bang (corresponding to a redshift of about 5). Like archaeologists, astronomers can then dig deep into those early ages when the first building blocks of galaxies were still in the process of formation. They will be able to determine when most of the star formation occurred in the universe and how it evolved with time. They will analyse how the galaxies cluster in space, and how this distribution varies with time. Such observations will put important constraints on evolution models, in particular on the average density of matter in the Universe. Mapping the distant universe requires determining the distances of the enormous numbers of remote galaxies seen in deep pictures of the sky, adding depth - the third, indispensable dimension - to the photo. VIMOS offers this capability, and very efficiently. Multi-object spectroscopy is a technique by which many objects are observed simultaneously. VIMOS can observe the spectra of about 1000 galaxies in one exposure, from which redshifts, hence distances, can be measured [2]. The ability to observe two galaxies at once would be equivalent to having a telescope twice the size of a VLT Unit Telescope. 
VIMOS thus effectively "increases" the size of the VLT hundreds of times. From these spectra, the stellar and gaseous content and internal velocities of galaxies can be infered, forming the base for detailed physical studies. At present the distances of only a few thousand galaxies and quasars have been measured in the distant universe. VIMOS aims at observing 100 times more, over one hundred thousand of those remote objects. This will form a solid base for unprecedented and detailed statistical studies of the population of galaxies and quasars in the very early universe. The international VIRMOS Consortium VIMOS is one of two major astronomical instruments to be delivered by the VIRMOS Consortium of French and Italian institutes under a contract signed in the summer of 1997 between the European Southern Observatory (ESO) and the French Centre National de la Recherche Scientifique (CNRS). The participating institutes are: in France: * Laboratoire d'Astrophysique de Marseille (LAM), Observatoire Marseille-Provence (project responsible) * Laboratoire d'Astrophysique de Toulouse, Observatoire Midi-Pyrénées * Observatoire de Haute-Provence (OHP) in Italy: * Istituto di Radioastronomia (IRA-CNR) (Bologna) * Istituto di Fisica Cosmica e Tecnologie Relative (IFCTR) (Milano) * Osservatorio Astronomico di Capodimonte (OAC) (Naples) * Osservatorio Astronomico di Bologna (OABo) * Osservatorio Astronomico di Brera (OABr) (Milano) VIMOS at the VLT: a unique and powerful combination ESO PR Photo 09c/02 ESO PR Photo 09c/02 [Preview - JPEG: 501 x 400 pix - 312k] [Normal - JPEG: 1002 x 800 pix - 840k] Caption : PR Photo 09c/02 shows the new VIMOS instrument on one of the Nasmyth platforms of the 8.2-m VLT MELIPAL telescope at Paranal. VIMOS is installed on the Nasmyth "Focus B" platform of the 8.2-m VLT MELIPAL telescope, cf. PR Photo 09c/02 . It may be compared to four multi-mode instruments of the FORS-type (cf. ESO PR 14/98 ), joined in one stiff structure. The construction of VIMOS has involved the production of large and complex optical elements and their integration in more than 30 remotely controlled, finely moving functions in the instrument. In the configuration employed for the "first light", VIMOS made use of two of its four channels. The two others will be put into operation in the next commissioning period during the coming months. However, VIMOS is already now the most efficient multi-object spectrograph in the world , with an equivalent (accumulated) slit length of up to 70 arcmin on the sky. VIMOS has a field-of-view as large as half of the full moon (14 x 16 arcmin 2 for the four quadrants), the largest sky field to be imaged so far by the VLT. It has excellent sensitivity in the blue region of the spectrum (about 60% more efficient than any other similar instruments in the ultraviolet band), and it is also very sensitive in all other visible spectral regions, all the way to the red limit. But the absolutely unique feature of VIMOS is its capability to take large numbers of spectra simultaneously , leading to exceedingly efficient use of the observing time. Up to about 1000 objects can be observed in a single exposure in multi-slit mode. And no less than 6400 spectra can be recorded with the Integral Field Unit , in which a closely packed fibre optics bundle can simultaneously observe a continuous sky area measuring no less than 56 x 56 arcsec 2. A dedicated machine, the Mask Manufacturing Unit (MMU) , cuts the slits for the entrance apertures of the spectrograph. 
The laser is capable of cutting 200 slits in less than 15 minutes. This facility was put into operation at Paranal by the VIRMOS Consortium already in August 2000 and has since been extensively used for observations with the FORS2 instrument; more details are available in ESO PR 19/99. Fast start-up of VIMOS at Paranal ESO PR Photo 09d/02 ESO PR Photo 09d/02 [Preview - JPEG: 473 x 400 pix - 280k] [Normal - JPEG: 946 x 1209 pix - 728k] ESO PR Photo 09e/02 ESO PR Photo 09e/02 [Preview - JPEG: 400 x 438 pix - 176k] [Normal - JPEG: 800 x 876 pix - 664k] Caption : PR Photo 09d/02 : The VIRMOS team in the MELIPAL control room, moments after "First Light" on February 26, 2002. From left to right: Oreste Caputi, Marco Scodeggio, Giovanni Sciarretta , Olivier Le Fevre, Sylvie Brau-Nogue, Christian Lucuix, Bianca Garilli, Markus Kissler-Patig (in front), Xavier Reyes, Michel Saisse, Luc Arnold and Guido Mancini . PR Photo 09e/02 : The spiral galaxy NGC 5364 was the first object to be observed by VIMOS. This false-colour near-infrared, raw "First Light" photo shows the extensive spiral arms. Technical information about this photo is available below. VIMOS was shipped from Observatoire de Haute-Provence (France) at the end of 2001, and reassembled at Paranal during a first period in January 2002. From mid-February, the instrument was made ready for installation on the VLT MELIPAL telescope; this happened on February 24, 2002. VIMOS saw "First Light" just two days later, on February 26, 2000, cf. PR Photo 09e/02 . During the same night, a number of excellent images were obtained of various objects, demonstrating the fine capabilities of the instrument in the "direct imaging"-mode. The first spectra were successfully taken during the night of March 2 - 3, 2002 . The slit masks that were used on this occasion were prepared with dedicated software that also optimizes the object selection, cf. PR Photo 09k/02 , and were then cut with the laser machine. From the first try on, the masks have been well aligned on the sky objects. The first observations with large numbers of spectra were obtained shortly thereafter. First accomplishments Images of nearby galaxies, clusters of galaxies, and distant galaxy fields were among the first to be obtained, using the VIMOS imaging mode and demonstrating the excellent efficiency of the instrument, various examples are shown below. The first observations of multi-spectra were performed in a selected sky field in which many faint galaxies are present; it is known as the "VIRMOS-VLT Deep Survey Field at 1000+02". Thanks to the excellent sensitivity of VIMOS, the spectra of galaxies as faint as (red) magnitude R = 23 (i.e. over 6 million times fainter than what can be perceived with the unaided eye) are visible on exposures lasting only 15 minutes. Some of the first observations with the Integral Field Unit were made of the core of the famous Antennae Galaxies (NGC 4038/39) . They will form the basis for a detailed map of the strong emission produced by the current, dramatic collision of the two galaxies. First Images and Spectra from VIMOS - a Gallery The following photos are from a collection of the first images and spectra obtained with VIMOS . See also PR Photos 09a/02 , 09b/02 and 09e/02 , reproduced above. Technical information about all of them is available below. 
ESO PR Photo 09f/02 ESO PR Photo 09f/02 [Preview - JPEG: 400 x 469 pix - 224k] [Normal - JPEG: 800 x 937 pix - 544k] [HiRes - JPEG: 2001 x 2343 pix - 3.6M] Caption : PR Photo 09f/02 : The Crab Nebula (Messier 1) , as observed by VIMOS. This well-known object is the remnant of a stellar explosion in the year 1054. ESO PR Photo 09g/02 ESO PR Photo 09g/02 [Preview - JPEG: 478 x 400 pix - 184k] [Normal - JPEG: 956 x 1209 pix - 416k] [HiRes - JPEG: 1801 x 1507 pix - 1.4M] Caption : PR Photo 09g/02 : VIMOS photo of NGC 2613 , a spiral galaxy that ressembles our own Milky Way. ESO PR Photo 09h/02 ESO PR Photo 09h/02 [Preview - JPEG: 400 x 469 pix - 152k] [Normal - JPEG: 800 x 938 pix - 440k] [HiRes - JPEG: 1800 x 2100 pix - 2.0M] Caption : PR Photo 09h/02 : Messier 100 is one of the largest and brightest spiral galaxies in the sky. ESO PR Photo 09i/02 ESO PR Photo 09i/02 [Preview - JPEG: 400 x 405 pix - 144k] [Normal - JPEG: 800 x 810 pix - 312k] Caption : PR Photo 09i/02 : The cluster of galaxies ACO 3341 is located at a distance of about 300 million light-years (redshift z = 0.037), i.e., comparatively nearby in cosmological terms. It contains a large number of galaxies of different size and brightness that are bound together by gravity. ESO PR Photo 09j/02 ESO PR Photo 09j/02 [Preview - JPEG: 447 x 400 pix - 200k] [Normal - JPEG: 893 x 800 pix - 472k] [HiRes - JPEG: 1562 x 1399 pix - 1.1M] Caption : PR Photo 09j/02 : The distant cluster of galaxies MS 1008.1-1224 is some 3 billion light-years distant (redshift z = 0.301). The galaxies in this cluster - that we observe as they were 3 billion years ago - are different from galaxies in our neighborhood; their stellar populations, on the average, are younger. ESO PR Photo 09k/02 ESO PR Photo 09k/02 [Preview - JPEG: 400 x 455 pix - 280k] [Normal - JPEG: 800 x 909 pix - 696k] Caption : PR Photo 09k/02 : Design of a Mask for Multi-Object Spectroscopy (MOS) observations with VIMOS. The mask serves to block, as far as possible, unwanted background light from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere). During the set-up process for multi-object observations, the VIMOS software optimizes the position of the individual slits in the mask (one for each object for which a spectrum will be obtained) before these are cut. The photo shows an example of this fitting process, with the slit contours superposed on a short pre-exposure of the sky field to be observed. ESO PR Photo 09l/02 ESO PR Photo 09l/02 [Preview - JPEG: 470 x 400 pix - 200k] [Normal - JPEG: 939 x 800 pix - 464k] Caption : PR Photo 09l/02 : First Multi-Object Spectroscopy (MOS) observations with VIMOS; enlargement of a small part of the field shown in PR Photo 09b/02. The light from each galaxy passes through the dedicated slit in the mask (see PR Photo 09k/02 ) and produces a spectrum on the detector. Each vertical rectangle contains the spectrum of one galaxy that is located several billion light-years away. The horizontal lines are the strong emission from the "night sky" (radiation from atoms and molecules in the Earth's upper atmosphere), while the vertical traces are the spectral signatures of the galaxies. The full field contains the spectra of over 220 galaxies that were observed simultaneously, illustrating the great efficiency of this technique. Later, about 1000 spectra will be obtained in one exposure. 
ESO PR Photo 09m/02 ESO PR Photo 09m/02 [Preview - JPEG: 470 x 400 pix - 264k] [Normal - JPEG: 939 x 800 pix - 720k] Caption : PR Photo 09m/02 : was obtained with the Integral Field Spectroscopy mode of VIMOS. In one single exposure, more than 3000 spectra were taken of the central area of the Antennae Galaxies ( PR Photo 09a/02 ). ESO PR Photo 09n/02 ESO PR Photo 09n/02 [Preview - JPEG: 532 x 400 pix - 320k] [Normal - JPEG: 1063 x 800 pix - 864k] Caption : PR Photo 09n/02 : An enlargement of a small area in PR Photo 09m/02. This observation allows mapping of the distribution of elements like hydrogen (H) and sulphur (S II), for which the signatures are clearly identified in these spectra. The wavelength increases towards the top (arrow). Notes [1]: This is a joint Press Release of ESO , Centre National de la Recherche Scientifique (CNRS) in France, and Consiglio Nazionale delle Ricerche (CNR) and Istituto Nazionale di Astrofisica (INAF) in Italy. [2]: In astronomy, the redshift denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy gives a direct estimate of the apparent recession velocity as caused by the universal expansion. Since the expansion rate increases with distance, the velocity is itself a function (the Hubble relation) of the distance to the object. Technical information about the photos PR Photo 09a/01 : Composite VRI image of NGC 4038/39, obtained on 26 February 2002, in a bright sky (full moon). Individual exposures of 60 sec each; image quality 0.6 arcsec FWHM; the field measures 3.5 x 3.5 arcmin 2. North is up and East is left. PR Photo 09b/02 : MOS-spectra obtained with two quadrants totalling 221 slits + 6 reference objects (stars placed in square holes to ensure a correct alignment). Exposure time 15 min; LR(red) grism. This is the raw (unprocessed) image of the spectra. PR Photo 09e/02 : A 60 sec i exposure of NGC 5364 on February 26, 2002; image quality 0.6 arcsec FWHM; full moon; 3.5 x 3.5 arcmin 2 ; North is up and East is left. PR Photo 09f/02 : Composite VRI image of Messier 1, obtained on March 4, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09g/02 : Composite VRI image of NGC 2613, obtained on February 28, 2002. The individual exposures lasted 180 sec; image quality 0.7 arcsec FWHM; field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09h/02 : Composite VRI image of Messier 100, obtained on March 3, 2002. The individual exposures lasted 180 sec, image quality 0.7 arcsec FWHM; field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09i/02 : R-band image of galaxy cluster ACO 3341, obtained on March 4, 2002. Exposure 300 sec, image quality 0.5 arcsec FWHM;. field 7 x 7 arcmin 2 ; North is up and East is left. PR Photo 09j/02 : Composite VRI image of the distant cluster of galaxies MS 1008.1-1224. The individual exposures lasted 300 sec; image quality 0.8 arcsec FWHM; field 5 x 3 arcmin 2 ; North is to the right and East is up. PR Photo 09k/02 : Mask design made with the VMMPS tool, overlaying a pre-image. The selected objects are seen at the centre of the yellow squares, where a 1 arcsec slit is cut along the spatial X-axis. The rectangles in white represent the dispersion in wavelength of the spectra along the Y-axis. Masks are cut with the Mask Manufacturing Unit (MMU) built by the Virmos Consortium. 
PR Photo 09l/02 : Enlargement of a small area of PR Photo 09b/02. PR Photo 09m/02 : Spectra of the central area of NGC 4038/39, obtained with the Integral Field Unit on February 26, 2002. The exposure lasted 5 min and was made with the low resolution red grating. PR Photo 09n/02 : Zoom-in on a small area of PR Photo 09m/02. The strong emission lines of hydrogen (H-alpha) and ionized sulphur (S II) are seen.

  18. Next VLT Instrument Ready for the Astronomers

    NASA Astrophysics Data System (ADS)

    2000-02-01

    FORS2 Commissioning Period Successfully Terminated The commissioning of the FORS2 multi-mode astronomical instrument at KUEYEN , the second FOcal Reducer/low dispersion Spectrograph at the ESO Very Large Telescope, was successfully finished today. This important work - that may be likened with the test driving of a new car model - took place during two periods, from October 22 to November 21, 1999, and January 22 to February 8, 2000. The overall goal was to thoroughly test the functioning of the new instrument, its conformity to specifications and to optimize its operation at the telescope. FORS2 is now ready to be handed over to the astronomers on April 1, 2000. Observing time for a six-month period until October 1 has already been allocated to a large number of research programmes. Two of the images that were obtained with FORS2 during the commissioning period are shown here. An early report about this instrument is available as ESO PR 17/99. The many modes of FORS2 The FORS Commissioning Team carried out a comprehensive test programme for all observing modes. These tests were done with "observation blocks (OBs)" that describe the set-up of the instrument and telescope for each exposure in all details, e.g., position in the sky of the object to be observed, filters, exposure time, etc.. Whenever an OB is "activated" from the control console, the corresponding observation is automatically performed. Additional information about the VLT Data Flow System is available in ESO PR 10/99. The FORS2 observing modes include direct imaging, long-slit and multi-object spectroscopy, exactly as in its twin, FORS1 at ANTU . In addition, FORS2 contains the "Mask Exchange Unit" , a motorized magazine that holds 10 masks made of thin metal plates into which the slits are cut by means of a laser. The advantage of this particular observing method is that more spectra (of more objects) can be taken with a single exposure (up to approximately 80) and that the shape of the slits can be adapted to the shape of the objects, thus increasing the scientific return. Results obtained so far look very promising. To increase further the scientific power of the FORS2 instrument in the spectroscopic mode, a number of new optical dispersion elements ("grisms", i.e., a combination of a grating and a glass prism) have been added. They give the scientists a greater choice of spectral resolution and wavelength range. Another mode that is new to FORS2 is the high time resolution mode. It was demonstrated with the Crab pulsar, cf. ESO PR 17/99 and promises very interesting scientific returns. Images from the FORS2 Commissioning Phase The two composite images shown below were obtained during the FORS2 commissioning work. They are based on three exposures through different optical broadband filtres (B: 429 nm central wavelength; 88 nm FWHM (Full Width at Half Maximum), V: 554/111 nm, R: 655/165 nm). All were taken with the 2048 x 2048 pixel 2 CCD detector with a field of view of 6.8 x 6.8 arcmin 2 ; each pixel measures 24 µm square. They were flatfield corrected and bias subtracted, scaled in intensity and some cosmetic cleaning was performed, e.g. removal of bad columns on the CCD. North is up and East is left. Tarantula Nebula in the Large Magellanic Cloud ESO Press Photo 05a/00 ESO Press Photo 05a/00 [Preview; JPEG: 400 x 452; 52k] [Normal; JPEG: 800 x 903; 142k] [Full-Res; JPEG: 2048 x 2311; 2.0Mb] The Tarantula Nebula in the Large Magellanic Cloud , as obtained with FORS2 at KUEYEN during the recent Commissioning period. 
It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (30 sec exposure, image quality 0.75 arcsec; here rendered in blue colour), V (15 sec, 0.70 arcsec; green) and R (10 sec, 0.60 arcsec; red). The full-resolution version of this photo retains the orginal pixels. 30 Doradus , also known as the Tarantula Nebula , or NGC 2070 , is located in the Large Magellanic Cloud (LMC) , some 170,000 light-years away. It is one of the largest known star-forming regions in the Local Group of Galaxies. It was first catalogued as a star, but then recognized to be a nebula by the French astronomer A. Lacaille in 1751-52. The Tarantula Nebula is the only extra-galactic nebula which can be seen with the unaided eye. It contains in the centre the open stellar cluster R 136 with many of the largest, hottest, and most massive stars known. Radio Galaxy Centaurus A ESO Press Photo 05b/00 ESO Press Photo 05b/00 [Preview; JPEG: 400 x 448; 40k] [Normal; JPEG: 800 x 896; 110k] [Full-Res; JPEG: 2048 x 2293; 2.0Mb] The radio galaxy Centarus A , as obtained with FORS2 at KUEYEN during the recent Commissioning period. It was taken during the night of January 31 - February 1, 2000. It is a composite of three exposures in B (300 sec exposure, image quality 0.60 arcsec; here rendered in blue colour), V (240 sec, 0.60 arcsec; green) and R (240 sec, 0.55 arcsec; red). The full-resolution version of this photo retains the orginal pixels. ESO Press Photo 05c/00 ESO Press Photo 05c/00 [Preview; JPEG: 400 x 446; 52k] [Normal; JPEG: 801 x 894; 112k] An area, north-west of the centre of Centaurus A with a detailed view of the dust lane and clusters of luminous blue stars. The normal version of this photo retains the orginal pixels. The new FORS2 image of Centaurus A , also known as NGC 5128 , is an example of how frontier science can be combined with esthetic aspects. This galaxy is a most interesting object for the present attempts to understand active galaxies . It is being investigated by means of observations in all spectral regions, from radio via infrared and optical wavelengths to X- and gamma-rays. It is one of the most extensively studied objects in the southern sky. FORS2 , with its large field-of-view and excellent optical resolution, makes it possible to study the global context of the active region in Centaurus A in great detail. Note for instance the great number of massive and luminous blue stars that are well resolved individually, in the upper right and lower left in PR Photo 05b/00 . Centaurus A is one of the foremost examples of a radio-loud active galactic nucleus (AGN) . On images obtained at optical wavelengths, thick dust layers almost completely obscure the galaxy's centre. This structure was first reported by Sir John Herschel in 1847. Until 1949, NGC 5128 was thought to be a strange object in the Milky Way, but it was then identified as a powerful radio galaxy and designated Centaurus A . The distance is about 10-13 million light-years (3-4 Mpc) and the apparent visual magnitude is about 8, or 5 times too faint to be seen with the unaided eye. There is strong evidence that Centaurus A is a merger of an elliptical with a spiral galaxy, since elliptical galaxies would not have had enough dust and gas to form the young, blue stars seen along the edges of the dust lane. The core of Centaurus A is the smallest known extragalactic radio source, only 10 light-days across. A jet of high energy particles from this centre is observed in radio and X-ray images. 
The core probably contains a supermassive black hole with a mass of about 100 million solar masses. This is the caption to ESO PR Photos 05a-c/00. They may be reproduced, if credit is given to the European Southern Observatory.

  19. Energy efficiency of task allocation for embedded JPEG systems.

    PubMed

    Fan, Yang-Hsin; Wu, Jan-Ou; Wang, San-Fu

    2014-01-01

    Embedded systems are deployed everywhere to perform a few particular functions repeatedly. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, embedded-system development methodology has been applied to the design of cloud embedded systems, making the applications of embedded systems still more diverse. However, the more an embedded system works, the more energy it consumes. This study applies hyperrectangle technology (HT) to embedded systems to obtain energy savings. HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components, and it makes it possible to explore the energy consumption of various embedded systems rapidly. The effects are demonstrated by assessing a JPEG benchmark. Experimental results demonstrate that HT achieves average energy savings of 29.84%, 2.07%, and 68.80% relative to GA, GHO, and Lin, respectively.

  20. Image acquisition system using on sensor compressed sampling technique

    NASA Astrophysics Data System (ADS)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
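
    A minimal compressed-sensing sketch (assuming scikit-learn; it is not the proposed pixel circuit): a random ±1 measurement matrix stands in for on-sensor aggregation, and a sparse scene is recovered from far fewer measurements than pixels with an L1-regularised solver.

    ```python
    # Toy compressed-sensing acquisition and recovery.
    import numpy as np
    from sklearn.linear_model import Lasso  # pip install scikit-learn (assumed)

    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 8                                        # pixels, measurements, non-zeros
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)   # sparse "scene"
    phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)     # random +/-1 sensing matrix
    y = phi @ x                                                 # simulated on-sensor measurements

    solver = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
    solver.fit(phi, y)
    x_hat = solver.coef_
    print("measurements / pixels:", m / n)
    print("relative recovery err:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
    ```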

  1. Compressing images for the Internet

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.

    1998-01-01

    The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.
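
    The trade-off can be made concrete with a small experiment (Pillow assumed; the synthetic "page" below is not from the paper): encode the same image as a palettized GIF and as JPEG at several quality settings, then compare encoded sizes.

    ```python
    # Compare encoded sizes of palettized GIF vs JPEG for a page-like image.
    import io
    import numpy as np
    from PIL import Image  # pip install Pillow (assumed)

    def encoded_size(img, fmt, **kwargs):
        buf = io.BytesIO()
        img.save(buf, format=fmt, **kwargs)
        return buf.tell()

    # Synthetic page fragment: smooth background gradient with hard-edged "text" bars.
    arr = np.full((256, 256, 3), 255, np.uint8)
    arr[:, :, 0] = np.linspace(120, 255, 256, dtype=np.uint8)   # colour gradient
    arr[40:60, 20:230] = 0                                      # black bars mimic text
    arr[120:140, 20:230] = 0
    img = Image.fromarray(arr)

    print("GIF (256-colour palette):", encoded_size(img.quantize(colors=256), "GIF"), "bytes")
    for q in (90, 75, 50):
        print(f"JPEG (quality {q})       :", encoded_size(img, "JPEG", quality=q), "bytes")
    ```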

  2. Modeling of video compression effects on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Preece, Bradley; Espinola, Richard L.

    2009-05-01

    The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
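
    A hedged sketch of the fitting idea (JPEG stands in for the video codecs used in the study, and the scene is synthetic): find the Gaussian blur whose result is most similar to the compressed frame under SSIM, and treat the remaining difference as residual compression noise.

    ```python
    # Estimate an equivalent Gaussian blur and residual noise for a compressed frame.
    import io
    import numpy as np
    from PIL import Image                       # pip install Pillow (assumed)
    from scipy.ndimage import gaussian_filter   # pip install scipy (assumed)
    from skimage.metrics import structural_similarity as ssim  # pip install scikit-image (assumed)

    rng = np.random.default_rng(0)
    scene = gaussian_filter(rng.uniform(0, 255, (128, 128)), 3)   # synthetic smooth scene
    frame = scene.astype(np.uint8)

    buf = io.BytesIO()
    Image.fromarray(frame).save(buf, format="JPEG", quality=20)   # heavy compression
    buf.seek(0)
    compressed = np.asarray(Image.open(buf), dtype=float)

    # Equivalent blur: the sigma whose blurred original best matches the compressed frame.
    sigmas = np.linspace(0.1, 3.0, 30)
    scores = [ssim(gaussian_filter(frame.astype(float), s), compressed, data_range=255)
              for s in sigmas]
    best = sigmas[int(np.argmax(scores))]

    # Residual compression artefacts, treated here as additive noise.
    residual = compressed - gaussian_filter(frame.astype(float), best)
    print(f"equivalent Gaussian blur sigma ~ {best:.1f}")
    print(f"residual noise std            ~ {residual.std():.2f}")
    ```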

  3. A Java viewer to publish Digital Imaging and Communications in Medicine (DICOM) radiologic images on the World Wide Web.

    PubMed

    Setti, E; Musumeci, R

    2001-06-01

    The World Wide Web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Experts Group (JPEG) and Graphics Interchange Format (GIF) formats. Currently, neither browser can display radiologic images in native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with some web browsers, even older versions. The software is free and available from the author.

  4. Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology

    PubMed Central

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.

    2015-01-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
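
    A rough sketch of the general approach, not the published method: correlate the frame with a point-spread-function template, threshold the local Pearson correlation, grow the resulting mask by morphological dilation, and keep only the masked pixels. NumPy and SciPy are assumed; the bead image is synthetic.

    ```python
    # Correlation-plus-morphology masking of the scientifically relevant pixels.
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view
    from scipy.ndimage import binary_dilation  # pip install scipy (assumed)

    def psf(size=7, sigma=1.5):
        y, x = np.mgrid[:size, :size] - size // 2
        k = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
        return k / k.sum()

    def local_pearson(frame, template):
        """Pearson correlation of the template against every same-size window."""
        wins = sliding_window_view(frame, template.shape)
        wins = wins.reshape(*wins.shape[:2], -1)
        t = (template.ravel() - template.mean()) / template.std()
        w = (wins - wins.mean(axis=-1, keepdims=True)) / (wins.std(axis=-1, keepdims=True) + 1e-9)
        corr = (w * t).mean(axis=-1)
        out = np.zeros(frame.shape)
        off = template.shape[0] // 2
        out[off:off + corr.shape[0], off:off + corr.shape[1]] = corr
        return out

    rng = np.random.default_rng(0)
    frame = rng.normal(10, 2, (128, 128))          # noisy background
    frame[60:67, 60:67] += 200 * psf()             # one bright bead
    mask = local_pearson(frame, psf()) > 0.5       # correlation threshold
    mask = binary_dilation(mask, iterations=3)     # morphological grow
    print("kept fraction of pixels:", mask.mean())
    ```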

  5. Energy Efficiency of Task Allocation for Embedded JPEG Systems

    PubMed Central

    2014-01-01

    Embedded systems are deployed everywhere to perform a few particular functions repeatedly. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, embedded-system development methodology has been applied to the design of cloud embedded systems, making the applications of embedded systems still more diverse. However, the more an embedded system works, the more energy it consumes. This study applies hyperrectangle technology (HT) to embedded systems to obtain energy savings. HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components, and it makes it possible to explore the energy consumption of various embedded systems rapidly. The effects are demonstrated by assessing a JPEG benchmark. Experimental results demonstrate that HT achieves average energy savings of 29.84%, 2.07%, and 68.80% relative to GA, GHO, and Lin, respectively. PMID:24982983

  6. Successful "First Light" for VLT High-Resolution Spectrograph

    NASA Astrophysics Data System (ADS)

    1999-10-01

    Great Research Prospects with UVES at KUEYEN A major new astronomical instrument for the ESO Very Large Telescope at Paranal (Chile), the UVES high-resolution spectrograph, has just made its first observations of astronomical objects. The astronomers are delighted with the quality of the spectra obtained at this moment of "First Light". Although much fine-tuning still has to be done, this early success promises well for new and exciting science projects with this large European research facility. Astronomical instruments at VLT KUEYEN The second VLT 8.2-m Unit Telescope, KUEYEN ("The Moon" in the Mapuche language), is in the process of being tuned to perfection before it will be "handed" over to the astronomers on April 1, 2000. The testing of the new giant telescope has been successfully completed. The latest pointing tests were very positive and, from real performance measurements covering the entire operating range of the telescope, the overall accuracy on the sky was found to be 0.85 arcsec (the RMS-value). This is an excellent result for any telescope and implies that KUEYEN (as is already the case for ANTU) will be able to acquire its future target objects securely and efficiently, thus saving precious observing time. This work has paved the way for the installation of large astronomical instruments at its three focal positions, all prototype facilities that are capable of catching the light from even very faint and distant celestial objects. The three instruments at KUEYEN are referred to by their acronyms UVES , FORS2 and FLAMES. They are all dedicated to the investigation of the spectroscopic properties of faint stars and galaxies in the Universe. The UVES instrument The first to be installed is the Ultraviolet Visual Echelle Spectrograph (UVES) that was built by ESO, with the collaboration of the Trieste Observatory (Italy) for the control software. Complete tests of its optical and mechanical components, as well as of its CCD detectors and of the complex control system, cf. ESO PR Photos 44/98 , were made in the laboratories of the ESO Headquarters in Garching (Germany) before it was fully dismounted and shipped (some parts by air, others by ship) to the ESO Paranal Observatory, 130 km south of Antofagasta (Chile). Here, the different pieces of UVES (with a total weight of 8 tons) were carefully reassembled on the Nasmyth platform of KUEYEN and made ready for real observations (see ESO PR Photos 36p-t/99 ). UVES is a complex two-channel spectrograph that has been built around two giant optical (echelle diffraction) gratings, each ruled on a 84 cm x 21 cm x 12 cm block of the ceramic material Zerodur (the same that is used for the VLT 8.2-m main mirrors) and weighing more than 60 kg. These echelle gratings finely disperse the light from celestial objects collected by the telescope into its constituent wavelengths (colours). UVES' resolving power (an optical term that indicates the ratio between a given wavelength and the smallest wavelength difference between two spectral lines that are clearly separated by the spectrograph) may reach 110,000, a very high value for an astronomical instrument of such a large size. This means for instance that even comparatively small changes in radial velocity (a few km/sec only) can be accurately measured and also that it is possible to detect the faint spectral signatures of very rare elements in celestial objects. One UVES channel is optimized for the ultraviolet and blue, the other for visual and red light. 
The spectra are digitally recorded by two highly efficient CCD detectors for subsequent analysis and astrophysical interpretation. By optimizing the transmission of the various optical components in its two channels, UVES has a very high efficiency all the way from the UV (wavelength about 300 nm) to the near-infrared (1000 nm or 1 µm). This guarantees that only a minimum of the precious light that is collected by KUEYEN is lost and that detailed spectra can be obtained of even quite faint objects, down to about magnitude 20 (corresponding to nearly one million times fainter than what can be perceived with the unaided eye). The possibility of doing simultaneous observations in the two channels (with a dichroic mirror) ensures a further gain in data gathering efficiency. First Observations with UVES In the evening of September 27, 1999, the ESO astronomers turned the KUEYEN telescope and - for the first time - focussed the light of stars and galaxies on the entrance aperture of the UVES instrument. This is the crucial moment of "First Light" for a new astronomical facility. The following test period will last about three weeks. Much of the time during the first observing nights was spent by functional tests of the various observation modes and by targeting "standard stars" with well-known properties in order to measure the performance of the new instrument. They showed that it is behaving very well. This marks the beginning of a period of progressive fine-tuning that will ultimately bring UVES to peak performance. The astronomers also did a few "scientific" observations during these nights, aimed at exploring the capabilities of their new spectrograph. They were eager to do so, also because UVES is the first spectrograph of this type installed at a telescope of large diameter in the southern hemisphere . Many exciting research possibilities are now opening with UVES . They include a study of the chemical history of many galaxies in the Local Group, e.g. by observing the most metal-poor (oldest) stars in the Milky Way Galaxy and by obtaining the first, extremely detailed spectra of their brightest stars in the Magellanic Clouds. Quasars and distant compact galaxies will also be among the most favoured targets of the first UVES observers, not least because their spectra carry crucial information about the density, physical state and chemical composition of the early Universe. UVES First Light: SN 1987A One of the first spectral test exposures with UVES at KUEYEN was of SN 1987A , the famous supernova that exploded in the Large Magellanic Cloud (LMC) in February 1987, and the brightest supernova of the last 400 years. ESO PR Photo 37a/99 ESO PR Photo 37a/99 [Preview - JPEG: 400 x 455 pix - 87k] [Normal - JPEG: 645 x 733 pix - 166k] Caption to ESO PR Photo 37a/99 : This is a direct image of SN1987A, flanked by two nearby stars. The distance between these two is 4.5 arcsec. The slit (2.0 arcsec wide) through which the echelle spectrum shown in PR Photo 37b/99 was obtained, is outlined. This reproduction is from a 2-min exposure through a R(ed) filter with the FORS1 multi-mode instrument at VLT ANTU, obtained in 0.55 arcsec seeing on September 20, 1998. North is up and East is left. ESO PR Photo 37b/99 ESO PR Photo 37b/99 [Preview - JPEG: 400 x 459 pix - 130k] [Normal - JPEG: 800 x 917 pix - 470k] [High-Res - JPEG: 3000 x 3439 pix - 6.5M] Caption to ESO PR Photo 37b/99 : This shows the raw image, as read from the CCD, with the recorded echelle spectrum of SN1987A. 
With this technique, the supernova spectrum is divided into many individual parts ( spectral orders , each of which appears as a narrow horizontal line) that together cover the wavelength interval from 479 to 682 nm (from the bottom to the top), i.e. from blue to red light. Many bright emission lines from different elements are visible, e.g. the strong H-alpha line from hydrogen near the centre of the fourth order from the top. Emission lines from the terrestrial atmosphere are seen as vertical bright lines that cover the full width of the individual horizontal bands. Since this exposure was done with the nearly Full Moon above the horizon, an underlying, faint absorption-line spectrum of reflected sunlight is also visible. The exposure time was 30 min and the seeing conditions were excellent (0.5 arcsec). ESO PR Photo 37c/99 ESO PR Photo 37c/99 [Preview - JPEG: 400 x 355 pix - 156k] [Normal - JPEG: 800 x 709 pix - 498k] [High-Res - JPEG: 1074 x 952 pix - 766k] Caption to ESO PR Photo 37c/99 : This false-colour image has been extracted from another UVES echelle spectrum of SN 1987A, similar to the one shown in PR Photo 37b/99 , but with a slit width of 1 arcsec only. The upper part shows the emission lines of nitrogen, sulfur and hydrogen, as recorded in some of the spectral orders. The pixel coordinates (X,Y) in the original frame are indicated; the red colour indicates the highest intensities. Below is a more detailed view of the complex H-alpha emission line, with the corresponding velocities and the position along the spectrograph slit indicated. Several components of this line can be distinguished. The bulk of the emission (here shown in red colour) comes from the ring surrounding the supernova; the elongated shape here is due to the differential velocity exhibited by the near (to us) and far sides of the ring. The two bright spots on either side are emission from two outer rings (not immediately visible in PR Photo 37a/99 ). The extended emission in the velocity direction originates from material inside the ring upon which the fastest moving ejecta from the supernova have impacted (As seen in VLT data obtained previously with the ANTU/ISAAC combination (cf. PR Photo 11/99 ), exciting times now lie ahead for SN 1987A. The ejecta moving at 30,000 km/s (1/10th the speed of light) have now, 12 years after the explosion, reached the ring of material and the predicted "fireworks" are about to be ignited.) Finally, there is a broad emission extending all along the spectrograph slit (here mostly yellow) upon which the ring emission is superimposed. This is not associated with the supernova itself, but is H-alpha emission by diffuse gas in the Large Magellanic Cloud (LMC) in which SN 1987A is located. UVES First Light: QSO HE2217-2818 The power of UVES is demonstrated by this two-hour test exposure of the southern quasar QSO HE2217-2818 with U-magnitude = 16.5 and a redshift of z = 2.4. It was discovered a few years ago during the Hamburg-ESO Quasar Survey , by means of photographic plates taken with the 1-m ESO Schmidt Telescope at La Silla, the other ESO astronomical site in Chile. 
ESO PR Photo 37d/99 ESO PR Photo 37d/99 [Preview - JPEG: 400 x 309 pix - 92k] [Normal - JPEG: 800x 618 pix - 311k] [High-Res - JPEG: 3000 x 2316 pix - 5.0M] ESO PR Photo 37e/99 ESO PR Photo 37e/99 [Preview - JPEG: 400 x 310 pix - 43k] [Normal - JPEG: 800 x 619 pix - 100k] [High-Res - JPEG: 3003 x 2324 pix - 436k] Caption to ESO PR Photo 37d/99 : This UVES echelle spectrum QSO HE2217-2818 (U-magnitude = 16.5) is recorded in different orders (the individual horizontal lines) and altogether covers the wavelength interval between 330 - 450 nm (from the bottom to the top). It illustrates the excellent capability of UVES to work in the UV-band on even faint targets. Simultaneously with this observation, UVES also recorded the adjacent spectral region 465 - 660 nm in its other channel. The broad Lyman-alpha emission from ionized hydrogen associated with the powerful energy source of the QSO is seen in the upper half of the spectrum at wavelength 413 nm. At shorter wavelengths, the dark regions in the spectrum are Lyman-alpha absorption lines from intervening, neutral hydrogen gas located along the line-of-sight at different redshifts (the so-called Lyman-alpha forest ) in the redshift interval z = 1.7 - 2.4. Note that since this exposure was done with the nearly Full Moon above the horizon, an underlying, faint absorption-line spectrum of reflected sunlight is also visible. Caption to ESO PR Photo 37e/99 : A tracing of one spectral order, corresponding to one horizontal line in the echelle spectrum displayed in PR Photo 37d/99 . It shows part of the Lyman-alpha forest in the ultraviolet spectrum of the southern quasar QSO HE2217-2818 . The absorption lines are caused by intervening, neutral hydrogen gas located at different distances along the line-of-sight towards this quasar. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.

  7. Enabling Near Real-Time Remote Search for Fast Transient Events with Lossy Data Compression

    NASA Astrophysics Data System (ADS)

    Vohl, Dany; Pritchard, Tyler; Andreoni, Igor; Cooke, Jeffrey; Meade, Bernard

    2017-09-01

    We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport data format to enable rapid remote searches for fast transient events as part of the Deeper Wider Faster (DWF) programme. The DWF programme uses 20 telescopes from radio to gamma rays to perform simultaneous and rapid-response follow-up searches for fast transient events on millisecond-to-hours timescales. Its search demands impose a set of constraints that is becoming common amongst large collaborations. Here, we focus on the rapid optical data component of DWF, led by the Dark Energy Camera at Cerro Tololo Inter-American Observatory. Each Dark Energy Camera image comprises 70 charge-coupled devices and is saved as a 1.2-gigabyte FITS file. Near real-time data processing and fast transient candidate identification, within minutes to allow rapid follow-up triggers on other telescopes, require computational power exceeding what is currently available on-site at Cerro Tololo Inter-American Observatory. In this context, data files need to be transmitted rapidly to a remote location for supercomputing post-processing, source finding, visualisation and analysis. This step in the search process poses a major bottleneck, and reducing the data size helps accommodate faster data transmission. To maximise our gain in transfer time and still achieve our science goals, we opt for lossy data compression, keeping in mind that the raw data are archived and can be evaluated at a later time. We evaluate how lossy JPEG2000 compression affects the process of finding transients, and find only a negligible effect for compression ratios up to 25:1. We also find a linear relation between compression ratio and the mean estimated data transmission speed-up factor. Adding highly customised compression and decompression steps to the science pipeline considerably reduces the transmission time, validating its introduction to the DWF science pipeline and enabling science that was otherwise too difficult with current technology.
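
    One way to apply a fixed-ratio JPEG2000 step in Python is through the glymur bindings to OpenJPEG (an assumption here, not necessarily the DWF tooling); the array below is synthetic rather than a DECam FITS extension.

    ```python
    # Fixed-ratio lossy JPEG2000 compression of an image array with glymur.
    import numpy as np
    import glymur  # pip install glymur (requires the OpenJPEG library; assumed)

    rng = np.random.default_rng(0)
    sky = rng.normal(1000, 30, (2048, 2048))     # synthetic background plus noise
    sky[1024, 1024] += 5000                      # a bright "source"
    data = np.clip(sky, 0, 65535).astype(np.uint16)

    glymur.Jp2k("frame_c25.jp2", data=data, cratios=[25])   # ~25:1 lossy target
    restored = glymur.Jp2k("frame_c25.jp2")[:]

    err = restored.astype(float) - data.astype(float)
    print("max abs error:", np.abs(err).max())
    print("rms error    :", err.std())
    ```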

  8. Baseline coastal oblique aerial photographs collected from Breton Island, Louisiana, to the Alabama-Florida border, July 13, 2013

    USGS Publications Warehouse

    Morgan, Karen L.M.; Westphal, Karen A.

    2014-01-01

    The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On July 13, 2013, the USGS conducted an oblique aerial photographic survey from Breton Island, Louisiana, to the Alabama-Florida border, aboard a Cessna 172 flying at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The images provided here are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, name, date, and time of each of the 1,242 photographs taken, along with links to each photograph. The photography is organized into segments, also referred to as contact sheets, each representing approximately 5 minutes of flight time (also see the Photos and Maps page). In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.
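    The header-tagging step described above can be sketched as follows. This is an illustrative example only, assuming the ExifTool command-line program is installed and called from Python; the tag names are standard EXIF/IPTC tags, and all values shown are placeholders rather than values from the actual survey.

        # Illustrative sketch only: embed position, time, and descriptive metadata
        # into a JPEG header by calling ExifTool from Python.
        import subprocess

        def tag_photo(path, lat, lon, timestamp, photographer, caption):
            """Write GPS, time, artist, credit, and caption tags into one JPEG."""
            subprocess.run([
                "exiftool",
                f"-GPSLatitude={lat}", "-GPSLatitudeRef=N",
                f"-GPSLongitude={abs(lon)}", "-GPSLongitudeRef=W",
                f"-DateTimeOriginal={timestamp}",
                f"-Artist={photographer}",
                f"-Caption-Abstract={caption}",
                "-Credit=U.S. Geological Survey",
                "-overwrite_original",
                path,
            ], check=True)

        # tag_photo("photo_0001.jpg", 29.883, -88.827, "2013:07:13 15:42:10",
        #           "K.L.M. Morgan", "Oblique view of Breton Island, Louisiana")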

  9. Baseline coastal oblique aerial photographs collected from Dauphin Island, Alabama, to Breton Island, Louisiana, August 8, 2012

    USGS Publications Warehouse

    Morgan, Karen L.M.; Westphal, Karen A.

    2014-01-01

    The U.S. Geological Survey (USGS) conducts baseline and storm response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms. On August 8, 2012, the USGS conducted an oblique aerial photographic survey from Dauphin Island, Alabama, to Breton Island, Louisiana, aboard a Cessna 172 at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect baseline data for assessing incremental changes since the last survey, and the data can be used in the assessment of future coastal change. The images provided here are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the configuration of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. Table 1 provides detailed information about the GPS location, name, date, and time of each of the 1,241 photographs taken, along with links to each photograph. The photography is organized into segments, also referred to as contact sheets, each representing approximately 5 minutes of flight time (also see the Photos and Maps page). In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files.

  10. Post-Hurricane Ike coastal oblique aerial photographs collected along the Alabama, Mississippi, and Louisiana barrier islands and the north Texas coast, September 14-15, 2008

    USGS Publications Warehouse

    Morgan, Karen L. M.; Krohn, M. Dennis; Guy, Kristy K.

    2016-04-28

    The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 14-15, 2008, the USGS conducted an oblique aerial photographic survey along the Alabama, Mississippi, and Louisiana barrier islands and the north Texas coast, aboard a Beechcraft Super King Air 200 (aircraft) at an altitude of 500 feet (ft) and approximately 1,200 ft offshore. This mission was flown to collect post-Hurricane Ike data for assessing incremental changes in the beach and nearshore area since the last survey, flown on September 9-10, 2008, and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. The KML file can be found in the kml folder.

  11. Photogrammetric Processing of IceBridge DMS Imagery into High-Resolution Digital Surface Models (DEM and Visible Overlay)

    NASA Astrophysics Data System (ADS)

    Arvesen, J. C.; Dotson, R. C.

    2014-12-01

    The DMS (Digital Mapping System) has been a sensor component of all DC-8 and P-3 IceBridge flights since 2009 and has acquired over 3 million JPEG images over Arctic and Antarctic land and sea ice. The DMS imagery is primarily used for identifying and locating open leads for LiDAR sea-ice freeboard measurements and for documenting snow and ice surface conditions. The DMS is a COTS Canon SLR camera utilizing a 28 mm focal length lens, resulting in a 10 cm GSD and a swath of ~400 meters from a nominal flight altitude of 500 meters. Exterior orientation is provided by an Applanix IMU/GPS, which records a TTL pulse coincident with image acquisition. Notable for virtually all IceBridge flights is that parallel grids are not flown, and thus there is no ability to photogrammetrically tie any imagery to adjacent flight lines. Approximately 800,000 Level-3 DMS Surface Model data products have been delivered to NSIDC, each consisting of a Digital Elevation Model (GeoTIFF DEM) and a co-registered Visible Overlay (GeoJPEG). Absolute elevation accuracy for each individual Elevation Model is adjusted to concurrent Airborne Topographic Mapper (ATM) lidar data, resulting in higher elevation accuracy than can be achieved by photogrammetry alone. The adjustment methodology forces a zero mean difference to the corresponding ATM point cloud integrated over each DMS frame (see the sketch after this list). Statistics are calculated for each DMS Elevation Model frame and show that RMS differences are within +/- 10 cm with respect to the ATM point cloud. The DMS Surface Model possesses similar elevation accuracy to the ATM point cloud, but with the following advantages:
    · Higher and uniform spatial resolution: 40 cm GSD
    · 45% wider swath: 435 meters vs. 300 meters at a 500-meter flight altitude
    · Visible RGB co-registered overlay at 10 cm GSD
    · Enhanced visualization through 3-dimensional virtual reality (i.e., video fly-through)
    Examples of the utility of these advantages will be presented, along with a novel use of a cell-phone camera for aerial photogrammetry.
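    The zero-mean adjustment described above can be illustrated with a minimal sketch. The array names and the nearest-grid-cell sampling of the DEM at ATM point locations are assumptions made for illustration; this is not the delivered processing code.

        # Illustrative sketch: shift a DMS elevation frame so that its mean
        # difference from co-located ATM lidar points is zero.
        import numpy as np

        def adjust_frame(dms_dem, grid_x, grid_y, atm_x, atm_y, atm_z):
            """Return the DEM shifted so mean(DEM - ATM) over this frame is zero."""
            # grid_x and grid_y are the ascending coordinate vectors of the DEM grid
            cols = np.clip(np.searchsorted(grid_x, atm_x), 0, dms_dem.shape[1] - 1)
            rows = np.clip(np.searchsorted(grid_y, atm_y), 0, dms_dem.shape[0] - 1)
            dem_at_atm = dms_dem[rows, cols]        # DEM sampled at ATM point locations
            offset = np.mean(dem_at_atm - atm_z)    # mean photogrammetric bias for the frame
            return dms_dem - offset                 # absolute elevations now tied to ATM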

  12. Post-Hurricane Isaac coastal oblique aerial photographs collected along the Alabama, Mississippi, and Louisiana barrier islands, September 2–3, 2012

    USGS Publications Warehouse

    Morgan, Karen L. M.; Westphal, Karen A.

    2016-04-21

    The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On September 2-3, 2012, the USGS conducted an oblique aerial photographic survey along the Alabama, Mississippi, and Louisiana barrier islands aboard a Cessna 172 (aircraft) at an altitude of 500 feet (ft) and approximately 1,000 ft offshore. This mission was flown to collect post-Hurricane Isaac data for assessing incremental changes in the beach and nearshore area since the last survey, flown in September 2008 (central Louisiana barrier islands) and June 2011 (Dauphin Island, Alabama, to Breton Island, Louisiana), and the data can be used in the assessment of future coastal change. The photographs provided in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML files were created using the photographic navigation files. These KML files can be found in the kml folder.

  13. How C2 Goes Wrong (Briefing Chart)

    DTIC Science & Technology

    2014-06-01

    [Briefing-chart excerpt: disaster/emergency response cases, including the Hillsborough disaster and the Columbine High School shooting; slide image links not reproduced.]

  14. Genomics & Genetics | National Agricultural Library

    Science.gov Websites

    National Agricultural Library dataset listing (agricultural and environmental settings); proximal sensing cart data are available as docx, xlsx, jpeg, and pdf files.

  15. The physical characteristics of the sediments on and surrounding Dauphin Island, Alabama

    USGS Publications Warehouse

    Ellis, Alisha M.; Marot, Marci E.; Smith, Christopher G.; Wheaton, Cathryn J.

    2017-06-20

    Scientists from the U.S. Geological Survey, St. Petersburg Coastal and Marine Science Center collected 303 surface sediment samples from Dauphin Island, Alabama, and the surrounding water bodies in August 2015. These sediments were processed to determine physical characteristics such as organic content, bulk density, and grain-size. The environments where the sediments were collected include high and low salt marshes, washover deposits, dunes, beaches, sheltered bays, and open water. Sampling by the USGS was part of a larger study to assess the feasibility and sustainability of proposed restoration efforts for Dauphin Island, Alabama, and assess the island’s resilience to rising sea level and storm events. The data presented in this publication can be used by modelers to attempt validation of hindcast models and create predictive forecast models for both baseline conditions and storms. This study was funded by the National Fish and Wildlife Foundation, via the Gulf Environmental Benefit Fund.This report serves as an archive for sedimentological data derived from surface sediments. Downloadable data are available as Excel spreadsheets, JPEG files, and formal Federal Geographic Data Committee metadata.

  16. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
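    As a rough illustration of the two-factor idea (a between-image error component plus a within-image component), the sketch below simulates such errors and recovers the two variance components. The distributions, magnitudes, and sample sizes are arbitrary assumptions, not the paper's experimental setup.

        # Illustrative simulation: each message-length estimate carries a per-image
        # bias plus per-attack noise; recover the two variance components.
        import numpy as np

        rng = np.random.default_rng(0)
        n_images, n_attacks = 500, 40
        between = rng.normal(0.0, 0.02, size=n_images)              # between-image bias
        within = rng.normal(0.0, 0.01, size=(n_images, n_attacks))  # within-image noise
        errors = between[:, None] + within                          # estimate minus true payload

        within_var = errors.var(axis=1, ddof=1).mean()
        between_var = errors.mean(axis=1).var(ddof=1) - within_var / n_attacks
        print(f"within-image var ~ {within_var:.5f}, between-image var ~ {between_var:.5f}")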

  17. 21 CFR 892.2030 - Medical image digitizer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image digitizer. 892.2030 Section 892.2030 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std.). [63 FR 23387, Apr. 29...

  18. 21 CFR 892.2040 - Medical image hardcopy device.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image hardcopy device. 892.2040 Section 892.2040 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture...

  19. Multi-Class Classification for Identifying JPEG Steganography Embedding Methods

    DTIC Science & Technology

    2008-09-01

    B.H. (2000). STEGANOGRAPHY: Hidden Images, A New Challenge in the Fight Against Child Porn. UPDATE, Volume 13, Number 2, pp. 1-4, Retrieved June 3... Other crimes involving the use of steganography include child pornography, where the stego files are used to hide a predator's location when posting

  20. 40 CFR Table 3 to Subpart Wwww of... - Organic HAP Emissions Limits for Existing Open Molding Sources, New Open Molding Sources Emitting...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Existing Open Molding Sources, New Open Molding Sources Emitting Less Than 100 TPY of HAP, and New and... CATEGORIES National Emissions Standards for Hazardous Air Pollutants: Reinforced Plastic Composites... Existing Open Molding Sources, New Open Molding Sources Emitting Less Than 100 TPY of HAP, and New and...

  1. The Chandra Source Catalog : Google Earth Interface

    NASA Astrophysics Data System (ADS)

    Glotfelty, Kenny; McLaughlin, W.; Evans, I.; Evans, J.; Anderson, C. S.; Bonaventura, N. R.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, H.; Houck, J. C.; Karovska, M.; Kashyap, V. L.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Mossman, A. E.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. R.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) contains multi-resolution, exposure-corrected, background-subtracted, full-field images that are stored as individual FITS files and as three-color JPEG files. In this poster we discuss how we took these data and were able to, with relatively minimal effort, convert them for use with the Google Earth application in its "Sky" mode. We will highlight some of the challenges, which include converting the data to the required Mercator projection, reworking the 3-color algorithm for pipeline processing, and ways to reduce the data volume through re-binning, using color maps, and special Keyhole Markup Language (KML) tags to only load images on demand. The result is a collection of some 11,000 3-color images that are available for all the individual observations in CSC Release 1. We have also made available all ~4,000 field-of-view outlines (with per-chip regions), which turn out to be trivial to produce starting from a simple dmlist command. In the first week of release, approximately 40% of the images have been accessed at least once through some 50,000 individual web hits, which have served over 4 GB of data to roughly 750 users in 60+ countries. We will also highlight some future directions we are exploring, including real-time catalog access to individual source properties and eventual access to file-based products such as FITS images, spectra, and light curves.
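    The on-demand loading mentioned above can be sketched with a KML Region wrapped around each ground overlay, so Google Earth fetches an image only when its region is in view. The Python snippet below writes such a fragment; the coordinates, level-of-detail threshold, and file names are placeholders rather than actual CSC values.

        # Illustrative sketch: a GroundOverlay with a Region so the image is only
        # loaded on demand when visible at sufficient size.
        def overlay_kml(name, href, north, south, east, west):
            """Return a KML GroundOverlay fragment that is loaded on demand."""
            box = (f"<north>{north}</north><south>{south}</south>"
                   f"<east>{east}</east><west>{west}</west>")
            return (f"<GroundOverlay><name>{name}</name>"
                    f"<Region><LatLonAltBox>{box}</LatLonAltBox>"
                    f"<Lod><minLodPixels>128</minLodPixels></Lod></Region>"
                    f"<Icon><href>{href}</href></Icon>"
                    f"<LatLonBox>{box}</LatLonBox></GroundOverlay>")

        doc = ("<?xml version='1.0' encoding='UTF-8'?>"
               "<kml xmlns='http://www.opengis.net/kml/2.2'><Document>"
               + overlay_kml("obs_00001", "obs_00001_3color.jpg", 10.1, 9.9, 83.9, 83.7)
               + "</Document></kml>")
        with open("overlays.kml", "w") as f:
            f.write(doc)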

  2. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
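    A minimal sketch of the generic block-transform scheme follows, using an 8x8 DCT with the standard JPEG luminance quantization table purely as an example; the report's generic model is transform-agnostic, and the table choice here is an assumption.

        # Illustrative sketch: transform an 8x8 block, quantize, dequantize, invert,
        # and measure the distortion introduced by quantization alone.
        import numpy as np
        from scipy.fft import dctn, idctn

        Q = np.array([  # standard JPEG luminance quantization table (example choice)
            [16, 11, 10, 16, 24, 40, 51, 61], [12, 12, 14, 19, 26, 58, 60, 55],
            [14, 13, 16, 24, 40, 57, 69, 56], [14, 17, 22, 29, 51, 87, 80, 62],
            [18, 22, 37, 56, 68, 109, 103, 77], [24, 35, 55, 64, 81, 104, 113, 92],
            [49, 64, 78, 87, 103, 121, 120, 101], [72, 92, 95, 98, 112, 100, 103, 99]])

        def quantize_block(block):
            """Transform, quantize, dequantize, and inverse-transform one 8x8 block."""
            coeffs = dctn(block - 128.0, norm="ortho")   # level shift, then 2-D DCT
            quantized = np.round(coeffs / Q)             # quantization: the lossy step
            return idctn(quantized * Q, norm="ortho") + 128.0

        block = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
        mse = np.mean((quantize_block(block) - block) ** 2)  # distortion from quantization alone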

  3. Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada

    2009-08-01

    Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both the spectral and the spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial-spectral endmember extraction (SSEE) techniques. Experimental results are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada, with detailed ground-truth information available from the U.S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
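    One common way to quantify how compression changes extracted endmembers is the spectral angle between an endmember from the original cube and its counterpart from the compressed cube. The sketch below is illustrative only (placeholder data, not the paper's evaluation code); smaller angles mean better-preserved endmembers.

        # Illustrative sketch: spectral angle between two endmember spectra.
        import numpy as np

        def spectral_angle(a, b):
            """Angle in radians between two spectra."""
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        rng = np.random.default_rng(2)
        e_original = rng.random(224)                           # e.g., 224 AVIRIS bands
        e_compressed = e_original + rng.normal(0, 0.01, 224)   # placeholder perturbation
        print(f"spectral angle: {spectral_angle(e_original, e_compressed):.4f} rad")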

  4. Phoenix Telemetry Processor

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice

    2013-01-01

    Phxtelemproc is a C/C++ based telemetry processing program that processes SFDU telemetry packets from the Telemetry Data System (TDS). It generates Experiment Data Records (EDRs) for several instruments including surface stereo imager (SSI); robotic arm camera (RAC); robotic arm (RA); microscopy, electrochemistry, and conductivity analyzer (MECA); and the optical microscope (OM). It processes both uncompressed and compressed telemetry, and incorporates unique subroutines for the following compression algorithms: JPEG Arithmetic, JPEG Huffman, Rice, LUT3, RA, and SX4. This program was in the critical path for the daily command cycle of the Phoenix mission. The products generated by this program were part of the RA commanding process, as well as the SSI, RAC, OM, and MECA image and science analysis process. Its output products were used to advance science of the near polar regions of Mars, and were used to prove that water is found in abundance there. Phxtelemproc is part of the MIPL (Multi-mission Image Processing Laboratory) system. This software produced Level 1 products used to analyze images returned by in situ spacecraft. It ultimately assisted in operations, planning, commanding, science, and outreach.

  5. The Emergence of Open-Source Software in China

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    The open-source software movement is gaining increasing momentum in China. Of the limited number of open-source software packages in China, "Red Flag Linux" stands out most strikingly, commanding a 30 percent share of the Chinese software market. Unlike the spontaneity of the open-source movement in North America, open-source software development in…

  6. A Study of Clinically Related Open Source Software Projects

    PubMed Central

    Hogarth, Michael A.; Turner, Stuart

    2005-01-01

    Open source software development has recently gained significant interest due to several successful mainstream open source projects. This methodology has been proposed as being similarly viable and beneficial in the clinical application domain as well. However, the clinical software development venue differs significantly from the mainstream software venue. Existing clinical open source projects have not been well characterized nor formally studied so the ‘fit’ of open source in this domain is largely unknown. In order to better understand the open source movement in the clinical application domain, we undertook a study of existing open source clinical projects. In this study we sought to characterize and classify existing clinical open source projects and to determine metrics for their viability. This study revealed several findings which we believe could guide the healthcare community in its quest for successful open source clinical software projects. PMID:16779056

  7. The President and the Galaxy

    NASA Astrophysics Data System (ADS)

    2004-12-01

    On December 9-10, 2004, the ESO Paranal Observatory was honoured with an overnight visit by His Excellency the President of the Republic of Chile, Ricardo Lagos and his wife, Mrs. Luisa Duran de Lagos. The high guests were welcomed by the ESO Director General, Dr. Catherine Cesarsky, ESO's representative in Chile, Mr. Daniel Hofstadt, and Prof. Maria Teresa Ruiz, Head of the Astronomy Department at the Universidad de Chile, as well as numerous ESO staff members working at the VLT site. The visit was characterised as private, and the President spent a considerable time in pleasant company with the Paranal staff, talking with and getting explanations from everybody. The distinguished visitors were shown the various high-tech installations at the observatory, including the Interferometric Tunnel with the VLTI delay lines and the first Auxiliary Telescope. Explanations were given by ESO astronomers and engineers and the President, a keen amateur astronomer, gained a good impression of the wide range of exciting research programmes that are carried out with the VLT. President Lagos showed a deep interest and impressed everyone present with many, highly relevant questions. Having enjoyed the spectacular sunset over the Pacific Ocean from the Residence terrace, the President met informally with the Paranal employees who had gathered for this unique occasion. Later, President Lagos visited the VLT Control Room from where the four 8.2-m Unit Telescopes and the VLT Interferometer (VLTI) are operated. Here, the President took part in an observing sequence of the spiral galaxy NGC 1097 (see PR Photo 35d/04) from the console of the MELIPAL telescope. After one more visit to the telescope platform at the top of Paranal, the President and his wife left the Observatory in the morning of December 10, 2004, flying back to Santiago. ESO PR Photo 35e/04 ESO PR Photo 35e/04 President Lagos Meets with ESO Staff at the Paranal Residencia [Preview - JPEG: 400 x 267pix - 144k] [Normal - JPEG: 640 x 427 pix - 240k] ESO PR Photo 35f/04 ESO PR Photo 35f/04 The Presidential Couple with Professor Maria Teresa Ruiz and the ESO Director General [Preview - JPEG: 500 x 400 pix - 224k] [Normal - JPEG: 1000 x 800 pix - 656k] [FullRes - JPEG: 1575 x 1260 pix - 1.0M] ESO PR Photo 35g/04 ESO PR Photo 35g/04 President Lagos with ESO Staff [Preview - JPEG: 500 x 400 pix - 192k] [Normal - JPEG: 1000 x 800 pix - 592k] [FullRes - JPEG: 1575 x 1200 pix - 1.1M] Captions: ESO PR Photo 35e/04 was obtained during President Lagos' meeting with ESO Staff at the Paranal Residencia. On ESO PR Photo 35f/04, President Lagos and Mrs. Luisa Duran de Lagos are seen at a quiet moment during the visit to the VLT Control Room, together with Prof. Maria Teresa Ruiz (far right), Head of the Astronomy Department at the Universidad de Chile, and the ESO Director General. ESO PR Photo 35g/04 shows President Lagos with some ESO staff members in the Paranal Residencia. VLT obtains a splendid photo of a unique galaxy, NGC 1097 ESO PR Photo 35d/04 ESO PR Photo 35d/04 Spiral Galaxy NGC 1097 (Melipal + VIMOS) [Preview - JPEG: 400 x 525 pix - 181k] [Normal - JPEG: 800 x 1049 pix - 757k] [FullRes - JPEG: 2296 x 3012 pix - 7.9M] Captions: ESO PR Photo 35d/04 is an almost-true colour composite based on three images made with the multi-mode VIMOS instrument on the 8.2-m Melipal (Unit Telescope 3) of ESO's Very Large Telescope. They were taken on the night of December 9-10, 2004, in the presence of the President of the Republic of Chile, Ricardo Lagos. 
Details are available in the Technical Note below. A unique and very beautiful image was obtained with the VIMOS instrument with President Lagos at the control desk. Located at a distance of about 45 million light-years in the southern constellation Fornax (the Furnace), NGC 1097 is a relatively bright, barred spiral galaxy of type SBb, seen face-on. At magnitude 9.5, and thus just 25 times fainter than the faintest object that can be seen with the unaided eye, it appears in small telescopes as a bright, circular disc. ESO PR Photo 35d/04, taken on the night of December 9 to 10, 2004, with the VIsible Multi-Object Spectrograph (VIMOS), a four-channel multiobject spectrograph and imager attached to the 8.2-m VLT Melipal telescope, shows that the real structure is much more complicated. NGC 1097 is indeed a most interesting object in many respects. As this striking image reveals, NGC 1097 presents a centre that consists of a broken ring of bright knots surrounding the galaxy's nucleus. The sizes of these knots (presumably gigantic bubbles of hydrogen atoms having lost one electron, i.e. HII regions, through the intense radiation from luminous massive stars) range from roughly 750 to 2000 light-years. The presence of these knots suggests that an energetic burst of star formation has recently occurred. NGC 1097 is also known as an example of the so-called LINER (Low-Ionization Nuclear Emission Region Galaxies) class. Objects of this type are believed to be low-luminosity examples of Active Galactic Nuclei (AGN), whose emission is thought to arise from matter (gas and stars) falling into oblivion in a central black hole. There is indeed much evidence that a supermassive black hole is located at the very centre of NGC 1097, with a mass of several tens of millions of times the mass of the Sun. This is at least ten times more massive than the central black hole in our own Milky Way. However, NGC 1097 possesses a comparatively faint nucleus only, and the black hole in its centre must be on a very strict "diet": only a small amount of gas and stars is apparently being swallowed by the black hole at any given moment. A turbulent past: As can be clearly seen in the upper part of PR Photo 35d/04, NGC 1097 also has a small galaxy companion; it is designated NGC 1097A and is located about 42,000 light-years away from the centre of NGC 1097. This peculiar elliptical galaxy is 25 times fainter than its big brother and has a "box-like" shape, not unlike NGC 6771, the smallest of the three galaxies that make up the famous Devil's Mask, cf. ESO PR Photo 12/04. There is evidence that NGC 1097 and NGC 1097A have been interacting in the recent past. Another piece of evidence for this galaxy's tumultuous past is the presence of four jets (not visible on this image) discovered in the 1970s on photographic plates. These jets are now believed to be the captured remains of a disrupted dwarf galaxy that passed through the inner part of the disc of NGC 1097. Moreover, another interesting feature of this active galaxy is the fact that no less than two supernovae were detected inside it within a time span of only four years. SN 1999eu was discovered by Japanese amateur Masakatsu Aoki (Toyama, Japan) on November 5, 1999. This 17th-magnitude supernova was a peculiar Type II supernova, the end result of the core collapse of a very massive star. And on the night of January 5 to 6, 2003, Reverend Robert Evans (Australia) discovered another Type II supernova of 15th magnitude.
Also visible in this very nice image, which was taken during very good sky conditions (the seeing was well below 1 arcsec), is a multitude of background galaxies of different colours and shapes. Given that the total exposure time for this three-colour image was just 11 minutes, it is a remarkable feat, demonstrating once again the very high efficiency of the VLT.

  8. Open Access, Open Source and Digital Libraries: A Current Trend in University Libraries around the World

    ERIC Educational Resources Information Center

    Krishnamurthy, M.

    2008-01-01

    Purpose: The purpose of this paper is to describe the open access and open source movement in the digital library world. Design/methodology/approach: A review of key developments in the open access and open source movement is provided. Findings: Open source software and open access to research findings are of great use to scholars in developing…

  9. New Open-Source Version of FLORIS Released | News | NREL

    Science.gov Websites

    New Open-Source Version of FLORIS Released, January 26, 2018. National Renewable Energy Laboratory (NREL) researchers recently released an updated open-source version of FLORIS that has been simplified and documented. Because of the living, open-source nature of the newly updated utility, NREL…

  10. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
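    The automatic labeling idea, joint entropy between a compressed and an uncompressed image, can be sketched as follows. The histogram binning and any threshold mapping entropy to an "acceptable" label are assumptions for illustration, not the authors' pipeline.

        # Illustrative sketch: joint Shannon entropy of an uncompressed image and
        # its JPEG-compressed version.
        import numpy as np

        def joint_entropy(img_a, img_b, bins=256):
            """Joint Shannon entropy (bits) of two equally sized grayscale images."""
            hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        # If the compressed image were a perfect copy, the joint entropy would equal
        # the original image's entropy; compression artifacts raise it, so the excess
        # over the original's entropy can serve as an automatic quality label.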

  11. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to instead transmit digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality, including perceptual optimization of DCT color quantization matrices. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group), 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bitrates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bitrates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. (5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.

  12. The successes and challenges of open-source biopharmaceutical innovation.

    PubMed

    Allarakhia, Minna

    2014-05-01

    Increasingly, open-source-based alliances seek to provide broad access to data, research-based tools, preclinical samples and downstream compounds. The challenge is how to create value from open-source biopharmaceutical innovation. This value creation may occur via transparency and usage of data across the biopharmaceutical value chain as stakeholders move dynamically between open source and open innovation. In this article, several examples are used to trace the evolution of biopharmaceutical open-source initiatives. The article specifically discusses the technological challenges associated with the integration and standardization of big data; the human capacity development challenges associated with skill development around big data usage; and the data-material access challenge associated with data and material access and usage rights, particularly as the boundary between open source and open innovation becomes more fluid. It is the author's opinion that the assessment of when and how value creation will occur, through open-source biopharmaceutical innovation, is paramount. The key is to determine the metrics of value creation and the necessary technological, educational and legal frameworks to support the downstream outcomes of now big data-based open-source initiatives. The continued focus on the early-stage value creation is not advisable. Instead, it would be more advisable to adopt an approach where stakeholders transform open-source initiatives into open-source discovery, crowdsourcing and open product development partnerships on the same platform.

  13. Black Hole in Search of a Home

    NASA Astrophysics Data System (ADS)

    2005-09-01

    Astronomers Discover Bright Quasar Without Massive Host Galaxy An international team of astronomers [1] used two of the most powerful astronomical facilities available, the ESO Very Large Telescope (VLT) at Cerro Paranal and the Hubble Space Telescope (HST), to conduct a detailed study of 20 low-redshift quasars. For 19 of them, they found, as expected, that these super massive black holes are surrounded by a host galaxy. But when they studied the bright quasar HE0450-2958, located some 5 billion light-years away, they couldn't find evidence for an encircling galaxy. This, the astronomers suggest, may indicate a rare case of collision between a seemingly normal spiral galaxy and a much more exotic object harbouring a very massive black hole. With masses up to hundreds of millions of times that of the Sun, "super massive" black holes are the most tantalizing objects known. Hiding in the centre of most large galaxies, including our own Milky Way (see ESO PR 26/03), they sometimes manifest themselves by devouring matter they engulf from their surroundings. Shining up to the largest distances, they are then called "quasars" or "QSOs" (for "quasi-stellar objects"), as they had initially been confused with stars. Decades of observations of quasars have suggested that they are always associated with massive host galaxies. However, observing the host galaxy of a quasar is challenging work, because the quasar is radiating so energetically that its host galaxy is hard to detect in the flare. ESO PR Photo 28a/05 Two Quasars with their Host Galaxy [Preview - JPEG: 400 x 760 pix - 82k] [Normal - JPEG: 800 x 1520 pix - 395k] [Full Res - JPEG: 1722 x 3271 pix - 4.0M] Caption: ESO PR Photo 28a/05 shows two examples of quasars from the sample studied by the astronomers, where the host galaxy is obvious. In each case, the quasar is the bright central spot. The host of HE1239-2426 (left), a z=0.082 quasar, displays large spiral arms, while the host of HE1503+0228 (right), having a redshift of 0.135, is fuzzier and shows only hints of spiral arms. Although these particular objects are rather close to us and therefore constitute easy targets, their hosts would still be perfectly visible at much higher redshift, including at distances as large as that of HE0450-2958 (z=0.285). The observations were done with the ACS camera on the HST. ESO PR Photo 28b/05 The Quasar without a Home: HE0450-2958 [Preview - JPEG: 400 x 760 pix - 53k] [Normal - JPEG: 800 x 1520 pix - 197k] [Full Res - JPEG: 1718 x 3265 pix - 1.5M] Caption of ESO PR Photo 28b/05: (Left) HST image of the z=0.285 quasar HE0450-2958. No obvious host galaxy centred on the quasar is seen. Only a strongly disturbed and star-forming companion galaxy is seen near the top of the image. (Right) Same image shown after applying an efficient image sharpening method known as MCS-deconvolution. In contrast to the usual cases, such as the ones shown in ESO PR Photo 28a/05, the quasar is not situated at the centre of an extended host galaxy, but on the edge of a compact structure, whose spectra (see ESO PR Photo 28c/05) show it to be composed of gas ionised by the quasar radiation. This gas may have been captured through a collision with the star-forming galaxy. The star indicated on the figure is a nearby galactic star seen by chance in the field of view. To overcome this problem, the astronomers devised a new and highly efficient strategy.
Using ESO's VLT for spectroscopy and HST for imagery, they observed their quasars at the same time as a reference star. Simultaneous observation of a star allowed them to best measure the shape of the quasar point source on spectra and images, and further to separate the quasar light from the other contribution, i.e. from the underlying galaxy itself. This very powerful image and spectra sharpening method ("MCS deconvolution") was applied to these data in order to detect the finest details of the host galaxy (see e.g. ESO PR 19/03). Using this efficient technique, the astronomers could detect a host galaxy for all but one of the quasars they studied. No stellar environment was found for HE0450-2958, suggesting that if any host galaxy exists, it must either have a luminosity at least six times fainter than expected a priori from the quasar's observed luminosity, or a radius smaller than about 300 light-years. Typical radii for quasar host galaxies range between 6,000 and 50,000 light-years, i.e. they are at least 20 to 170 times larger. "With the data we managed to secure with the VLT and the HST, we would have been able to detect a normal host galaxy", says Pierre Magain (Université de Liège, Belgium), lead author of the paper reporting the study. "We must therefore conclude that, contrary to our expectations, this bright quasar is not surrounded by a massive galaxy." Instead, the astronomers detected just beside the quasar a bright cloud of about 2,500 light-years in size, which they baptized "the blob". The VLT observations show this cloud to be composed only of gas ionised by the intense radiation coming from the quasar. It is probably the gas of this cloud which is feeding the supermassive black hole, allowing it to become a quasar. ESO PR Photo 28c/05 Spectrum of Quasar HE0450-2958, the Blob and the Companion Galaxy (FORS/VLT) [Preview - JPEG: 400 x 561 pix - 112k] [Normal - JPEG: 800 x 1121 pix - 257k] [HiRes - JPEG: 2332 x 3268 pix - 1.1M] Caption: ESO PR Photo 28c/05 presents the spectra of the three objects indicated in ESO PR Photo 28b/05 as obtained with FORS1 on ESO's Very Large Telescope. The spectrum of the companion galaxy shown on the top panel reveals strong star formation. Thanks to the image sharpening process, it has been possible to separate very well the spectrum of the quasar (centre) from that of the blob (bottom). The spectrum of the blob shows exclusively strong narrow emission lines having properties indicative of ionisation by the quasar light. There is no trace of stellar light, down to very faint levels, in the surroundings of the quasar. A strongly perturbed galaxy, showing all signs of a recent collision, is also seen on the HST images 2 arcseconds away (corresponding to about 50,000 light-years), with the VLT spectra showing it to be presently in a state where it forms stars at a frantic rate. "The absence of a massive host galaxy, combined with the existence of the blob and the star-forming galaxy, leads us to believe that we have uncovered a really exotic quasar," says team member Frédéric Courbin (Ecole Polytechnique Fédérale de Lausanne, Switzerland). "There is little doubt that a burst in the formation of stars in the companion galaxy and the quasar itself have been ignited by a collision that must have taken place about 100 million years ago. What happened to the putative quasar host remains unknown." HE0450-2958 constitutes a challenging case of interpretation.
The astronomers propose several possible explanations that will need to be further investigated and confronted. Has the host galaxy been completely disrupted as a result of the collision? It is hard to imagine how that could happen. Has an isolated black hole captured gas while crossing the disc of a spiral galaxy? This would require very special conditions and would probably not have caused such a tremendous perturbation as is observed in the neighbouring galaxy. Another intriguing hypothesis is that the galaxy harbouring the black hole was almost exclusively made of dark matter. "Whatever the solution of this riddle, the strong observable fact is that the quasar host galaxy, if any, is much too faint", says team member Knud Jahnke (Astrophysikalisches Institut Potsdam, Germany). The report on HE0450-2958 is published in the September 15, 2005 issue of the journal Nature ("Discovery of a bright quasar without a massive host galaxy" by Pierre Magain et al.).

  14. Open for Business

    ERIC Educational Resources Information Center

    Voyles, Bennett

    2007-01-01

    People know about the Sakai Project (open source course management system); they may even know about Kuali (open source financials). So, what is the next wave in open source software? This article discusses business intelligence (BI) systems. Though open source BI may still be only a rumor in most campus IT departments, some brave early adopters…

  15. 76 FR 62134 - Bureau of Consular Affairs; Registration for the Diversity Immigrant (DV-2013) Visa Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-06

    ... Resident. We will not accept group or family photographs; you must include a separate photograph for each... new digital image: The image file format must be in the Joint Photographic Experts Group (JPEG) format... Web site four to six weeks before the scheduled interviews with U.S. consular officers at overseas...

  16. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  17. The Commercial Open Source Business Model

    NASA Astrophysics Data System (ADS)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  18. High-Resolution Seismic-Reflection and Marine Magnetic Data Along the Hosgri Fault Zone, Central California

    USGS Publications Warehouse

    Sliter, Ray W.; Triezenberg, Peter J.; Hart, Patrick E.; Watt, Janet T.; Johnson, Samuel Y.; Scheirer, Daniel S.

    2009-01-01

    The U.S. Geological Survey (USGS) collected high-resolution shallow seismic-reflection and marine magnetic data in June 2008 in the offshore areas between the towns of Cayucos and Pismo Beach, Calif., from the nearshore (~6-m depth) to just west of the Hosgri Fault Zone (~200-m depth). These data are in support of the California State Waters Mapping Program and the Cooperative Research and Development Agreement (CRADA) between the Pacific Gas & Electric Co. and the U.S. Geological Survey. Seismic-reflection and marine magnetic data were acquired aboard the R/V Parke Snavely, using a SIG 2Mille minisparker seismic source and a Geometrics G882 cesium-vapor marine magnetometer. More than 550 km of seismic and marine magnetic data was collected simultaneously along shore-perpendicular transects spaced 800 m apart, with an additional 220 km of marine magnetometer data collected across the Hosgri Fault Zone, resulting in spacing locally as small as 400 m. This report includes maps of the seismic-survey sections, linked to Google Earth software, and digital data files showing images of each transect in SEG-Y, JPEG, and TIFF formats, as well as preliminary gridded marine-magnetic-anomaly and residual-magnetic-anomaly (shallow magnetic source) maps.

  19. Openly Published Environmental Sensing (OPEnS) | Advancing Open-Source Research, Instrumentation, and Dissemination

    NASA Astrophysics Data System (ADS)

    Udell, C.; Selker, J. S.

    2017-12-01

    The increasing availability and functionality of Open-Source software and hardware along with 3D printing, low-cost electronics, and proliferation of open-access resources for learning rapid prototyping are contributing to fundamental transformations and new technologies in environmental sensing. These tools invite reevaluation of time-tested methodologies and devices toward more efficient, reusable, and inexpensive alternatives. Building upon Open-Source design facilitates community engagement and invites a Do-It-Together (DIT) collaborative framework for research where solutions to complex problems may be crowd-sourced. However, barriers persist that prevent researchers from taking advantage of the capabilities afforded by open-source software, hardware, and rapid prototyping. Some of these include: requisite technical skillsets, knowledge of equipment capabilities, identifying inexpensive sources for materials, money, space, and time. A university MAKER space staffed by engineering students to assist researchers is one proposed solution to overcome many of these obstacles. This presentation investigates the unique capabilities the USDA-funded Openly Published Environmental Sensing (OPEnS) Lab affords researchers, within Oregon State and internationally, and the unique functions these types of initiatives support at the intersection of MAKER spaces, Open-Source academic research, and open-access dissemination.

  20. Open-source software: not quite endsville.

    PubMed

    Stahl, Matthew T

    2005-02-01

    Open-source software will never achieve ubiquity. There are environments in which it simply does not flourish. By its nature, open-source development requires free exchange of ideas, community involvement, and the efforts of talented and dedicated individuals. However, pressures can come from several sources that prevent this from happening. In addition, openness and complex licensing issues invite misuse and abuse. Care must be taken to avoid the pitfalls of open-source software.

  1. Developing an Open Source Option for NASA Software

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Parks, John W. (Technical Monitor)

    2003-01-01

    We present arguments in favor of developing an Open Source option for NASA software; in particular we discuss how Open Source is compatible with NASA's mission. We compare and contrast several of the leading Open Source licenses, and propose one, the Mozilla license, for use by NASA. We also address some of the related issues for NASA with respect to Open Source. In particular, we discuss some of the elements in the External Release of NASA Software document (NPG 2210.1A) that will likely have to be changed in order to make Open Source a reality within the agency.

  2. WMS Server 2.0

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Wood, James F.

    2012-01-01

    This software is a simple, yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of OGC WMS 1.1.1 as a FastCGI application, using the Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are done on a back server. The server has explicit support for a colocated tiled WMS, including rapid response to black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back-end support allows great flexibility in data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use, and depending on the storage format used, it has better performance than other available implementations. WMS Server 2.0 is a high-performance WMS implementation due to the FastCGI architecture, and the use of a GDAL data back end allows for great flexibility. The configuration is relatively simple, based on a single XML file. The server provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
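    For reference, a standard WMS 1.1.1 GetMap request of the kind such a server answers can be built as follows; the host and layer names are placeholders, while the query parameters are the standard WMS 1.1.1 ones.

        # Illustrative sketch: construct a WMS 1.1.1 GetMap URL requesting a JPEG map.
        import urllib.parse

        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": "example_layer", "STYLES": "",
            "SRS": "EPSG:4326", "BBOX": "-180,-90,180,90",
            "WIDTH": "1024", "HEIGHT": "512", "FORMAT": "image/jpeg",
        }
        url = "http://wms.example.org/wms?" + urllib.parse.urlencode(params)
        # An HTTP GET on this URL (e.g., urllib.request.urlopen(url).read())
        # returns the rendered map as JPEG bytes.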

  3. Providing Internet Access to High-Resolution Lunar Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.

  4. Providing Internet Access to High-Resolution Mars Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.

  5. Open-Source Data and the Study of Homicide.

    PubMed

    Parkin, William S; Gruenewald, Jeff

    2015-07-20

    To date, no discussion has taken place in the social sciences as to the appropriateness of using open-source data to augment, or replace, official data sources in homicide research. The purpose of this article is to examine whether open-source data have the potential to be used as a valid and reliable data source in testing theory and studying homicide. Official and open-source homicide data were collected as a case study in a single jurisdiction over a 1-year period. The data sets were compared to determine whether open-sources could recreate the population of homicides and variable responses collected in official data. Open-source data were able to replicate the population of homicides identified in the official data. Also, for every variable measured, the open-sources captured as much, or more, of the information presented in the official data. Also, variables not available in official data, but potentially useful for testing theory, were identified in open-sources. The results of the case study show that open-source data are potentially as effective as official data in identifying individual- and situational-level characteristics, provide access to variables not found in official homicide data, and offer geographic data that can be used to link macro-level characteristics to homicide events. © The Author(s) 2015.

  6. Post-hurricane Joaquin Coastal Oblique Aerial Photographs Collected from the South Carolina/North Carolina Border to Montauk Point, New York, October 7–9, 2015

    USGS Publications Warehouse

    Morgan, Karen L.M.

    2016-06-27

    The U.S. Geological Survey (USGS), as part of the National Assessment of Coastal Change Hazards project, conducts baseline and storm-response photography missions to document and understand the changes in vulnerability of the Nation's coasts to extreme storms (Morgan, 2009). On October 7–9, 2015, the USGS conducted an oblique aerial photographic survey of the coast from the South Carolina/North Carolina border to Montauk Point, New York (fig. 1), aboard a Cessna 182 (aircraft) at an altitude of 500 feet (ft) and approximately 1,200 ft offshore (fig. 2). This mission was conducted to collect post-Hurricane Joaquin data for assessing incremental changes in the beach and nearshore area since the last surveys, missions flown in September 2014 (Virginia to New York: Morgan, 2015), November 2012 (northern North Carolina: Morgan and others, 2014), and May 2008 (southern North Carolina: unpublished report); the data can also be used to assess future coastal change. The photographs in this report are Joint Photographic Experts Group (JPEG) images. ExifTool was used to add the following to the header of each photo: time of collection, Global Positioning System (GPS) latitude, GPS longitude, keywords, credit, artist (photographer), caption, copyright, and contact information. The photograph locations are an estimate of the position of the aircraft at the time the photograph was taken and do not indicate the location of any feature in the images (see the Navigation Data page). These photographs document the state of the barrier islands and other coastal features at the time of the survey. Pages containing thumbnail images of the photographs, referred to as contact sheets, were created in 5-minute segments of flight time. These segments can be found on the Photos and Maps page. Photographs can be opened directly with any JPEG-compatible image viewer by clicking on a thumbnail on the contact sheet. In addition to the photographs, a Google Earth Keyhole Markup Language (KML) file is provided and can be used to view the images by clicking on the marker and then clicking on either the thumbnail or the link above the thumbnail. The KML file was created using the photographic navigation files. This KML file can be found in the kml folder.
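
    Since the report describes ExifTool writing capture time, GPS position, artist, and copyright into each JPEG header, a consumer of these photographs can read the same tags back programmatically. The following is a minimal sketch assuming the Pillow library and a hypothetical file name; the tag names come from the standard EXIF registry, not from the report.

      from PIL import Image
      from PIL.ExifTags import TAGS, GPSTAGS

      with Image.open("photo_example.jpg") as img:   # hypothetical file name
          exif = img.getexif()

      # Main IFD tags such as DateTime, Artist, and Copyright
      for tag_id, value in exif.items():
          name = TAGS.get(tag_id, tag_id)
          if name in ("DateTime", "Artist", "Copyright"):
              print(f"{name}: {value}")

      # GPS tags live in a separate IFD (0x8825); get_ifd needs a recent Pillow release
      gps_ifd = exif.get_ifd(0x8825)
      for tag_id, value in gps_ifd.items():
          print(f"{GPSTAGS.get(tag_id, tag_id)}: {value}")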

  7. Forensic Analysis of Digital Image Tampering

    DTIC Science & Technology

    2004-12-01

    [The available excerpt is fragmentary, consisting of sentence fragments and figure-list entries. The recoverable content indicates that the report analyzes when each tamper-detection method fails (discussed in Chapter 4) and includes a test image carrying an invisible watermark embedded with the LSB-steganography tool F5 (Figure 2.2), an example of copy-move image forgery (Figure 2.3), an algorithm for a JPEG block technique (Figure 3.11), and a "forged" image with its detection result (Figure 3.12).]
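
    For readers unfamiliar with the watermarking technique named in the excerpt, the sketch below illustrates plain least-significant-bit (LSB) embedding in the spatial domain using NumPy. Note that the F5 tool cited in the report actually embeds in JPEG DCT coefficients, so this shows only the generic LSB idea, not F5 itself.

      import numpy as np

      def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
          """Replace the least significant bit of the first len(bits) pixels."""
          flat = pixels.flatten()
          flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
          return flat.reshape(pixels.shape)

      def extract_lsb(pixels: np.ndarray, n_bits: int) -> np.ndarray:
          return pixels.flatten()[:n_bits] & 1

      rng = np.random.default_rng(0)
      cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy 8x8 "image"
      message = rng.integers(0, 2, size=16, dtype=np.uint8)       # 16 payload bits

      stego = embed_lsb(cover, message)
      assert np.array_equal(extract_lsb(stego, message.size), message)
      # Each modified pixel changes by at most 1, which is why the mark is "invisible"
      print("Maximum pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))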

  8. How Is Open Source Special?

    ERIC Educational Resources Information Center

    Kapor, Mitchell

    2005-01-01

    Open source software projects involve the production of goods, but in software projects, the "goods" consist of information. The open source model is an alternative to the conventional centralized, command-and-control way in which things are usually made. In contrast, open source projects are genuinely decentralized and transparent. Transparent…

  9. Video segmentation for post-production

    NASA Astrophysics Data System (ADS)

    Wills, Ciaran

    2001-12-01

    Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects are quite different in nature from the typical broadcast material which many video analysis techniques are designed to work with; shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm which tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG compressed video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. Analyzing the DCT coefficients directly we can extract the mean color of a block and an approximate detail level. We can also perform an approximated cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.
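
    The key shortcut the abstract relies on is that the block mean is available directly from the transform: for the orthonormal 2-D DCT of an 8x8 block (the block size used by JPEG), the DC coefficient equals 8 times the block mean. A small sketch with NumPy and SciPy, assuming nothing beyond that relationship:

      import numpy as np
      from scipy.fft import dctn

      rng = np.random.default_rng(1)
      block = rng.integers(0, 256, size=(8, 8)).astype(float)   # one 8x8 luminance block

      coeffs = dctn(block, norm="ortho")   # 2-D type-II DCT, orthonormal scaling
      dc = coeffs[0, 0]

      print("DC coefficient:", dc)
      print("8 x block mean:", 8 * block.mean())
      assert np.isclose(dc, 8 * block.mean())   # mean color without a full inverse DCT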

  10. Privacy-preserving photo sharing based on a public key infrastructure

    NASA Astrophysics Data System (ADS)

    Yuan, Lin; McNally, David; Küpçü, Alptekin; Ebrahimi, Touradj

    2015-09-01

    A significant number of pictures are posted to social media sites or exchanged through instant messaging and cloud-based sharing services. Most social media services offer a range of access control mechanisms to protect users' privacy. As it is not in the best interest of many such services if their users restrict access to their shared pictures, most services keep users' photos unprotected, which makes them available to all insiders. This paper presents an architecture for privacy-preserving photo sharing based on an image scrambling scheme and a public key infrastructure. A secure JPEG scrambling is applied to protect regional visual information in photos. Protected images are still compatible with JPEG coding and can therefore be viewed by anyone on any device. However, only those who are granted secret keys will be able to descramble the photos and view their original versions. The proposed architecture applies attribute-based encryption along with conventional public key cryptography to achieve secure transmission of secret keys and fine-grained control over who may view shared photos. In addition, we demonstrate the practical feasibility of the proposed photo sharing architecture with a prototype mobile application, ProShare, built on the iOS platform.
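
    The key-distribution side of such an architecture can be illustrated with its conventional public-key component: a per-photo symmetric key (used for scrambling and descrambling) is wrapped with each authorized recipient's public key. The sketch below assumes the Python cryptography package and omits the attribute-based encryption layer the paper also uses.

      import os
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      # Recipient key pair (in practice issued and certified through the PKI)
      private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      public_key = private_key.public_key()

      photo_key = os.urandom(32)   # symmetric key that scrambles/descrambles one photo

      oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)

      # Sender: wrap the photo key for this recipient
      wrapped = public_key.encrypt(photo_key, oaep)

      # Recipient: unwrap with the private key, then descramble the shared photo
      recovered = private_key.decrypt(wrapped, oaep)
      assert recovered == photo_key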

  11. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  12. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
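
    To make the codebook step concrete, the sketch below trains a vector-quantization codebook with plain k-means on fixed-size coefficient blocks and codes each block by its nearest codeword index. This is a generic illustration only; the paper's energy-based modified K-means and quadtree-chosen variable block sizes are not reproduced here.

      import numpy as np

      def train_codebook(vectors, k=16, iters=20, seed=0):
          rng = np.random.default_rng(seed)
          codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
          for _ in range(iters):
              # Assign every vector to its nearest codeword ...
              dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
              labels = dists.argmin(axis=1)
              # ... then move each codeword to the centroid of its assigned vectors
              for j in range(k):
                  members = vectors[labels == j]
                  if len(members):
                      codebook[j] = members.mean(axis=0)
          return codebook

      def encode(vectors, codebook):
          dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
          return dists.argmin(axis=1)   # only these indices need to be stored

      rng = np.random.default_rng(1)
      blocks = rng.normal(size=(500, 16))   # e.g. flattened 4x4 coefficient blocks
      codebook = train_codebook(blocks)
      indices = encode(blocks, codebook)
      reconstructed = codebook[indices]     # decoder side: a table lookup
      print("Mean squared error:", float(((blocks - reconstructed) ** 2).mean()))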

  13. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG, and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
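
    The multi-layered model is easiest to see at the decoder: a binary mask selects, pixel by pixel, between a foreground layer (text and line art, suited to a binary or palette coder) and a background layer (continuous-tone imagery, suited to JPEG or wavelet coding). A minimal sketch of that recomposition, with made-up toy layers:

      import numpy as np

      h, w = 64, 64
      background = np.full((h, w), 200, dtype=np.uint8)   # smooth, photo-like layer
      foreground = np.zeros((h, w), dtype=np.uint8)       # dark "ink" layer
      mask = np.zeros((h, w), dtype=bool)
      mask[10:20, 5:60] = True                            # where the text pixels sit

      # Decoder-side recomposition: the mask chooses foreground, elsewhere background
      page = np.where(mask, foreground, background)
      print(page.shape, page.dtype, int(page.min()), int(page.max()))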

  14. Storage, retrieval, and edit of digital video using Motion JPEG

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Lee, D. H.

    1994-04-01

    In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, this system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video in a storage medium, an IBM Bus master SCSI adapter with cache is utilized. Efficacy of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. We show experimental results that the overall system can perform at compressed data rates of about 1.5 MBytes/second sustained and with sporadic peaks to about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward and slow motion playback. The proposed method can be extended for design of a video compression subsystem for a variety of personal computing systems.
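
    The quoted rates are easier to interpret next to the uncompressed figure. A 640 by 480, 24-bit stream at 30 frames per second is roughly 27.6 MB/s, so the sustained 1.5 MB/s corresponds to a compression ratio on the order of 18:1; that ratio is our arithmetic, not a number stated in the paper.

      # Back-of-the-envelope arithmetic for the rates quoted above
      width, height, bytes_per_pixel, fps = 640, 480, 3, 30

      raw_rate = width * height * bytes_per_pixel * fps          # bytes per second
      print(f"Uncompressed video rate: {raw_rate / 1e6:.1f} MB/s")

      for compressed in (1.5e6, 1.8e6):                          # sustained / peak
          print(f"{compressed / 1e6:.1f} MB/s implies roughly {raw_rate / compressed:.0f}:1 compression")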

  15. Mount Shasta Snowpack

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Full-size images June 17, 2001 (2.0 MB JPEG) June 14, 2000 (2.1 MB JPEG) Light snowfall in the winter of 2000-01 led to a dry summer in the Pacific Northwest. The drought led to a conflict between farmers and fishing communities in the Klamath River Basin over water rights, and a series of forest fires in Washington, Oregon, and Northern California. The pair of images above, both acquired by the Enhanced Thematic Mapper Plus (ETM+) aboard the Landsat 7 satellite, show the snowpack on Mt. Shasta in June 2000 and 2001. On June 14, 2000, the snow extends to the lower slopes of the 4,317-meter (14,162-foot) volcano. At nearly the same time this year (June 17, 2001) the snow had retreated well above the tree-line. The drought in the region was categorized as moderate to severe by the National Oceanographic and Atmospheric Administration (NOAA), and the United States Geological Survey (USGS) reported that streamflow during June was only about 25 percent of the average. Above and to the left of Mt. Shasta is Lake Shastina, a reservoir which is noticeably lower in the 2001 image than the 2000 image. Images courtesy USGS EROS Data Center and the Landsat 7 Science Team

  16. JPEG2000-coded image error concealment exploiting convex sets projections.

    PubMed

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering brought some undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
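
    The alternating projections can be sketched in a few lines: smooth the current estimate in the spatial domain, then re-impose the wavelet coefficients that arrived intact. The toy version below uses PyWavelets and SciPy with a fixed 3x3 filter; the paper's adaptive, edge-map-driven filter (and how much the concealment actually helps, which depends on image content) is not reproduced here.

      import numpy as np
      import pywt
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(0)
      original = rng.random((64, 64))

      # Simulated damage: one high-frequency subband of the received image is lost
      cA, (cH, cV, cD) = pywt.dwt2(original, "haar")
      received = pywt.idwt2((cA, (np.zeros_like(cH), cV, cD)), "haar")

      estimate = received
      for _ in range(10):
          estimate = uniform_filter(estimate, size=3)      # projection 1: spatial smoothing
          _, (h, _, _) = pywt.dwt2(estimate, "haar")
          # Projection 2: keep the estimate only for the lost subband and
          # restore the uncorrupted coefficients everywhere else
          estimate = pywt.idwt2((cA, (h, cV, cD)), "haar")

      print("Mean absolute error before concealment:", float(np.abs(original - received).mean()))
      print("Mean absolute error after concealment :", float(np.abs(original - estimate).mean()))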

  17. Improved compression technique for multipass color printers

    NASA Astrophysics Data System (ADS)

    Honsinger, Chris

    1998-01-01

    A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data for the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory, but still takes advantage of the correlation between the color planes. The compression scheme is based on a block adaptive technique for decorrelating the color planes, followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block adaptive decorrelation operations can be efficiently performed in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
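
    The decorrelation step can be illustrated generically: predict each 8x8 block of the plane about to be coded from the co-located block of an already available reference plane with a per-block least-squares gain, and pass only the residual to the DCT/JPEG coder. This is our reading of block-adaptive decorrelation in its simplest form, not the paper's exact operator.

      import numpy as np

      def decorrelate(plane, reference, block=8):
          residual = np.empty(plane.shape, dtype=float)
          for i in range(0, plane.shape[0], block):
              for j in range(0, plane.shape[1], block):
                  x = plane[i:i+block, j:j+block].astype(float)
                  y = reference[i:i+block, j:j+block].astype(float)
                  gain = (x * y).sum() / max((y * y).sum(), 1e-9)   # per-block least squares
                  residual[i:i+block, j:j+block] = x - gain * y
          return residual

      rng = np.random.default_rng(0)
      cyan = rng.integers(0, 256, size=(64, 64)).astype(float)
      magenta = np.clip(0.7 * cyan + rng.normal(0, 10, size=cyan.shape), 0, 255)  # correlated plane

      residual = decorrelate(magenta, cyan)
      print("Variance of magenta plane :", float(magenta.var()))
      print("Variance of residual plane:", float(residual.var()))   # far smaller, hence easier to code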

  18. Common characteristics of open source software development and applicability for drug discovery: a systematic review.

    PubMed

    Ardal, Christine; Alstadsæter, Annette; Røttingen, John-Arne

    2011-09-28

    Innovation through an open source model has proven to be successful for software development. This success has led many to speculate if open source can be applied to other industries with similar success. We attempt to provide an understanding of open source software development characteristics for researchers, business leaders and government officials who may be interested in utilizing open source innovation in other contexts and with an emphasis on drug discovery. A systematic review was performed by searching relevant, multidisciplinary databases to extract empirical research regarding the common characteristics and barriers of initiating and maintaining an open source software development project. Common characteristics to open source software development pertinent to open source drug discovery were extracted. The characteristics were then grouped into the areas of participant attraction, management of volunteers, control mechanisms, legal framework and physical constraints. Lastly, their applicability to drug discovery was examined. We believe that the open source model is viable for drug discovery, although it is unlikely that it will exactly follow the form used in software development. Hybrids will likely develop that suit the unique characteristics of drug discovery. We suggest potential motivations for organizations to join an open source drug discovery project. We also examine specific differences between software and medicines, specifically how the need for laboratories and physical goods will impact the model as well as the effect of patents.

  19. Open Source Paradigm: A Synopsis of The Cathedral and the Bazaar for Health and Social Care.

    PubMed

    Benson, Tim

    2016-07-04

    Open source software (OSS) is becoming more fashionable in health and social care, although the ideas are not new. However, progress has been slower than many had expected. The purpose is to summarise the Free/Libre Open Source Software (FLOSS) paradigm in terms of what it is, how it impacts users and software engineers, and how it can work as a business model in the health and social care sectors. Much of this paper is a synopsis of Eric Raymond's seminal book The Cathedral and the Bazaar, which was the first comprehensive description of the open source ecosystem, set out in three long essays. Direct quotes from the book are used liberally, without reference to specific passages. The first part contrasts open and closed source approaches to software development and support. The second part describes the culture and practices of the open source movement. The third part considers business models. A key benefit of open source is that users can access and collaborate on improving the software if they wish. Closed source code may be regarded as a strategic business risk that may be unacceptable if there is an open source alternative. The sharing culture of the open source movement fits well with that of health and social care.

  20. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.
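
    As a concrete illustration of how these pieces fit together, the sketch below reads one field from a CF-compliant NetCDF forecast file with the netCDF4 package and plots it with Matplotlib. The file name and the variable names ("lon", "lat", "t2m") are hypothetical placeholders, not part of the cited tool chain.

      import matplotlib.pyplot as plt
      from netCDF4 import Dataset

      with Dataset("forecast.nc") as nc:            # hypothetical forecast file
          lon = nc.variables["lon"][:]
          lat = nc.variables["lat"][:]
          t2m = nc.variables["t2m"][0, :, :]        # first forecast time step
          units = nc.variables["t2m"].units         # CF metadata carries the units

      plt.pcolormesh(lon, lat, t2m, shading="auto")
      plt.colorbar(label=f"2 m temperature [{units}]")
      plt.xlabel("longitude")
      plt.ylabel("latitude")
      plt.savefig("t2m_forecast.png", dpi=150)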

  1. Open Source Software Development

    DTIC Science & Technology

    2011-01-01

    [The available excerpt consists of fragmentary bibliographic references, including: DiBona, C., Cooper, D., and Stone, M. (Eds.), Open Sources 2.0, O'Reilly Media, Sebastopol, CA, 2005; and DiBona, C., Ockman, S., and Stone, M. (Eds.), Open Sources: Voices from the Open Source Revolution, O'Reilly Media, Sebastopol, CA, 1999.]

  2. Map_plot and bgg_plot: software for integration of geoscience datasets

    NASA Astrophysics Data System (ADS)

    Gaillot, Philippe; Punongbayan, Jane T.; Rea, Brice

    2004-02-01

    Since 1985, the Ocean Drilling Program (ODP) has been supporting multidisciplinary research in exploring the structure and history of Earth beneath the oceans. After more than 200 Legs, complementary datasets covering different geological environments, periods and space scales have been obtained and distributed world-wide using the ODP-Janus and Lamont Doherty Earth Observatory-Borehole Research Group (LDEO-BRG) database servers. In Earth Sciences, more than in any other science, the ensemble of these data is characterized by heterogeneous formats and graphical representation modes. In order to fully and quickly assess this information, a set of Unix/Linux and Generic Mapping Tool-based C programs has been designed to convert and integrate datasets acquired during the present ODP and the future Integrated ODP (IODP) Legs. Using ODP Leg 199 datasets, we show examples of the capabilities of the proposed programs. The program map_plot is used to easily display datasets onto 2-D maps. The program bgg_plot (borehole geology and geophysics plot) displays data with respect to depth and/or time. The latter program includes depth shifting, filtering and plotting of core summary information, continuous and discrete-sample core measurements (e.g. physical properties, geochemistry, etc.), in situ continuous logs, magneto- and bio-stratigraphies, specific sedimentological analyses (lithology, grain size, texture, porosity, etc.), as well as core and borehole wall images. Outputs from both programs are initially produced in PostScript format that can be easily converted to Portable Document Format (PDF) or standard image formats (GIF, JPEG, etc.) using widely distributed conversion programs. Based on command line operations and customization of parameter files, these programs can be included in other shell- or database-scripts, automating plotting procedures of data requests. As an open source software, these programs can be customized and interfaced to fulfill any specific plotting need of geoscientists using ODP-like datasets.
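
    The final conversion step mentioned above can be scripted in the same way the plotting programs are chained together. The sketch below shells out to Ghostscript, one widely distributed converter, to turn a PostScript plot into a JPEG; the file names are hypothetical and Ghostscript must be installed.

      import subprocess

      subprocess.run(
          [
              "gs", "-dBATCH", "-dNOPAUSE", "-dSAFER",
              "-sDEVICE=jpeg", "-r150",             # JPEG output device at 150 dpi
              "-sOutputFile=map_plot.jpg",
              "map_plot.ps",
          ],
          check=True,
      )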

  3. Ames Stereo Pipeline for Operation IceBridge

    NASA Astrophysics Data System (ADS)

    Beyer, R. A.; Alexandrov, O.; McMichael, S.; Fong, T.

    2017-12-01

    We are using the NASA Ames Stereo Pipeline to process Operation IceBridge Digital Mapping System (DMS) images into terrain models and to align them with the simultaneously acquired LIDAR data (ATM and LVIS). The expected outcome is to create a contiguous, high-resolution terrain model for each flight that Operation IceBridge has flown during its eight-year history of Arctic and Antarctic flights. There are some existing terrain models in the NSIDC repository that cover 2011 and 2012 (out of the total period of 2009 to 2017), which were made with the Agisoft Photoscan commercial software. Our open-source stereo suite has been verified to create terrains of similar quality. The total number of images we expect to process is around 5 million. There are numerous challenges with these data: accurate determination and refinement of camera pose when the images were acquired based on data logged during the flights and/or using information from existing orthoimages, aligning terrains with little or no features, images containing clouds, JPEG artifacts in input imagery, inconsistencies in how data was acquired/archived over the entire period, not fully reliable camera calibration files, and the sheer amount of data. We will create the majority of terrain models at 40 cm/pixel with a vertical precision of 10 to 20 cm. In some circumstances when the aircraft was flying higher than usual, those values will get coarser. We will create orthoimages at 10 cm/pixel (with the same caveat that some flights are at higher altitudes). These will differ from existing orthoimages by using the underlying terrain we generate rather than some pre-existing very low-resolution terrain model that may differ significantly from what is on the ground at the time of IceBridge acquisition. The results of this massive processing will be submitted to the NSIDC so that cryosphere researchers will be able to use these data for their investigations.

  4. Open-source hardware for medical devices

    PubMed Central

    2016-01-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device. PMID:27158528

  5. Open-source hardware for medical devices.

    PubMed

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  6. The case for open-source software in drug discovery.

    PubMed

    DeLano, Warren L

    2005-02-01

    Widespread adoption of open-source software for network infrastructure, web servers, code development, and operating systems leads one to ask how far it can go. Will "open source" spread broadly, or will it be restricted to niches frequented by hopeful hobbyists and midnight hackers? Here we identify reasons for the success of open-source software and predict how consumers in drug discovery will benefit from new open-source products that address their needs with increased flexibility and in ways complementary to proprietary options.

  7. Choosing Open Source ERP Systems: What Reasons Are There For Doing So?

    NASA Astrophysics Data System (ADS)

    Johansson, Björn; Sudzina, Frantisek

    Enterprise resource planning (ERP) systems attract high attention, and so does open source software. The question is then whether, and if so when, open source ERP systems will take off. The paper describes the status of open source ERP systems. Based on a literature review of ERP system selection criteria drawn from Web of Science articles, it discusses reported reasons for choosing open source or proprietary ERP systems. Last but not least, the article presents some conclusions that could act as input for future research. The paper aims at building a foundation for the basic question: what are the reasons for an organization to adopt open source ERP systems?

  8. Developing open-source codes for electromagnetic geophysics using industry support

    NASA Astrophysics Data System (ADS)

    Key, K.

    2017-12-01

    Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.

  9. Behind Linus's Law: Investigating Peer Review Processes in Open Source

    ERIC Educational Resources Information Center

    Wang, Jing

    2013-01-01

    Open source software has revolutionized the way people develop software, organize collaborative work, and innovate. The numerous open source software systems that have been created and adopted over the past decade are influential and vital in all aspects of work and daily life. The understanding of open source software development can enhance its…

  10. Implementing Open Source Platform for Education Quality Enhancement in Primary Education: Indonesia Experience

    ERIC Educational Resources Information Center

    Kisworo, Marsudi Wahyu

    2016-01-01

    Information and Communication Technology (ICT)-supported learning using free and open source platform draws little attention as open source initiatives were focused in secondary or tertiary educations. This study investigates possibilities of ICT-supported learning using open source platform for primary educations. The data of this study is taken…

  11. An Analysis of Open Source Security Software Products Downloads

    ERIC Educational Resources Information Center

    Barta, Brian J.

    2014-01-01

    Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…

  12. Research on OpenStack of open source cloud computing in colleges and universities’ computer room

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Zhang, Dandan

    2017-06-01

    In recent years, cloud computing technology has developed rapidly, especially open source cloud computing. Open source cloud computing has attracted a large number of user groups through the advantages of openness and low cost, and has now reached large-scale promotion and application. In this paper, we first briefly introduce the main functions and architecture of the open source cloud computing tool OpenStack, and then discuss in depth the core problems of computer labs in colleges and universities. Building on this analysis, we describe the specific application and deployment of OpenStack in university computer rooms. The experimental results show that OpenStack can be used to deploy a cloud for a university computer room efficiently and conveniently, with stable performance and good functional value.
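
    As an illustration of the kind of self-service provisioning such a deployment enables, the sketch below boots a virtual machine through the OpenStack SDK. It assumes the openstacksdk package and a cloud defined in clouds.yaml; the cloud, image, flavor, and network names are hypothetical placeholders.

      import openstack

      conn = openstack.connect(cloud="lab-cloud")        # credentials come from clouds.yaml

      image = conn.compute.find_image("ubuntu-22.04")
      flavor = conn.compute.find_flavor("m1.small")
      network = conn.network.find_network("lab-net")

      server = conn.compute.create_server(
          name="student-vm-01",
          image_id=image.id,
          flavor_id=flavor.id,
          networks=[{"uuid": network.id}],
      )
      server = conn.compute.wait_for_server(server)      # block until the VM is active
      print(server.status)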

  13. Common characteristics of open source software development and applicability for drug discovery: a systematic review

    PubMed Central

    2011-01-01

    Background Innovation through an open source model has proven to be successful for software development. This success has led many to speculate if open source can be applied to other industries with similar success. We attempt to provide an understanding of open source software development characteristics for researchers, business leaders and government officials who may be interested in utilizing open source innovation in other contexts and with an emphasis on drug discovery. Methods A systematic review was performed by searching relevant, multidisciplinary databases to extract empirical research regarding the common characteristics and barriers of initiating and maintaining an open source software development project. Results Common characteristics to open source software development pertinent to open source drug discovery were extracted. The characteristics were then grouped into the areas of participant attraction, management of volunteers, control mechanisms, legal framework and physical constraints. Lastly, their applicability to drug discovery was examined. Conclusions We believe that the open source model is viable for drug discovery, although it is unlikely that it will exactly follow the form used in software development. Hybrids will likely develop that suit the unique characteristics of drug discovery. We suggest potential motivations for organizations to join an open source drug discovery project. We also examine specific differences between software and medicines, specifically how the need for laboratories and physical goods will impact the model as well as the effect of patents. PMID:21955914

  14. The 2017 Bioinformatics Open Source Conference (BOSC)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Munoz-Torres, Monica; Tzovaras, Bastian Greshake; Wiencko, Heather

    2017-01-01

    The Bioinformatics Open Source Conference (BOSC) is a meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. The 18th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2017) took place in Prague, Czech Republic in July 2017. The conference brought together nearly 250 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, open and reproducible science, and this year’s theme, open data. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community, called the OBF Codefest. PMID:29118973

  15. The 2017 Bioinformatics Open Source Conference (BOSC).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Munoz-Torres, Monica; Tzovaras, Bastian Greshake; Wiencko, Heather

    2017-01-01

    The Bioinformatics Open Source Conference (BOSC) is a meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. The 18th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2017) took place in Prague, Czech Republic in July 2017. The conference brought together nearly 250 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, open and reproducible science, and this year's theme, open data. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community, called the OBF Codefest.

  16. The Efficient Utilization of Open Source Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baty, Samuel R.

    This is a set of slides on the efficient utilization of open source information. Open source information consists of a vast set of information from a variety of sources. Not only does the quantity of open source information pose a problem; the quality of such information can also hinder efforts. To show this, two case studies are mentioned, Iran and North Korea, in order to see how open source information can be utilized. The huge breadth and depth of open source information can complicate an analysis, especially because open information has no guarantee of accuracy. Open source information can provide key insights either directly or indirectly: looking at supporting factors (flow of scientists, products and waste from mines, government budgets, etc.) and direct factors (statements, tests, deployments). Fundamentally, it is the independent verification of information that allows for a more complete picture to be formed. Overlapping sources allow for more precise bounds on times, weights, temperatures, yields or other issues of interest in order to determine capability. Ultimately, a "good" answer almost never comes from an individual, but rather requires the utilization of a wide range of skill sets held by a team of people.

  17. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.

  18. Sharpest Ever VLT Images at NAOS-CONICA "First Light"

    NASA Astrophysics Data System (ADS)

    2001-12-01

    Very Promising Start-Up of New Adaptive Optics Instrument at Paranal Summary A team of astronomers and engineers from French and German research institutes and ESO at the Paranal Observatory is celebrating the successful accomplishment of "First Light" for the NAOS-CONICA Adaptive Optics facility . With this event, another important milestone for the Very Large Telescope (VLT) project has been passed. Normally, the achievable image sharpness of a ground-based telescope is limited by the effect of atmospheric turbulence. However, with the Adaptive Optics (AO) technique, this drawback can be overcome and the telescope produces images that are at the theoretical limit, i.e., as sharp as if it were in space . Adaptive Optics works by means of a computer-controlled, flexible mirror that counteracts the image distortion induced by atmospheric turbulence in real time. The larger the main mirror of the telescope is, and the shorter the wavelength of the observed light, the sharper will be the images recorded. During a preceding four-week period of hard and concentrated work, the expert team assembled and installed this major astronomical instrument at the 8.2-m VLT YEPUN Unit Telescope (UT4). On November 25, 2001, following careful adjustments of this complex apparatus, a steady stream of photons from a southern star bounced off the computer-controlled deformable mirror inside NAOS and proceeded to form in CONICA the sharpest image produced so far by one of the VLT telescopes. With a core angular diameter of only 0.07 arcsec, this image is near the theoretical limit possible for a telescope of this size and at the infrared wavelength used for this demonstration (the K-band at 2.2 µm). Subsequent tests reached the spectacular performance of 0.04 arcsec in the J-band (wavelength 1.2 µm). "I am proud of this impressive achievement", says ESO Director General Catherine Cesarsky. "It shows the true potential of European science and technology and it provides a fine demonstration of the value of international collaboration. ESO and its partner institutes and companies in France and Germany have worked a long time towards this goal - with the first, extremely promising results, we shall soon be able to offer a new and fully tuned instrument to our wide research community." The NAOS adaptive optics corrector was built, under an ESO contract, by Office National d'Etudes et de Recherches Aérospatiales (ONERA) , Laboratoire d'Astrophysique de Grenoble (LAOG) and the DESPA and DASGAL laboratories of the Observatoire de Paris in France, in collaboration with ESO. The CONICA infra-red camera was built, under an ESO contract, by the Max-Planck-Institut für Astronomie (MPIA) (Heidelberg) and the Max-Planck Institut für Extraterrestrische Physik (MPE) (Garching) in Germany, in collaboration with ESO. The present event happens less than four weeks after "First Fringes" were achieved for the VLT Interferometer (VLTI) with two of the 8.2-m Unit Telescopes. No wonder that a spirit of great enthusiasm reigns at Paranal! Information for the media: ESO is producing a Video News Release ( ESO Video News Reel No. 13 ) with sequences from the NAOS-CONICA "First Light" event at Paranal, a computer animation illustrating the principle of adaptive optics in NAOS-CONICA, as well as the first astronomical images obtained. In addition to the usual distribution, this VNR will also be transmitted via satellite Friday 7 December 2001 from 09:00 to 09:15 CET (10:00 to 10:15 UT) on "Europe by Satellite" . 
These video images may be used free of charge by broadcasters. Satellite details, the script and the shotlist will be on-line from 6 December on the ESA TV Service Website http://television.esa.int. Also a pre-view Real Video Stream of the video news release will be available as of that date from this URL. Video Clip 07/01 : Various video scenes related to the NAOS-CONICA "First Light" Event ( ESO Video News Reel No. 13 ). PR Photo 33a/01 : NAOS-CONICA "First light" image of an 8-mag star. PR Photo 33b/01 : The moment of "First Light" at the YEPUN Control Consoles. PR Photo 33c/01 : Image of NGC 3603 (K-band) area (NAOS-CONICA) . PR Photo 33d/01 : Image of NGC 3603 wider field (ISAAC) PR Photo 33e/01 : I-band HST-WFPC2 image of NGC 3603 field . PR Photo 33f/01 : Animated GIF, with NAOS-CONICA (K-band) and HST-WFPC2 (I-band) images of NGC 3603 area PR Photo 33g/01 : Image of the Becklin-Neugebauer Object . PR Photo 33h/01 : Image of a very close double star . PR Photo 33i/01 : Image of a 17-magnitude reference star PR Photo 33j/01 : Image of the central area of the 30 Dor star cluster . PR Photo 33k/01 : The top of the Paranal Mountain (November 25, 2001). PR Photo 33l/01 : The NAOS-CONICA instrument attached to VLT YEPUN.. A very special moment at Paranal! First light for NAOS-CONICA at the VLT - PR Video Clip 07/01] ESO PR Video Clip 07/01 "First Light for NAOS-CONICA" (25 November 2001) (3850 frames/2:34 min) [MPEG Video+Audio; 160x120 pix; 3.6Mb] [MPEG Video+Audio; 320x240 pix; 8.9Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 07/01 provides some background scenes and images around the NAOS-CONICA "First Light" event on November 25, 2001 (extracted from ESO Video News Reel No. 13 ). Contents: NGC 3603 image from ISAAC and a smaller field as observed by NAOS-CONICA ; the Paranal platform in the afternoon, before the event; YEPUN and NAOS-CONICA with cryostat sounds; Tension is rising in the VLT Control Room; Wavefront Sensor display; the "Loop is Closed"; happy team members; the first corrected image on the screen; Images of NGC 3603 by HST and VLT; 30 Doradus central cluster; BN Object in Orion; Statement by the Head of the ESO Instrument Division. ESO PR Photo 33a/01 ESO PR Photo 33a/01 [Preview - JPEG: 317 x 400 pix - 27k] [Normal - JPEG: 800 x 634 pix - 176k] ESO PR Photo 33b/01 ESO PR Photo 33b/01 [Preview - JPEG: 400 x 322 pix - 176k] [Normal - JPEG: 800 x 644 pix - 360k] ESO PR Photo 33a/01 shows the first image in the infrared K-band (wavelength 2.2 µm) of a star (visual magnitude 8) obtained - before (left) and after (right) the adaptive optics was switched on (see the text). The middle panel displays the 3-D intensity profiles of these images, demonstrating the tremendous gain, both in image sharpness and central intensity. ESO PR Photo 33b/01 shows some of the NAOS-CONICA team members in the VLT Control Room at the moment of "First Light" in the night between November 25-26, 2001. From left to right: Thierry Fusco (ONERA), Clemens Storz (MPIA), Robin Arsenault (ESO), Gerard Rousset (ONERA). The numerous boxes with the many NAOS and CONICA parts arrived at the ESO Paranal Observatory on October 24, 2001. Astronomers and engineers from ESO and the participating institutes and organisations then began the painstaking assembly of these very complex instruments on one of the Nasmyth platforms on the fourth VLT 8.2-m Unit Telescope, YEPUN . Then followed days of technical tests and adjustments, working around the clock. 
In the afternoon of Sunday, November 25, the team finally declared the instrument fit to attempt its "First Light" observation. The YEPUN dome was opened at sunset and a small, rather apprehensive group gathered in the VLT Control Room, peering intensively at the computer screens over the shoulders of their colleagues, the telescope and instrument operators. Time passed imperceptibly to those present, as the basic calibrations required at this early stage to bring NAOS-CONICA to full operational state were successfully completed. Everybody sensed the special moment approaching when, finally, the telescope operator pushed a button and the giant telescope started to turn smoothly towards the first test object, an otherwise undistinguished star in our Milky Way. Its non-corrected infra-red image was recorded by the CONICA detector array and soon appeared on the computer screen. It was already very good by astronomical standards, with a diameter of only 0.50 arsec (FWHM), cf. PR Photo 33a/01 (left) . Then, by another command, the instrument operator switched on the NAOS adaptive optics system , thereby "closing the loop" for the first time on a sky field, by using that ordinary star as a reference light source to measure the atmospheric turbulence. Obediently, the deformable mirror in NAOS began to follow the "orders" that were issued 500 times per second by its powerful control computer.... As if by magics, that stellar image on the computer screen pulled itself together....! What seconds before had been a jumping, rather blurry patch of light suddenly became a rock-steady, razor-sharp and brilliant spot of light. The entire room burst into applause - there were happy faces and smiles all over, and then the operator announced the measured image diameter - a truly impressive 0.068 arcsec, already at this first try, cf. PR Photo 33a/01 (right) ! All the team members who were lucky to be there sent a special thought to those many others who had also put in over four years' hard and dedicated work to make this event a reality. The time of this historical moment was November 25, 2001, 23:00 Chilean time (November 26, 2001, 02:00 am UT) . During this and the following nights, more images were made of astronomcal objects, opening a new chapter of the long tradition of Adaptive Optics at ESO. More information about the NAOS-CONICA international collaboration , technical details about this instrument and its special advantages are available below. The first images The star-forming region around NGC 3603 ESO PR Photo 33c/01 ESO PR Photo 33c/01 [Preview - JPEG: 326 x 400 pix - 200k] [Normal - JPEG: 651 x 800 pix - 480k] ESO PR Photo 33d/01 ESO PR Photo 33d/01 [Preview - JPEG: 348 x 400 pix - 240k] [Normal - JPEG: 695 x 800 pix - 592k] Caption : PR Photo 33c/01 displays a NAOS-CONICA image of the starburst cluster NGC 3603, obtained during the second night of NAOS-CONICA operation. The sky region shown is some 20 arcsec to the North of the centre of the cluster. NAOS was compensating atmospheric disturbances by analyzing light from the central star with its visual wavefront sensor, while CONICA was observing in the K-band. The image is nearly diffraction-limited and has a Full-Width-Half-Maximum (FWHM) diameter of 0.07 arcsec, with a central Strehl ratio of 56% (a measure of the degree of concentration of the light). The exposure lasted 300 seconds. North is up and East is left. The field measures 27 x 27 arcsec. 
On PR Photo 33d/01 , the sky area shown in this NAOS-CONICA high-resolution image is indicated on an earlier image of a much larger area, obtained in 1999 with the ISAAC multi-mode instrument on VLT ANTU ( ESO PR 16/99 ) Among the first images to be obtained of astronomical objects was one of the stellar cluster NGC 3603 that is located in the Carina spiral arm in the Milky Way at a distance of about 20,000 light-years, cf. PR Photo 33c/01 . With its central starburst cluster, it is one of the densest and most massive star forming regions in our Galaxy. Some of the most massive stars - with masses up to 120 times the mass of our Sun - can be found in this cluster. For a long time astronomers have suspected that the formation of low-mass stars is suppressed by the presence of high-mass stars, but two years ago, stars with masses as low as 10% of the mass of our Sun were detected in NGC 3603 with the ISAAC multi-mode instrument at VLT ANTU, cf. PR Photo 33d/01 and ESO PR 16/99. The high stellar density in this region, however, prevented the search for objects with still lower masses, so-called Brown Dwarfs. The new, high-resolution K-band images like PR Photo 33c/01 , obtained with NAOS-CONICA at YEPUN, now for the first time facilitate the study of the elusive class of brown dwarfs in such a starburst environment. This will, among others, offer very valuable insight into the fundamental problem about the total amount of matter that is deposited into stars in star-forming regions. An illustration of the potential of Adaptive Optics ESO PR Photo 33e/01 ESO PR Photo 33e/01 [Preview - JPEG: 376 x 400 pix - 128k] [Normal - JPEG: 752 x 800 pix - 336k] ESO PR Photo 33f/01 ESO PR Photo 33f/01 [Animated GIF: 400 x 425 pix - 71k] Caption : PR Photo 33e/01 was obtained with the WFPC2 camera on the Hubble Space Telescope (HST) in the I-band (800nm). It is a 400-sec exposure and shows the same sky region as in the NAOS-CONICA image shown in PR Photo 33c/01. PR Photo 33f/01 provides a direct comparison of the two images (animated GIF). The HST image was extracted from archival data. HST is operated by NASA and ESA. Normally, the achievable image sharpness of a ground-based telescope is limited by the effect of atmospheric turbulence . However, the Adaptive Optics (AO) technique overcomes this problem and when the AO instrument is optimized, the telescope produces images that are at the theoretical limit, i.e., as sharp as if it were in space . The theoretical image diameter is inversely proportional to the diameter of the main mirror of the telescope and proportional to the wavelength of the observed light. Thus, the larger the telescope and the shorter the wavelength, the sharper will be the images recorded . To illustrate this, a comparison of the NAOS-CONICA image of NGC 3603 ( PR Photo 33c/01 ) is here made with a near-infrared image obtained earlier by the Hubble Space Telescope (HST) covering the same sky area ( PR Photo 33e/01 ). Both images are close to the theoretical limit ("diffraction limited"). However, the diameter of the VLT YEPUN mirror (8.2-m) is somewhat more than three times that of that of HST (2.4-m). This is "compensated" by the fact that the wavelength of the NAOS-CONICA image (2.2 µm) is about two-and-a-half times longer that than of the HST image (0.8 µm). The measured image diameters are therefore not too different, approx. 0.085 arcsec (HST) vrs. approx. 0.068 arcsec (VLT). 
Although the exposure times are similar (300 sec for the VLT image; 400 sec for the HST image), the VLT image shows considerably fainter objects. This is partly due to the larger mirror, partly because by observing at a longer wavelength, NAOS-CONICA can detect a host of cool low-mass stars. The Becklin-Neugebauer object and its associated nebulosity ESO PR Photo 33g/01 ESO PR Photo 33g/01 [Preview - JPEG: 299 x 400 pix - 128k] [Normal - JPEG: 597 x 800 pix - 272k] Caption : PR Photo 33g/01 is a composite (false-) colour image obtained by NAOS-CONICA of the region around the Becklin-Neugebauer object that is deeply embedded in the Orion Nebula. It is based on two exposures, one in the light of shock-excited molecular hydrogen line (H 2 ; wavelength 2.12 µm; here rendered as blue) and one in the broader K-band (2.2 µm; red) from ionized hydrogen. A third (green) image was produced as an "average" of the H 2 and K-band images. The field-of-view measures 20 x 25 arcsec 2 , cf. the 1 x 1 arcsec 2 square. North is up and east to the left. PR Photo 33g/01 is a composite image of the region around the Becklin-Neugebauer object (generally refered to as "BN" ). With its associated Kleinmann-Low nebula, it is located in the Orion star forming region at a distance of approx. 1500 light-years. It is the nearest high-mass star-forming complex. The immediate vicinity of BN (the brightest star in the image) is highly dynamic with outflows and cloudlets glowing in the light of shock-excited molecular hydrogen. While many masers and outflows have been detected, the identification of their driving sources is still lacking. Deep images in the infrared K and H bands, as well as in the light of molecular hydrogen emission were obtained with NAOS-CONICA at VLT YEPUN during the current tests. The new images facilitate the detection of fainter and smaller structures in the cloud than ever before. More details on the embedded star cluster are revealed as well. These observations were only made possible by the infrared wavefront sensor of NAOS. The latter is a unique capability of NAOS and allows to do adaptive optics on highly embedded infrared sources, which are practically invisible at optical wavelengths. Exploring the limits ESO PR Photo 33h/01 ESO PR Photo 33h/01 [Preview - JPEG: 400 x 260 pix - 44k] [Normal - JPEG: 800 x 520 pix - 112k] Caption : PR Photo 33h/01 shows a NAOS-CONICA image of the double star GJ 263 for which the angular distance between the two components is only 0.030 arcsec . The raw image, as directly recorded by CONICA, is shown in the middle, with a computer-processed (using the ONERA MISTRAL myopic deconvolution algorithm) version to the right. The recorded Point-Spread-Function (PSF) is shown to the left. For this, the C50S camera (0.01325 arcsec/pixel) was used, with an FeII filter at the near-infrared wavelength 1.257 µm. The exposure time was 10 seconds. ESO PR Photo 33i/01 ESO PR Photo 33i/01 [Preview - JPEG: 400 x 316 pix - 82k] [Normal - JPEG: 800 x 631 pix - 208k] Caption : PR Photo 33i/01 shows the near-diffraction-limited image of a 17-mag reference star , as recorded with NAOS-CONICA during a 200-second exposure in the K-band under 0.60 arcsec seeing. The 3D-profile is also shown. ESO PR Photo 33j/01 ESO PR Photo 33j/01 [Preview - JPEG: 342 x 400 pix - 83k] [Normal - JPEG: 684 x 800 pix - 200k] Caption : PR Photo 33j/01 shows the central cluster in the 30 Doradus HII region in the Large Magellanic Cloud (LMC), a satellite of our Milky Way Galaxy. 
It was obtained by NAOS-CONICA in the infrared K-band during a 600-second exposure. The field shown here measures 15 x 15 arcsec². PR Photos 33h-j/01 provide three examples of images obtained during specific tests in which the observers pushed NAOS-CONICA towards its limits to explore the potential of the new instrument. Although, as expected, these images are not "perfect", they bear clear witness to the impressive performance, already at this early stage of the commissioning programme. The first, PR Photo 33h/01, shows how diffraction-limited imaging with NAOS-CONICA at a wavelength of 1.257 µm makes it possible to view the individual components of a close double star, here the binary star GJ 263, for which the angular distance between the two stars is only 0.030 arcsec (i.e., the angle subtended by a 1 Euro coin at a distance of 160 km). Spatially resolved observations of binary stars like this one will allow the determination of orbital parameters, and ultimately of the masses of the individual binary star components. After a few days of optimisation and calibration, NAOS-CONICA was able to "close the loop" on a reference star as faint as visual magnitude 17 and to provide a fine diffraction-limited K-band image with a Strehl ratio of 19% under 0.6 arcsec seeing. PR Photo 33i/01 provides a view of this image, as seen in the recorded frame and as a 3D-profile. The exposure time was 200 seconds. The ability to use reference stars as faint as this is an enormous asset for NAOS-CONICA - it will be the first instrument on an 8-10 m class telescope to offer this capability to non-specialist users. This permits access to many sky fields with significant AO corrections already, without having to wait for the artificial laser guide star now being constructed for the VLT, see below. 30 Doradus in the Large Magellanic Cloud (LMC - a satellite of our Galaxy) is the most luminous giant HII region in the Local Group of Galaxies. It is powered by a massive star cluster with more than 100 ultra-luminous stars (of the "Wolf-Rayet"-type and O-stars). The NAOS-CONICA K-band image (PR Photo 33j/01) resolves the dense stellar core of high-mass stars at the centre of the cluster, revealing thousands of lower-mass cluster members. Due to the lack of a sufficiently bright, isolated and single reference star in this sky field, the observers instead used the bright central star complex (R136a) to generate the corrective signals to the flexible mirror, needed to compensate for the atmospheric turbulence. However, R136a is not a round object; it is strongly elongated in the "5 hour" direction. As a result, all star images seen in this photo are slightly elongated in the same direction as R136a. Nevertheless, this is a small penalty to pay for the large improvement obtained over a direct (seeing-limited) image! Adaptive Optics at ESO - a long tradition ESO PR Photo 33k/01 [Preview - JPEG: 400 x 320 pix - 144k] [Normal - JPEG: 800 x 639 pix - 344k] [Hi-Res - JPEG: 3000 x 2398 pix - 3.0M] ESO PR Photo 33l/01 [Preview - JPEG: 400 x 367 pix - 47k] [Normal - JPEG: 800 x 734 pix - 592k] [Hi-Res - JPEG: 3000 x 2754 pix - 3.9M] Caption: PR Photo 33k/01 is a view of the upper platform at the ESO Paranal Observatory with the four enclosures for the VLT 8.2-m Unit Telescopes and the partly subterranean Interferometric Laboratory (at centre). YEPUN (UT4) is housed in the enclosure to the right.
This photo was obtained in the evening of November 25, 2001, some hours before "First Light" was achieved for the new NAOS-CONICA instrument, mounted at that telescope. PR Photo 33l/01 shows NAOS-CONICA installed on the Nasmyth B platform of the 8.2-m VLT YEPUN Unit Telescope. From left to right: the telescope adapter/rotator (dark blue), NAOS (light blue) and the CONICA cryostat (red). The control electronics are housed in the white cabinet. "Adaptive Optics" is a modern buzzword of astronomy. It embodies the seemingly magic way by which ground-based telescopes can overcome the undesirable blurring effect of atmospheric turbulence that has plagued astronomers for centuries. With "Adaptive Optics", the images of stars and galaxies captured by these instruments are now as sharp as theoretically possible. Or, as the experts like to say, "it is as if a giant ground-based telescope is 'lifted' into space by a magic hand!". Adaptive Optics works by means of a computer-controlled, flexible mirror that counteracts the image distortion induced by atmospheric turbulence in real time. The concept is not new. Already in 1989, the first Adaptive Optics system ever built for Astronomy (aptly named "COME-ON") was installed on the 3.6-m telescope at the ESO La Silla Observatory, as the early fruit of a highly successful continuing collaboration between ESO and French research institutes (ONERA and Observatoire de Paris). Ten years ago, ESO initiated an Adaptive Optics programme to serve the needs of its frontline VLT project. In 1993, the Adaptive Optics facility (ADONIS) was offered to Europe's astronomers, as the first instrument of its kind available for non-specialists. It is still in operation and continues to produce frontline results, cf. ESO PR 22/01. In 1997, ESO launched a collaborative effort with a French Consortium (see below) for the development of the NAOS Nasmyth Adaptive Optics System. With its associated CONICA IR high angular resolution camera, developed with a German Consortium (see below), it provides a full high angular resolution capability on the VLT at Paranal. With the successful "First Light" on November 25, 2001, this project is now about to enter its operational phase. The advantages of NAOS-CONICA NAOS-CONICA belongs to a new generation of sophisticated adaptive optics (AO) devices. They have certain advantages over past systems. In particular, NAOS is unique in being equipped with an infrared-sensitive Wavefront Sensor (WFS) that makes it possible to look inside regions that are highly obscured by interstellar dust and therefore unobservable in visible light. With its other WFS for visible light, NAOS should be able to achieve the highest degree of light concentration (the so-called "Strehl ratio") obtained at any existing 8-m class telescope. It also provides partially corrected images, using reference stars (see PR Photo 33e/01) as faint as visual magnitude 18, fainter than demonstrated so far by any other AO system at such a large telescope. A major advantage of CONICA is that it offers the large format and very high image quality required to fully match NAOS' performance, as well as a variety of observing modes. Moreover, NAOS-CONICA is the first astronomical AO instrument to be offered with a full end-to-end observing capability.
It is completely integrated into the VLT dataflow system, with a seamless process from the preparation of the observations, including optimization of the instrument, to their execution at the telescope and on to automatic data quality assessment and storage in the VLT Archive. Collaboration and Institutes The Nasmyth Adaptive Optics System (NAOS) has been developed, with the support of INSU-CNRS, by a French Consortium in collaboration with ESO. The French consortium consists of the Office National d'Etudes et de Recherches Aérospatiales (ONERA), the Laboratoire d'Astrophysique de Grenoble (LAOG) and the Observatoire de Paris (DESPA and DASGAL). The Project Manager is Gérard Rousset (ONERA), the Instrument Responsible is François Lacombe (Observatoire de Paris) and the Project Scientist is Anne-Marie Lagrange (Laboratoire d'Astrophysique de Grenoble). The CONICA Near-Infrared CAmera has been developed by a German Consortium, with an extensive ESO collaboration. The Consortium consists of the Max-Planck-Institut für Astronomie (MPIA) (Heidelberg) and the Max-Planck-Institut für Extraterrestrische Physik (MPE) (Garching). The Principal Investigator (PI) is Rainer Lenzen (MPIA), with Reiner Hofmann (MPE) as Co-Investigator. Contacts Norbert Hubin European Southern Observatory Garching, Germany Tel.: +4989-3200-6517 email: nhubin@eso.org Alan Moorwood European Southern Observatory Garching, Germany Tel.: +4989-3200-6294 email: amoorwoo@eso.org Appendix: Technical Information about NAOS and CONICA Once fully tested, NAOS-CONICA will provide adaptive optics assisted imaging, polarimetry and spectroscopy in the 1 - 5 µm waveband. NAOS is an adaptive optics system equipped with both visible and infrared, Shack-Hartmann type, wavefront sensors. Provided a reference source (e.g., a star) with visual magnitude V brighter than 18 or K-magnitude brighter than 13 mag is available within 60 arcsec of the science target, NAOS-CONICA will ultimately offer diffraction-limited resolution at the level of 0.030 arcsec at a wavelength of 1 µm, albeit with a large halo around the image core for the faint end of the reference source brightness. This may be compared with VLT median seeing images of 0.65 arcsec at a wavelength of 1 µm and exceptionally good images around 0.30 arcsec. NAOS-CONICA is installed at Nasmyth Focus B at VLT YEPUN (UT4). In about two years' time, this instrument will benefit from a sodium Laser Guide Star (LGS) facility. The creation of an artificial guide star will then be possible in any sky field of interest, thereby providing much better sky coverage than is possible with natural guide stars only. NAOS is equipped with two wavefront sensors, one in the visible part of the spectrum (0.45 - 0.95 µm) and one in the infrared part (1 - 2.5 µm); both are based on the Shack-Hartmann principle. The maximum correction frequency is about 500 Hz. There are 185 deformable mirror actuators plus a tip-tilt mirror correction. Together, they should make it possible to obtain a high Strehl ratio in the K-band (2.2 µm), up to 70%, depending on the actual seeing and waveband. Both the visible and IR wavefront sensors (WFS) have been optimized to provide AO correction for faint objects/stars. The visible WFS provides a low-order correction for objects as faint as visual magnitude ~ 18. The IR WFS will provide a low-order correction for objects as faint as K-magnitude 13. CONICA is a high-performance instrument in terms of image quality and detector sensitivity.
It has been designed to make optimal use of the AO system. Inherent mechanical flexures are corrected on-line by NAOS through a pointing model. It offers a variety of modes, e.g., direct imaging, polarimetry, slit spectroscopy, coronagraphy and spectro-imaging. The ESO PR Video Clips service provides visitors to the ESO website with "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 06/01 about observations of a binary star (8 October 2001). Information is also available on the web about other ESO videos.

  19. Open Source, Openness, and Higher Education

    ERIC Educational Resources Information Center

    Wiley, David

    2006-01-01

    In this article David Wiley provides an overview of how the general expansion of open source software has affected the world of education in particular. In doing so, Wiley not only addresses the development of open source software applications for teachers and administrators, he also discusses how the fundamental philosophy of the open source…

  20. The Emergence of Open-Source Software in North America

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while…

  1. A Unified Steganalysis Framework

    DTIC Science & Technology

    2013-04-01

    contains more than 1800 images of different scenes. In the experiments, we used four JPEG-based steganography techniques: Outguess [13], F5 [16], model...also compressed these images again since some of the steganography methods are double compressing the images. Stego-images are generated by embedding...randomly chosen messages (in bits) into 1600 grayscale images using each of the four steganography techniques. A random message length was determined

  2. Confidential storage and transmission of medical image data.

    PubMed

    Norcen, R; Podesser, M; Pommer, A; Schmidt, H-P; Uhl, A

    2003-05-01

    We discuss computationally efficient techniques for confidential storage and transmission of medical image data. Two types of partial encryption techniques based on AES are proposed. The first encrypts a subset of bitplanes of plain image data whereas the second encrypts parts of the JPEG2000 bitstream. We find that encrypting between 20% and 50% of the visual data is sufficient to provide high confidentiality.
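    As a rough illustration of the first approach mentioned above (encrypting only a subset of bitplanes), the sketch below AES-encrypts the most significant bitplanes of an 8-bit image. The library (pycryptodome), the CTR mode, and the choice of the top two planes are assumptions for illustration only, not the authors' implementation.

      # Sketch: encrypt only the most significant bitplanes of an 8-bit image.
      # Illustrative only; library and plane selection are assumptions.
      import numpy as np
      from Crypto.Cipher import AES
      from Crypto.Random import get_random_bytes

      def encrypt_bitplanes(img: np.ndarray, key: bytes, planes=(7, 6)) -> np.ndarray:
          """Encrypt the selected bitplanes of a uint8 image with AES-CTR."""
          assert img.dtype == np.uint8
          out = img.copy()
          for p in planes:
              plane = ((img >> p) & 1).astype(np.uint8)            # extract bitplane p
              packed = np.packbits(plane.ravel())                  # 8 pixels per byte
              cipher = AES.new(key, AES.MODE_CTR, nonce=bytes([p]) + bytes(7))
              enc = np.frombuffer(cipher.encrypt(packed.tobytes()), dtype=np.uint8)
              enc_plane = np.unpackbits(enc)[: plane.size].reshape(plane.shape).astype(np.uint8)
              keep = np.uint8(0xFF ^ (1 << p))                     # clear plane p ...
              out = (out & keep) | (enc_plane << np.uint8(p))      # ... and write the encrypted bits back
          return out

      key = get_random_bytes(16)
      image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      scrambled = encrypt_bitplanes(image, key)

    Because only two of eight bitplanes are touched, roughly 25% of the visual data is processed by the cipher, which is in the 20-50% range the study found sufficient for high confidentiality.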

  3. Basic Investigation on Medical Ultrasonic Echo Image Compression by JPEG2000 - Availability of Wavelet Transform and ROI Method

    DTIC Science & Technology

    2001-10-25

    Table III. In spite of the same quality in ROI, it is decided that the images in the cases where QF is 1.3, 1.5 or 2.0 are not good for diagnosis. Of...but (b) is not good for diagnosis by decision of ultrasonographer. Results reveal that wavelet transform achieves higher quality of image compared

  4. Air Force Institute of Technology Research Report 2008

    DTIC Science & Technology

    2009-05-01

    Chapter) Instructor of the Year, March 2008. PETERSON, GILBERT L. Air Force Junior Scientist of the Year, September 2008. RAINES, RICHARD A...DIRECTORATE RODRIGUEZ, BENJAMIN M., II, JPEG Steganography Embedding Methods. AFIT/DEE/ENG/08-20. Faculty Advisor: Dr. Gilbert L. Peterson. Sponsor...Faculty Advisor: Dr. Gilbert L. Peterson. Sponsor: AFRL/RY. GIRARD, JASON A., Material Perturbations to Enhance Performance of the Theile Half-Width

  5. Open Data, Open Source and Open Standards in chemistry: The Blue Obelisk five years on

    PubMed Central

    2011-01-01

    Background The Blue Obelisk movement was established in 2005 as a response to the lack of Open Data, Open Standards and Open Source (ODOSOS) in chemistry. It aims to make it easier to carry out chemistry research by promoting interoperability between chemistry software, encouraging cooperation between Open Source developers, and developing community resources and Open Standards. Results This contribution looks back on the work carried out by the Blue Obelisk in the past 5 years and surveys progress and remaining challenges in the areas of Open Data, Open Standards, and Open Source in chemistry. Conclusions We show that the Blue Obelisk has been very successful in bringing together researchers and developers with common interests in ODOSOS, leading to development of many useful resources freely available to the chemistry community. PMID:21999342

  6. Open Genetic Code: on open source in the life sciences.

    PubMed

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility with regard to patenting and its relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question of whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life, understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  7. The Open Source Teaching Project (OSTP): Research Note.

    ERIC Educational Resources Information Center

    Hirst, Tony

    The Open Source Teaching Project (OSTP) is an attempt to apply a variant of the successful open source software approach to the development of educational materials. Open source software is software licensed in such a way as to allow anyone the right to modify and use it. From such a simple premise, a whole industry has arisen, most notably in the…

  8. Free for All: Open Source Software

    ERIC Educational Resources Information Center

    Schneider, Karen

    2008-01-01

    Open source software has become a catchword in libraryland. Yet many remain unclear about open source's benefits--or even what it is. So what is open source software (OSS)? It's software that is free in every sense of the word: free to download, free to use, and free to view or modify. Most OSS is distributed on the Web and one doesn't need to…

  9. Reflections on the role of open source in health information system interoperability.

    PubMed

    Sfakianakis, S; Chronaki, C E; Chiarugi, F; Conforti, F; Katehakis, D G

    2007-01-01

    This paper reflects on the role of open source in health information system interoperability. Open source is a driving force in computer science research and the development of information systems. It facilitates the sharing of information and ideas, enables evolutionary development and open collaborative testing of code, and broadens the adoption of interoperability standards. In health care, information systems have been developed largely ad hoc following proprietary specifications and customized design. However, the wide deployment of integrated services such as Electronic Health Records (EHRs) over regional health information networks (RHINs) relies on interoperability of the underlying information systems and medical devices. This reflection is built on the experiences of the PICNIC project that developed shared software infrastructure components in open source for RHINs and the OpenECG network that offers open source components to lower the implementation cost of interoperability standards such as SCP-ECG, in electrocardiography. Open source components implementing standards and a community providing feedback from real-world use are key enablers of health care information system interoperability. Investing in open source is investing in interoperability and a vital aspect of a long term strategy towards comprehensive health services and clinical research.

  10. Open Standards, Open Source, and Open Innovation: Harnessing the Benefits of Openness

    ERIC Educational Resources Information Center

    Committee for Economic Development, 2006

    2006-01-01

    Digitization of information and the Internet have profoundly expanded the capacity for openness. This report details the benefits of openness in three areas--open standards, open-source software, and open innovation--and examines the major issues in the debate over whether openness should be encouraged or not. The report explains each of these…

  11. The 2015 Bioinformatics Open Source Conference (BOSC 2015)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J. A.; Lapp, Hilmar

    2016-01-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included “Data Science;” “Standards and Interoperability;” “Open Science and Reproducibility;” “Translational Bioinformatics;” “Visualization;” and “Bioinformatics Open Source Project Updates”. In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled “Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community,” that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule. PMID:26914653

  12. Enhanced virtual microscopy for collaborative education.

    PubMed

    Triola, Marc M; Holloway, William J

    2011-01-26

    Curricular reform efforts and a desire to use novel educational strategies that foster student collaboration are challenging the traditional microscope-based teaching of histology. Computer-based histology teaching tools and Virtual Microscopes (VM), computer-based digital slide viewers, have been shown to be effective and efficient educational strategies. We developed an open-source VM system based on the Google Maps engine to transform our histology education and introduce new teaching methods. This VM allows students and faculty to collaboratively create content and annotate slides with markers, and it is enhanced with social networking features to give the community of learners more control over the system. We currently have 1,037 slides in our VM system, comprising 39,386,941 individual JPEG files that take up 349 gigabytes of server storage space. Of those slides, 682 are for general teaching and available to our students and the public; the remaining 355 slides are used for practical exams and have restricted access. The system has seen extensive use, with 289,352 unique slide views to date. Students viewed an average of 56.3 slides per month during the histology course and accessed the system at all hours of the day. Of the 621 annotations added to 126 slides, 26.2% were added by faculty and 73.8% by students. The use of the VM system reduced the amount of time faculty spent administering the course by 210 hours, but did not reduce the number of laboratory sessions or the number of required faculty. Laboratory sessions were reduced from three hours to two hours each due to the efficiencies in the workflow of the VM system. Our virtual microscope system has been an effective solution to the challenges facing traditional histopathology laboratories and the novel needs of our revised curriculum. The web-based system allowed us to empower learners to have greater control over their content, as well as the ability to work together in collaborative groups. The VM system saved faculty time, and there was no significant difference in student performance on an identical practical exam before and after its adoption. We have made the source code of our VM freely available and encourage use of the publicly available slides on our website.
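    To give a sense of where a tile count in the tens of millions comes from, the sketch below computes the number of 256-pixel JPEG tiles in a Google Maps-style zoom pyramid for one virtual slide; the slide dimensions and tile size are illustrative assumptions, not figures from the paper.

      # Sketch: count JPEG tiles in a Google Maps-style pyramid for one virtual slide.
      # Slide dimensions and tile size are illustrative assumptions.
      import math

      def pyramid_tile_count(width_px: int, height_px: int, tile: int = 256) -> int:
          """Sum tiles over all zoom levels, halving resolution until one tile remains."""
          total = 0
          w, h = width_px, height_px
          while True:
              cols = math.ceil(w / tile)
              rows = math.ceil(h / tile)
              total += cols * rows
              if cols == 1 and rows == 1:
                  break
              w = max(1, w // 2)
              h = max(1, h // 2)
          return total

      # A hypothetical 80,000 x 60,000 pixel whole-slide scan:
      print(pyramid_tile_count(80_000, 60_000))   # roughly 98,000 tiles for a single slide

    At that rate, a collection of about a thousand slides of this size would indeed reach tens of millions of individual JPEG tiles.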

  13. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder was designed to record video pictures from wireless capsule endoscopes. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation speed of the DSP and to reduce the size of the executable code. At the same time, proper addresses are assigned to the memories, which have different speeds; the memory structure is also optimized. In addition, this system makes extensive use of Extended Direct Memory Access (EDMA) to transport and process image data, which results in stable, high performance.
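    The core of the JPEG compression path mentioned above (a DCT on 8x8 blocks followed by coefficient quantization) can be sketched as follows; this is a generic illustration using the standard JPEG luminance quantization table, not the authors' DSP implementation.

      # Sketch: the DCT + quantization step at the heart of a JPEG encoder.
      # Generic illustration (numpy/scipy), not the TMS320C6211 code of the paper.
      import numpy as np
      from scipy.fftpack import dct

      # Standard JPEG luminance quantization table (quality ~50).
      Q50 = np.array([
          [16, 11, 10, 16, 24, 40, 51, 61],
          [12, 12, 14, 19, 26, 58, 60, 55],
          [14, 13, 16, 24, 40, 57, 69, 56],
          [14, 17, 22, 29, 51, 87, 80, 62],
          [18, 22, 37, 56, 68, 109, 103, 77],
          [24, 35, 55, 64, 81, 104, 113, 92],
          [49, 64, 78, 87, 103, 121, 120, 101],
          [72, 92, 95, 98, 112, 100, 103, 99],
      ])

      def dct2(block: np.ndarray) -> np.ndarray:
          """2-D type-II DCT with orthonormal scaling, as used in JPEG."""
          return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

      def encode_block(block_u8: np.ndarray) -> np.ndarray:
          """Level-shift an 8x8 block, transform it, and quantize the coefficients."""
          shifted = block_u8.astype(np.float64) - 128.0
          coeffs = dct2(shifted)
          return np.round(coeffs / Q50).astype(np.int16)

      block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
      print(encode_block(block))   # most high-frequency entries quantize to zero

    The quantized coefficients are then entropy-coded; in the system described above that stage, like the DCT itself, runs on the DSP with hand-optimized fast algorithms.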

  14. And Then There Were Three...!

    NASA Astrophysics Data System (ADS)

    2000-01-01

VLT MELIPAL Achieves Successful "First Light" in Record Time This was a night to remember at the ESO Paranal Observatory! For the first time, three 8.2-m VLT telescopes were observing in parallel, with a combined mirror surface of nearly 160 m². In the evening of January 26, the third 8.2-m Unit Telescope, MELIPAL ("The Southern Cross" in the Mapuche language), was pointed to the sky for the first time and successfully achieved "First Light". During this night, a number of astronomical exposures were made that served to provisionally evaluate the performance of the new telescope. The ESO staff expressed great satisfaction with MELIPAL and there were broad smiles all over the mountain. The first images ESO PR Photo 04a/00 [Preview - JPEG: 400 x 352 pix - 95k] [Normal - JPEG: 800 x 688 pix - 110k] Caption: ESO PR Photo 04a/00 shows the "very first light" image for MELIPAL. It is that of a relatively bright star, as recorded by the Guide Probe at about 21:50 hrs local time on January 26, 2000. It is a 0.1 sec exposure, obtained after preliminary adjustment of the optics during a few iterations with the computer-controlled "active optics" system. The image quality is measured as 0.46 arcsec FWHM (Full-Width at Half Maximum). ESO PR Photo 04b/00 [Preview - JPEG: 400 x 429 pix - 39k] [Normal - JPEG: 885 x 949 pix - 766k] Caption: ESO PR Photo 04b/00 shows the central region of the Crab Nebula, the famous supernova remnant in the constellation Taurus (The Bull). It was obtained early in the night of "First Light" with the third 8.2-m VLT Unit Telescope, MELIPAL. It is a composite of several 30-sec exposures with the VLT Test Camera in three broad-band filters, B (here rendered as blue; mostly synchrotron emission), V (green) and R (red; mostly emission from hydrogen atoms). The Crab Pulsar is visible to the left; it is the lower of the two brightest stars near each other. The image quality is about 0.9 arcsec, and is completely determined by the external seeing caused by the atmospheric turbulence above the telescope at the time of the observation. The coloured, vertical lines to the left are artifacts of a "bad column" of the CCD. The field measures about 1.3 x 1.3 arcmin². This image may be compared with that of the same area that was recently obtained with the FORS2 instrument at KUEYEN (PR Photo 40g/99). Following two days of preliminary adjustments after the installation of the secondary mirror, cf. ESO PR Photos 03a-n/00, MELIPAL was pointed to the sky above Paranal for the first time, soon after sunset in the evening of January 26. The light of a bright star was directed towards the Guide Probe camera, and the VLT Commissioning Team, headed by Dr. Jason Spyromilio, initiated the active optics procedure. This adjusts the 150 computer-controlled supports under the main 8.2-m Zerodur mirror as well as the position of the secondary 1.1-m Beryllium mirror. After just a few iterations, the optical quality of the recorded stellar image was measured as 0.46 arcsec (PR Photo 04a/00), a truly excellent value, especially at this stage! Immediately thereafter, at 22:16 hrs local time (i.e., at 01:16 hrs UT on January 27), the shutter of the VLT Test Camera at the Cassegrain focus was opened. A 1-min exposure was made through an R(ed) optical filter of a distant star cluster in the constellation Eridanus (The River). The light from its faint stars was recorded by the CCD at the focal plane and the resulting frame was read into the computer.
Despite the comparatively short exposure time, myriads of stars were seen when this "first frame" was displayed on the computer screen. Moreover, the sizes of these images were found to be virtually identical to the 0.6 arcsec seeing measured simultaneously with a monitor telescope outside the telescope enclosure. This confirmed that MELIPAL was in very good shape. Nevertheless, these very first images were still slightly elongated, and further optical adjustments and tests were therefore made to eliminate this unwanted effect. It is a tribute to the extensive experience and fine skills of the ESO staff that within only 1 hour, a 30 sec exposure of the central region of the Crab Nebula in Taurus with round images was obtained, cf. PR Photo 04b/00. The ESO Director General, Dr. Catherine Cesarsky, who assumed her function in September 1999, was present in the Control Room during these operations. She expressed great satisfaction with the excellent result and warmly congratulated the ESO staff on this achievement. She was particularly impressed with the apparent ease with which a completely new telescope of this size could be adjusted in such a short time. A part of her statement on this occasion was recorded on ESO PR Video Clip 02/00 that accompanies this Press Release. Three telescopes now in operation at Paranal At 02:30 UT on January 27, 2000, three VLT Unit Telescopes were observing in parallel, with measured seeing values of 0.6 arcsec (ANTU - "The Sun"), 0.7 arcsec (KUEYEN - "The Moon") and 0.7 arcsec (MELIPAL). MELIPAL has now joined ANTU and KUEYEN, which had "First Light" in May 1998 and March 1999, respectively. The fourth VLT Unit Telescope, YEPUN ("Sirius"), will become operational later this year. While normal scientific observations continue with ANTU, the UVES and FORS2 astronomical instruments are now being commissioned at KUEYEN, before this telescope is handed over to the astronomers on April 1, 2000. The telescope commissioning period will now start for MELIPAL, after which its first instrument, VIMOS, will be installed later this year. Impressions from the MELIPAL "First Light" event First Light for MELIPAL ESO PR Video Clip 02/00 "First Light for MELIPAL" (3350 frames/2:14 min) [MPEG Video+Audio; 160x120 pix; 3.1Mb] [MPEG Video+Audio; 320x240 pix; 9.4 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02/00 shows sequences from the Control Room at the Paranal Observatory, recorded with a fixed TV-camera on January 27 at 03:00 UT, soon after the moment of "First Light" with the third 8.2-m VLT Unit Telescope (MELIPAL). The video sequences were transmitted via ESO's dedicated satellite communication link to the Headquarters in Garching for production of the Clip. It begins with a statement by the Manager of the VLT Project, Dr. Massimo Tarenghi, as exposures of the Crab Nebula are obtained with the telescope and the raw frames are successively displayed on the monitor screen. In a following sequence, ESO's Director General, Dr. Catherine Cesarsky, briefly relates the moment of "First Light" for MELIPAL, as she experienced it at the telescope controls. ESO Press Photo 04c/00 [Preview; JPEG: 400 x 300; 44k] [Full size; JPEG: 1600 x 1200; 241k] The computer screen with the image of a bright star, as recorded by the Guide Probe in the early evening of January 26; see also PR Photo 04a/00. This image was used for the initial adjustments by means of the active optics system. (Digital Photo).
ESO Press Photo 04d/00 [Preview; JPEG: 400 x 314; 49k] [Full size; JPEG: 1528 x 1200; 189k] ESO staff at the moment of "First Light" for MELIPAL in the evening of January 26. The photo was made in the wooden hut on the telescope observing floor from where the telescope was controlled during the first hours. (Digital Photo). ESO PR Photos may be reproduced if credit is given to the European Southern Observatory. The ESO PR Video Clips service provides visitors to the ESO website with "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 01/00 with aerial sequences from Paranal (12 January 2000). Information is also available on the web about other ESO videos.

  15. The 2016 Bioinformatics Open Source Conference (BOSC).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.

  16. Beyond Open Source: According to Jim Hirsch, Open Technology, Not Open Source, Is the Wave of the Future

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    This article presents an interview with Jim Hirsch, an associate superintendent for technology at Plano Independent School District in Plano, Texas. Hirsch serves as a liaison for the open technologies committee of the Consortium for School Networking. In this interview, he shares his opinion on the significance of open source in K-12.

  17. EMISSIONS OF ORGANIC AIR TOXICS FROM OPEN ...

    EPA Pesticide Factsheets

    A detailed literature search was performed to collect and collate available data reporting emissions of toxic organic substances into the air from open burning sources. Availability of data varied according to the source and the class of air toxics of interest. Volatile organic compound (VOC) and polycyclic aromatic hydrocarbon (PAH) data were available for many of the sources. Data on semivolatile organic compounds (SVOCs) that are not PAHs were available for several sources. Carbonyl and polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofuran (PCDD/F) data were available for only a few sources. There were several sources for which no emissions data were available at all. Several observations were made including: 1) Biomass open burning sources typically emitted less VOCs than open burning sources with anthropogenic fuels on a mass emitted per mass burned basis, particularly those where polymers were concerned; 2) Biomass open burning sources typically emitted less SVOCs and PAHs than anthropogenic sources on a mass emitted per mass burned basis. Burning pools of crude oil and diesel fuel produced significant amounts of PAHs relative to other types of open burning. PAH emissions were highest when combustion of polymers was taking place; and 3) Based on very limited data, biomass open burning sources typically produced higher levels of carbonyls than anthropogenic sources on a mass emitted per mass burned basis, probably due to oxygenated structures r

  18. Open-Source 3D-Printable Optics Equipment

    PubMed Central

    Zhang, Chenlong; Anzalone, Nicholas C.; Faria, Rodrigo P.; Pearce, Joshua M.

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science and to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform is illustrated as a controller for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods. PMID:23544104

  19. Open-source 3D-printable optics equipment.

    PubMed

    Zhang, Chenlong; Anzalone, Nicholas C; Faria, Rodrigo P; Pearce, Joshua M

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science and to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of an open-source electronics prototyping platform is illustrated as a controller for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation, both as research and teaching platforms, than previous proprietary methods.

  20. Aerostat-Lofted Instrument Platform and Sampling Method for Determination of Emissions from Open Area Sources

    EPA Science Inventory

    Sampling emissions from open area sources, particularly sources of open burning, is difficult due to fast dilution of emissions and safety concerns for personnel. Representative emission samples can be difficult to obtain with flaming and explosive sources since personnel safety ...

  1. The Visible Human Data Sets (VHD) and Insight Toolkit (ITk): Experiments in Open Source Software

    PubMed Central

    Ackerman, Michael J.; Yoo, Terry S.

    2003-01-01

    From its inception in 1989, the Visible Human Project was designed as an experiment in open source software. In 1994 and 1995 the male and female Visible Human data sets were released by the National Library of Medicine (NLM) as open source data sets. In 2002 the NLM released the first version of the Insight Toolkit (ITk) as open source software. PMID:14728278

  2. Capacity is the Wrong Paradigm

    DTIC Science & Technology

    2002-01-01

    short, steganography values detection over robustness, whereas watermarking values robustness over detection.) Hiding techniques for JPEG images ...world length of the code. D: If the algorithm is known, this method is trivially detectable if we are sending images (with no encryption). If we are...implications of the work of Chaitin and Kolmogorov on algorithmic complexity [5]. We have also concentrated on screen images in this paper and have not

  3. Aladin Lite: Lightweight sky atlas for browsers

    NASA Astrophysics Data System (ADS)

    Boch, Thomas

    2014-02-01

    Aladin Lite is a lightweight version of the Aladin tool, running in the browser and geared towards simple visualization of a sky region. It allows visualization of image surveys (JPEG multi-resolution HEALPix all-sky surveys) and permits superimposing tabular (VOTable) and footprint (STC-S) data. Aladin Lite is powered by HTML5 canvas technology; it is easily embeddable on any web page and can also be controlled through a JavaScript API.

  4. Cloud Intrusion Detection and Repair (CIDAR)

    DTIC Science & Technology

    2016-02-01

    form for VLC, Swftools-png2swf, Swftools-jpeg2swf, Dillo and GIMP. The superscript indicates the bit width of each expression atom. "sext(v, w... challenges in input rectification is the need to deal with nested fields. In general, input formats are in tree structures containing arbitrarily...length indicator constraints is challenging, because of the presence of nested fields in hierarchical input formats. For example, an integer field may

  5. A new concept of real-time security camera monitoring with privacy protection by masking moving objects

    NASA Astrophysics Data System (ADS)

    Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa

    2006-02-01

    Recently, the number of security monitoring cameras has been increasing rapidly. However, it is normally difficult to know when and where we are monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that takes privacy protection into account. We state requirements for monitoring systems in this framework. We propose a possible implementation that satisfies the requirements. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image with the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer contains the objects only in unrecognizable or invisible form. We also introduce in this paper a so-called "special viewer" to decrypt and display the original objects. This special viewer can be used by a limited set of users when necessary, e.g., for crime investigation. The special viewer allows us to choose which objects to decode and display. Moreover, in the proposed system, real-time processing can be performed, since no future frame is needed to generate a bitstream.
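    A minimal sketch of the masking-plus-encryption idea described above is given below: the region covering an object is blanked in the published frame, while the original pixels are AES-encrypted for later recovery by an authorized viewer. The fixed region coordinates, library choices (numpy, pycryptodome) and the omission of the watermarking/JPEG coding steps are simplifying assumptions, not the authors' implementation.

      # Sketch: blank an object region and keep an encrypted copy of the original pixels.
      # Simplified; the paper additionally watermarks the ciphertext into the JPEG bitstream.
      import numpy as np
      from Crypto.Cipher import AES
      from Crypto.Random import get_random_bytes

      def protect_region(frame: np.ndarray, box, key: bytes):
          """Mask the box (x, y, w, h) in the frame; return masked frame and encrypted crop."""
          x, y, w, h = box
          crop = frame[y:y + h, x:x + w].copy()
          masked = frame.copy()
          masked[y:y + h, x:x + w] = 0                       # object becomes invisible
          cipher = AES.new(key, AES.MODE_GCM)
          ciphertext, tag = cipher.encrypt_and_digest(crop.tobytes())
          token = {"box": box, "shape": crop.shape, "nonce": cipher.nonce,
                   "tag": tag, "data": ciphertext}           # kept for the "special viewer"
          return masked, token

      def recover_region(masked: np.ndarray, token, key: bytes) -> np.ndarray:
          """The 'special viewer' side: decrypt the crop and paste it back."""
          cipher = AES.new(key, AES.MODE_GCM, nonce=token["nonce"])
          crop = np.frombuffer(cipher.decrypt_and_verify(token["data"], token["tag"]),
                               dtype=masked.dtype).reshape(token["shape"])
          x, y, w, h = token["box"]
          restored = masked.copy()
          restored[y:y + h, x:x + w] = crop
          return restored

      key = get_random_bytes(16)
      frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
      masked, token = protect_region(frame, (40, 30, 32, 48), key)
      assert np.array_equal(recover_region(masked, token, key), frame)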

  6. JPEG 2000 in advanced ground station architectures

    NASA Astrophysics Data System (ADS)

    Chien, Alan T.; Brower, Bernard V.; Rajan, Sreekanth D.

    2000-11-01

    The integration and management of information from distributed and heterogeneous information producers and providers must be a key foundation of any developing imagery intelligence system. Historically, imagery providers acted as production agencies for imagery, imagery intelligence, and geospatial information. In the future, these imagery producers will evolve to act more like e-business information brokers. The management of imagery and geospatial information (visible, spectral, infrared (IR), radar, elevation, or other feature and foundation data) is crucial from a quality and content perspective. By 2005, there will be significantly advanced collection systems and a myriad of storage devices. There will also be a number of automated and man-in-the-loop correlation, fusion, and exploitation capabilities. All of these new imagery collection and storage systems will result in a higher volume and greater variety of imagery being disseminated and archived in the future. This paper illustrates the importance, from a collection, storage, exploitation, and dissemination perspective, of the proper selection and implementation of standards-based compression technology for ground station and dissemination/archive networks. It specifically discusses the new compression capabilities featured in JPEG 2000 and how that commercially based technology can provide significant improvements to the overall imagery and geospatial enterprise, both from an architectural perspective and from a user's perspective.

  7. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The Compression Algorithm starts with a single level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG concerning higher compression rates with equivalent perceived quality and the ability to more accurately reconstruct the 3D models.
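    The decomposition cascade described above (one DWT level, a DCT of the LL band split into DC and AC matrices, then a second DWT applied to the DC matrix) can be outlined as follows. This sketch uses pywt and scipy, and it omits the Minimize-Matrix-Size coding stage, so it shows only the transform pipeline, not the authors' full codec; block size and wavelet choice are assumptions.

      # Sketch of the transform cascade only; the Minimize-Matrix-Size coding step is omitted.
      import numpy as np
      import pywt
      from scipy.fftpack import dct

      def blockwise_dct(img: np.ndarray, block: int = 8):
          """Apply an orthonormal 2-D DCT per block; split DC terms from AC terms."""
          h, w = img.shape
          h8, w8 = h - h % block, w - w % block
          coeffs = np.zeros((h8, w8))
          for i in range(0, h8, block):
              for j in range(0, w8, block):
                  b = img[i:i + block, j:j + block]
                  coeffs[i:i + block, j:j + block] = dct(dct(b, axis=0, norm="ortho"),
                                                         axis=1, norm="ortho")
          dc = coeffs[::block, ::block].copy()        # DC-matrix: one value per block
          ac = coeffs.copy()
          ac[::block, ::block] = 0                    # AC-matrix: all remaining coefficients
          return dc, ac

      image = np.random.rand(256, 256)

      LL, (HL, LH, HH) = pywt.dwt2(image, "db1")      # first-level DWT of the image
      dc_matrix, ac_matrix = blockwise_dct(LL)        # DCT of LL -> DC-matrix and AC-matrix
      LL2, (HL2, LH2, HH2) = pywt.dwt2(dc_matrix, "db1")   # second-level DWT on the DC-matrix
      LL2_coeffs = dct(dct(LL2, axis=0, norm="ortho"), axis=1, norm="ortho")  # DCT of LL2

    In the paper, the AC-matrix and the HL2/LH2/HH2 sub-bands are then coded with the Minimize-Matrix-Size algorithm, which is where the actual bit-rate reduction happens.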

  8. Impact of JPEG2000 compression on endmember extraction and unmixing of remotely sensed hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada

    2010-07-01

    Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spectral information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the abundance estimates derived from the endmembers extracted by the different methods is also assessed. Experiments are conducted using a hyperspectral data set collected by the NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from the U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.

  9. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    NASA Astrophysics Data System (ADS)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium which would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.
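    For readers who want to reproduce this kind of compression-ratio sweep, the sketch below saves an image at several JPEG quality settings with Pillow and reports the resulting ratios; the file name and quality levels are placeholders, and whether a given ratio remains clinically acceptable still has to be judged by an endoscopist.

      # Sketch: measure JPEG compression ratios at several quality settings with Pillow.
      # "endoscopy.png" and the quality levels are illustrative placeholders.
      import io
      from PIL import Image

      def jpeg_ratios(path: str, qualities=(90, 75, 50, 25, 10)):
          img = Image.open(path).convert("RGB")
          raw_size = img.width * img.height * 3          # uncompressed 24-bit size in bytes
          results = []
          for q in qualities:
              buf = io.BytesIO()
              img.save(buf, format="JPEG", quality=q)
              size = buf.tell()
              results.append((q, size, raw_size / size))
          return results

      for q, size, ratio in jpeg_ratios("endoscopy.png"):
          print(f"quality {q:3d}: {size / 1024:7.1f} KB  (compression ratio {ratio:5.1f}x)")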

  10. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
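    The distortion types listed above are straightforward to generate; the sketch below applies JPEG compression, Gaussian blur, additive white noise and a contrast change to a face image using Pillow and numpy. All parameter values and the file name are arbitrary illustrative choices (the QLFW levels themselves are defined by the database authors), and JPEG2000 is omitted here because it needs an extra codec plugin.

      # Sketch: generate the distortion types listed above (arbitrary parameter values).
      import io
      import numpy as np
      from PIL import Image, ImageEnhance, ImageFilter

      def jpeg_compress(img: Image.Image, quality: int = 10) -> Image.Image:
          buf = io.BytesIO()
          img.save(buf, format="JPEG", quality=quality)
          buf.seek(0)
          return Image.open(buf).convert("RGB")

      def add_white_noise(img: Image.Image, sigma: float = 15.0) -> Image.Image:
          arr = np.asarray(img, dtype=np.float32)
          noisy = arr + np.random.normal(0.0, sigma, arr.shape)
          return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

      face = Image.open("face.jpg").convert("RGB")        # placeholder file name
      distorted = {
          "jpeg":     jpeg_compress(face, quality=10),
          "blur":     face.filter(ImageFilter.GaussianBlur(radius=3)),
          "noise":    add_white_noise(face, sigma=15.0),
          "contrast": ImageEnhance.Contrast(face).enhance(0.5),
      }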

  11. The 2016 Bioinformatics Open Source Conference (BOSC)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science. PMID:27781083

  12. OpenMx: An Open Source Extended Structural Equation Modeling Framework

    ERIC Educational Resources Information Center

    Boker, Steven; Neale, Michael; Maes, Hermine; Wilde, Michael; Spiegel, Michael; Brick, Timothy; Spies, Jeffrey; Estabrook, Ryne; Kenny, Sarah; Bates, Timothy; Mehta, Paras; Fox, John

    2011-01-01

    OpenMx is free, full-featured, open source, structural equation modeling (SEM) software. OpenMx runs within the "R" statistical programming environment on Windows, Mac OS-X, and Linux computers. The rationale for developing OpenMx is discussed along with the philosophy behind the user interface. The OpenMx data structures are…

  13. a Framework for AN Open Source Geospatial Certification Model

    NASA Astrophysics Data System (ADS)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecasted to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, Open Source solutions, open data proliferation, and the use of open standards have increasing significance in the geospatial and IT arenas as well as in political discussion and legislation. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors such as Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse body-of-knowledge concepts, e.g., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the certification framework. In addition to the theoretical analysis of existing resources, the geospatial community was involved in two ways. An online survey about the relevance of Open Source was performed and evaluated with 105 respondents worldwide. 15 interviews (face-to-face or by telephone) with experts in different countries provided additional insights into Open Source usage and certification. The findings led to the development of a certification framework of three main categories with eleven sub-categories in total: "Certified Open Source Geospatial Data Associate / Professional", "Certified Open Source Geospatial Analyst Remote Sensing & GIS", "Certified Open Source Geospatial Cartographer", "Certified Open Source Geospatial Expert", "Certified Open Source Geospatial Associate Developer / Professional Developer", and "Certified Open Source Geospatial Architect". Each certification is described by pre-conditions, scope and objectives, course content, recommended software packages, target group, expected benefits, and the methods of examination. Examinations can be complemented by proof of professional career paths and achievements, which require peer evaluation. After a couple of years, recertification is required. The concept seeks accreditation by the OSGeo Foundation (and other bodies) and international support from a group of geospatial scientific institutions to achieve wide international acceptance for this Open Source geospatial certification model. A business case for Open Source certification and a corresponding SWOT model are examined to support the goals of the Geo-For-All initiative of the ICA-OSGeo pact.

  14. The Case for Open Source: Open Source Has Made Significant Leaps in Recent Years. What Does It Have to Offer Education?

    ERIC Educational Resources Information Center

    Guhlin, Miguel

    2007-01-01

    Open source has continued to evolve and in the past three years the development of a graphical user interface has made it increasingly accessible and viable for end users without special training. Open source relies to a great extent on the free software movement. In this context, the term free refers not to cost, but to the freedom users have to…

  15. SolTrace | Concentrating Solar Power | NREL

    Science.gov Websites

    SolTrace is NREL software for modelling concentrating solar power systems; it is available as an NREL packaged distribution or from source code at the SolTrace open source project website. The code uses a Monte-Carlo ray-tracing methodology. With the release of the SolTrace open source project, the software has adopted…

  16. Leveraging Metadata to Create Interactive Images... Today!

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard, creating a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates, and make it accessible in a user-friendly form on the website; the same metadata is also embedded within the image files themselves. Thus, images downloaded from the site carry all their descriptive information with them. Real-world benefits include the display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). Microsoft's WorldWide Telescope offers more advanced support: it can open a tagged image after it has been downloaded and display it at its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in their applications, and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM standard offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
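    AVM tags travel inside the image file as an XMP packet, which is why downloaded JPEGs keep their descriptive information. As a minimal illustration (not the Spitzer gallery's actual tooling), the sketch below scans a JPEG for the standard XMP APP1 payload using only the Python standard library; the file name and the assumption that AVM properties appear as avm:-prefixed XML elements are illustrative.

```python
"""Minimal sketch: pull the embedded XMP packet (which carries AVM tags) out of
a JPEG using only the standard library. The file name and the 'avm:' element
form are illustrative assumptions; real AVM-aware tools use a proper XMP library."""
import re
import sys

XMP_MARKER = b"http://ns.adobe.com/xap/1.0/\x00"  # standard XMP-in-JPEG APP1 identifier

def extract_xmp(path):
    data = open(path, "rb").read()
    start = data.find(XMP_MARKER)
    if start == -1:
        return None
    start += len(XMP_MARKER)
    end = data.find(b"</x:xmpmeta>", start)
    if end == -1:
        return None
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", errors="replace")

if __name__ == "__main__":
    xmp = extract_xmp(sys.argv[1] if len(sys.argv) > 1 else "spitzer_image.jpg")
    if xmp is None:
        print("No XMP packet found")
    else:
        # List whatever avm:* properties are present as simple XML elements
        # (exact property names and serialization depend on the AVM schema).
        for name, value in re.findall(r"<(avm:[\w.]+)>([^<]*)<", xmp):
            print(f"{name} = {value}")
```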

  17. When Free Isn't Free: The Realities of Running Open Source in School

    ERIC Educational Resources Information Center

    Derringer, Pam

    2009-01-01

    Despite the last few years' growth in awareness of open-source software in schools and the potential savings it represents, its widespread adoption is still hampered. Randy Orwin, technology director of the Bainbridge Island School District in Washington State and a strong open-source advocate, cautions that installing an open-source…

  18. OMPC: an Open-Source MATLAB®-to-Python Compiler

    PubMed Central

    Jurica, Peter; van Leeuwen, Cees

    2008-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577
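    The abstract does not describe OMPC's internals, so the toy below only illustrates the general idea of syntax adaptation: rewrite a few MATLAB constructs into NumPy-backed Python text and execute the result. It is not OMPC's translator or API, and the handful of regex rules are purely illustrative.

```python
"""Toy illustration of MATLAB-to-Python syntax adaptation in the spirit of OMPC.
NOT OMPC's actual translator -- just a minimal sketch of the general idea."""
import re
import numpy as np

def translate(matlab_src: str) -> str:
    py = matlab_src
    py = re.sub(r"%", "#", py)                          # MATLAB comments -> Python comments
    py = re.sub(r"zeros\(([^)]*)\)", r"np.zeros((\1))", py)  # zeros(m, n) -> np.zeros((m, n))
    py = re.sub(r"(\w+)\s*=\s*(.+);", r"\1 = \2", py)   # drop trailing semicolons
    py = py.replace(".*", "*")                          # elementwise product on ndarrays
    return py

matlab_code = "a = zeros(3, 3);\nb = a .* 2;  % elementwise scaling"
python_code = translate(matlab_code)
print(python_code)
exec(python_code)   # runs against NumPy instead of MATLAB
```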

  19. Open Source Vision

    ERIC Educational Resources Information Center

    Villano, Matt

    2006-01-01

    Increasingly, colleges and universities are turning to open source as a way to meet their technology infrastructure and application needs. Open source has changed life for visionary CIOs and their campus communities nationwide. The author discusses what these technologists see as the benefits--and the considerations.

  20. SINFONI Opens with Upbeat Chords

    NASA Astrophysics Data System (ADS)

    2004-08-01

    First Observations with New VLT Instrument Hold Great Promise [1] Summary The European Southern Observatory, the Max-Planck-Institute for Extraterrestrial Physics (Garching, Germany) and the Nederlandse Onderzoekschool Voor Astronomie (Leiden, The Netherlands), and with them all European astronomers, are celebrating the successful accomplishment of "First Light" for the Adaptive Optics (AO) assisted SINFONI ("Spectrograph for INtegral Field Observation in the Near-Infrared") instrument, just installed on ESO's Very Large Telescope at the Paranal Observatory (Chile). This is the first facility of its type ever installed on an 8-m class telescope, now providing exceptional observing capabilities for the imaging and spectroscopic studies of very complex sky regions, e.g. stellar nurseries and black-hole environments, also in distant galaxies. Following smooth assembly at the 8.2-m VLT Yepun telescope of SINFONI's two parts, the Adaptive Optics Module that feeds the SPIFFI spectrograph, the "First Light" spectrum of a bright star was recorded with SINFONI in the early evening of July 9, 2004. The following thirteen nights served to evaluate the performance of the new instrument and to explore its capabilities by test observations on a selection of exciting astronomical targets. They included the Galactic Centre region, already imaged with the NACO AO-instrument on the same telescope. Unprecedented high-angular resolution spectra and images were obtained of stars in the immediate vicinity of the massive central black hole. During the night of July 15 - 16, SINFONI recorded a flare from this black hole in great detail. Other interesting objects observed during this period include galaxies with active nuclei (e.g., the Circinus Galaxy and NGC 7469), a merging galaxy system (NGC 6240) and a young starforming galaxy pair at redshift 2 (BX 404/405). These first results were greeted with enthusiasm by the team of astronomers and engineers [2] from the consortium of German and Dutch Institutes and ESO who have worked on the development of SINFONI for nearly 7 years. The work on SINFONI at Paranal included successful commissioning in June 2004 of the Adaptive Optics Module built by ESO, during which exceptional test images were obtained of the main-belt asteroid (22) Kalliope and its moon. Moreover, the ability was demonstrated to correct the atmospheric turbulence by means of even very faint "guide" objects (magnitude 17.5), crucial for the observation of astronomical objects in many parts of the sky. SPIFFI - SPectrometer for Infrared Faint Field Imaging - was developed at the Max Planck Institute for Extraterrestrische Physik (MPE) in Garching (Germany), in a collaboration with the Nederlandse Onderzoekschool Voor Astronomie (NOVA) in Leiden and the Netherlands Foundation for Research in Astronomy (ASTRON), and ESO. 
PR Photo 24a/04: SINFONI Adaptive Optics Module at VLT Yepun (June 2004) PR Photo 24b/04: SINFONI at VLT Yepun, now fully assembled (July 2004) PR Photo 24c/04: "First Light" image from the SINFONI Adaptive Optics Module PR Photo 24d/04: AO-corrected Image of a 17.5-magnitude Star PR Photo 24e/04: SINFONI undergoing Balancing and Flexure Tests at VLT Yepun PR Photo 24f/04: SINFONI "First Light" Spectrum of HD 130163 PR Photo 24g/04: Members of the SINFONI Adaptive Optics Module Commissioning Team PR Photo 24h/04: Members of the SPIFFI Commissioning Team PR Photo 24i/04: The Principle of Integral Field Spectroscopy (IFS) PR Photo 24j/04: The Orbital Motion of Linus around (22) Kalliope PR Photo 24k/04: SINFONI Observations of the Galactic Centre Region PR Photo 24l/04: SINFONI Observations of the Circinus Galaxy PR Photo 24m/04: SINFONI Observations of the AGN Galaxy NGC 7469 PR Photo 24n/04: SINFONI Observations of NGC 6240 PR Photo 24o/04: SINFONI Observations of the Young Starforming Galaxies BX 404/405 PR Video Clip 07/04: The Orbital Motion of Linus around (22) Kalliope SINFONI: A powerful and complex instrument ESO PR Photo 24a/04 ESO PR Photo 24a/04 The SINFONI Adaptive Optics Module Commissioning Setup [Preview - JPEG: 427 x 400 pix - 230k] [Normal - JPEG: 854 x 800 pix - 551k] ESO PR Photo 24b/04 ESO PR Photo 24b/04 SINFONI at the VLT Yepun Cassegrain Focus [Preview - JPEG: 414 x 400 pix - 222k] [Normal - JPEG: 827 x 800 pix - 574k] Captions: ESO PR Photo 24a/04 shows the SINFONI Adaptive Optics Module, installed at the 8.2-m VLT YEPUN telescope during the first tests in June 2004. At this time, SPIFFI was not yet installed. The blue ring is the Adaptive Optics Module. The yellow parts, with a weight of 800 kg, simulate SPIFFI. The IR Test Imager is located inside the yellow ring. On ESO PR Photo 24b/04, the Near-Infrared Spectrograph SPIFFI in its cryogenic aluminium cylinder has now been attached. A new and very powerful astronomical instrument, a world-leader in its field, has been installed on the Very Large Telescope at the Paranal Observatory (Chile), cf. PR Photos 24a-b/04. Known as SINFONI ("Spectrograph for INtegral Field Observation in the Near-Infrared"), it was mounted in two steps at the Cassegrain focus of the 8.2-m VLT YEPUN telescope. First Light of the completed instrument was achieved on July 9, 2004 and various test observations during the subsequent commissioning phase were carried out with great success. SINFONI has two parts, the Near Infrared Integral Field Spectrograph, also known as SPIFFI (SPectrometer for Infrared Faint Field Imaging), and the Adaptive Optics Module. SPIFFI was developed at the Max Planck Institute for Extraterrestrische Physik (MPE) (Garching, Germany), in a collaboration with the Nederlandse Onderzoekschool Voor Astronomie (NOVA) in Leiden, the Netherlands Foundation for Research in Astronomy (ASTRON) (The Netherlands), and the European Southern Observatory (ESO) (Garching, Germany). The Adaptive Optics (AO) Module was developed by ESO. Once fully commissioned, SINFONI will provide adaptive-optics assisted Integral Field Spectroscopy in the near-infrared 1.1 - 2.45 µm waveband. This advanced technique provides simultaneous spectra of numerous adjacent regions in a small sky field, e.g., of an interstellar nebula, the stars in a dense stellar cluster or a galaxy. Astronomers refer to these data as "3D-spectra" or "data cubes" (i.e., one spectrum for each small area in the two-dimensional sky field), cf. Appendix A. 
The SINFONI Adaptive Optics Module is based on a 60-element curvature system, similar to the Multi Application Curvature Adaptive Optics devices (MACAO), developed by the ESO Adaptive Optics Department and of which three have already been installed at the VLT (ESO PR 11/03); the last one in August 2004. Provided a sufficiently bright reference source ("guide star") is available within 60 arcsec of the observed field, the SINFONI AO module will ultimately offer diffraction-limited images (resolution 0.050 arcsec) at a wavelength of 2 µm. At the centre of the field, partial correction can be performed with guide stars as faint as magnitude 17.5. In about 6-months' time, it will benefit from a sodium Laser Guide Star, achieving a much better sky coverage than what is now possible. SPIFFI is a fully cryogenic near-infrared integral field spectrograph allowing observers to obtain simultaneously spectra of 2048 pixels within a 64 x 32 pixel field-of-view. In conjunction with the AO Module, it performs spectroscopy with slit-width sampling at the diffraction limit of an 8-m class telescope. For observations of very faint, extended celestial objects, the spatial resolution can be degraded so that both sensitivity and field-of-view are increased. SPIFFI works in the near-infrared wavelength range (1.1 - 2.45 µm) with a moderate spectral resolving power (R = 1500 to 4500). More information about the way SPIFFI functions will be found in Appendix A. "First Light with SINFONI's Adaptive Optics Module ESO PR Photo 24c/04 ESO PR Photo 24c/04 SINFONI AO "First Light" Image [Preview - JPEG: 400 x 482 pix - 106k] [Normal - JPEG: 800 x 963 pix - 256k] ESO PR Photo 24d/04 ESO PR Photo 24d/04 AO-corrected image of 17.5-magnitude Star [Preview - JPEG: 509 x 400 pix - 80k] [Normal - JPEG: 1018 x 800 pix - 182k] Captions: ESO PR Photo 24c/04 shows the "First Light" image obtained with the SINFONI AO Module and a high-angular-resolution near-infrared Test Camera during the night of May 31 - June 1, 2004. The magnitude of the observed star is 11 and the seeing conditions median. The diffraction limit at wavelength 2.2 µm of the 8.2-m telescope (FWHM 0.06 arcsec) was reached and is indicated by the bar. ESO PR Photo 24d/04: Image of a very faint guide star (visual magnitude 17.5), obtained with the SINFONI AO Module. To the right, the seeing-limited K-band image (FWHM 0.38 arcsec). To the left, the AO-corrected image (FWHM 0.145 arcsec). The ability to perform AO corrections on very faint guide objects is essential for SINFONI in order to observe very faint extragalactic objects. Because of the complexity of SINFONI, with its two modules, it was decided to perform the installation on the 8.2-m VLT Yepun telescope in two steps. The Adaptive Optics module was completely dismounted at ESO-Garching (Germany) and the corresponding 6 tons of equipment was air-freighted from Frankfurt to Santiago de Chile. The subsequent transport by road arrived at the Paranal Observatory on April 21, 2004. After 6 weeks of reintegration and testing in the Integration Hall, the AO Module was mounted on Yepun on May 30 - 31, together with a high-angular-resolution near-infrared Test Camera, cf. PR Photo 24a/04. Technical "First-Light" with this system was achieved around midnight on May 31st by observing a 11-magnitude star, cf. PR Photo 24c/04, reaching right away the theoretical diffraction limit of the 8.2-m telescope (0.06 arcsec) at this wavelength (2.2 µm). 
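The quoted resolutions are consistent with the usual diffraction estimate for an 8.2-m aperture. A quick back-of-envelope check, taking the limit simply as lambda/D (the release may round, or may use the 1.22 lambda/D Rayleigh criterion instead):

```python
"""Back-of-envelope check of the diffraction limits quoted above,
using the simple lambda/D estimate for an 8.2-m telescope."""
import math

D = 8.2                                   # aperture in metres
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

for wavelength_um in (2.0, 2.2):
    theta = (wavelength_um * 1e-6) / D * RAD_TO_ARCSEC
    print(f"lambda = {wavelength_um} um  ->  ~{theta:.3f} arcsec")
# lambda = 2.0 um  ->  ~0.050 arcsec
# lambda = 2.2 um  ->  ~0.055 arcsec (quoted above as about 0.06 arcsec)
```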
Following this early success, the ESO AO team continued the full on-sky tuning and testing of the AO Module until June 8, setting in particular a new world record by reaching a limiting guide-star magnitude of 17.5, two-and-a-half magnitudes (a factor of 10) fainter than ever achieved with any telescope! The ability to perform AO corrections on very faint guide objects is essential for SINFONI in order to observe very faint extragalactic objects. During this commissioning period, test observations were performed of the binary asteroid (22) Kalliope and its moon Linus. They were made by the ESO AO team and served to demonstrate the high performance of this ESO-built Adaptive Optics (AO) system at near-infrared wavelengths. More information about these observations, including a movie of the orbital motion of Linus is available in Appendix B. "First Light" with SINFONI ESO PR Photo 24e/04 ESO PR Photo 24e/04 SINFONI Undergoing Balancing and Flexure Tests at VLT Yepun [Preview - JPEG: 427 x 400 pix - 269k] [Normal - JPEG: 854 x 800 pix - 730k] ESO PR Photo 24f/04 ESO PR Photo 24f/04 SINFONI "First Light" Spectrum [Preview - JPEG: 427 x 400 pix - 94k] [Normal - JPEG: 854 x 800 pix - 222k] Captions: ESO PR Photo 24e/04 shows SINFONI attached to the Cassegrain focus of the 8.2-m VLT Yepun telescope during balancing and flexure tests. ESO PR Photo 24f/04: "First Light" "data cube" spectrum obtained with SINFONI on the bright star HD 130163 on July 9, 2004, as seen on the science data computer screen. This 7th-magnitude A0 V star was observed in the near-infrared H-band with a moderate seeing of 0.8 arcsec. The width of the slitlets in this image is 0.25 arcsec. The exposure time was 1 second. The fully integrated SPIFFI module was air-freighted from Frankfurt to Santiago de Chile and arrived at Paranal on June 5, 2004. The subsequent cool-down to -195 °C was done and an extensive test programme was carried through during the next two weeks. Meanwhile, the AO Module was removed from the telescope and the "wedding" with SPIFFI was celebrated on June 20 in the Paranal Integration Hall. All went well and the first AO-corrected test spectra were obtained immediately thereafter. The extensive tests of SINFONI continued at this site until July 7, 2004, when the instrument was declared fit for work at the telescope. The installation at the 8.2-m VLT Yepun telescope was then accomplished on July 8 - 9, cf. PR Photos 24b/04 and 24e/04. "First Light" was achieved in the early evening of July 9, 2004, only 30 min after the telescope enclosure was opened. At 19:30 local time, SINFONI recorded the first AO-corrected "data cube" with spectra of HD 130163, cf. PR Photo 24f/04. This 7th-magnitude star was observed in the near-infrared H-band with a moderate seeing of 0.8 arcsec. Test Observations with SINFONI ESO PR Photo 24k/04 ESO PR Photo 24k/04 SINFONI Observations of the Galactic Centre [Preview - JPEG: 427 x 400 pix - 213k] [Normal - JPEG: 854 x 800 pix - 511k] ESO PR Photo 24o/04 ESO PR Photo 24o/04 SINFONI Observations of the Distant Galaxy Pair BX 404/405 [Preview - JPEG: 481 x 400 pix - 86k] [Normal - JPEG: 962 x 800 pix - 251k] Captions: ESO PR Photo 24k/04: The coloured image (background) shows a three-band composite image (H, K, and L-bands) obtained with the AO imager NACO on the 8.2-m VLT Yepun telescope. 
On July 15, 2004, the new SINFONI instrument, mounted at the Cassegrain focus of the same telescope, observed the innermost region (the central 1 x 1 arcsec) of the Milky Way Galaxy in the combined H+K band (1.45 - 2.45 µm) during a total of 110 min "on-source". The insert (upper left) shows the immediate neighbourhood of the central black hole as seen with SINFONI. The position of the black hole is marked with a yellow circle. Later in the night (03:37 UT on July 16), a flare from the black hole occurred (a zoom-in is shown in the insert at the lower left) and the first-ever infrared spectrum of this phenomenon was observed. It was also possible to register for the first time in great detail the near-infrared spectra of young massive stars orbiting the black hole; some of these are shown in the inserts at the upper right; stars are identified by their "S"-designations. The lower right inserts show the spectra of stars in "IRS 13 E", a very compact cluster of very young and massive stars, located about 3.5 arcsec to the south-west of the black hole. The wavefront reference ("guide") star employed for these AO observations is comparably faint (red magnitude approx. 15), and it is located about 20 arcsec away from the field centre. The seeing during these observations was about 0.6 arcsec. The width of the slitlets was 0.025 arcsec. See Appendix C for more detail. ESO PR Photo 24o/04 shows the distant galaxy pair BX 404/405, as recorded in the K-band (wavelength 2 µm, centered on the redshifted H-alpha line), without AO-correction because of the lack of a nearby, sufficiently bright "guide" star. The width of each slitlet was 0.25 arcsec and the seeing about 0.6 arcsec. The integration time on the galaxy was 2 hours "on-source". The image shown has been reconstructed by combining all of the spectral elements around the H-alpha spectral line. The spectrum of BX 405 (upper right) clearly reveals signs of a velocity shear while that of BX 404 does not. This may be a sign of rotation, a possible signature of a young disc in this galaxy. More information can be found in Appendix G. Until July 22, test observations on a number of celestial objects were performed in order to tune the instrument, to evaluate the performance and to demonstrate its astronomical capabilities. In particular, spectra were obtained of various highly interesting celestial objects and sky regions. 
Details about these observations (and some images obtained with the AO Module alone) are available in the Appendices to this Press Release: * a video of the motion of the moon Linus around the main-belt asteroid (22) Kalliope, providing the best view of this binary system obtained so far (Appendix B), * images and first-ever detailed spectra of many of the stars that move near the massive black hole at the Galactic Centre, with crucial information on the nature of the individual stars and their motions (Appendix C), * images and spectra of the heavily dust-obscured, active centre of the Circinus galaxy, one of the closest active galaxies, showing ordered rotation in this area and distinct broad and narrow components of the spectral line of Ca7+-ions (Appendix D), * images and spectra of the less obscured central area of NGC 7469, a more distant active galaxy, with spectral lines of molecular hydrogen and carbon monoxide showing a very different distribution of these species (Appendix E), * images and spectra of the Infrared Luminous Galaxy (ULIRG) NGC 6240, a typical galaxy merger, displaying important differences between the two nuclei (Appendix F), and * images and spectra of the young starforming galaxies BX 404/405, casting more light on the formation of disks in spiral galaxies (Appendix G) The SINFONI Teams ESO PR Photo 24g/04 ESO PR Photo 24g/04 Members of the SINFONI Adaptive Optics Commissioning Team [Preview - JPEG: 646 x 400 pix - 198k] [Normal - JPEG: 1291 x 800 pix - 618k] ESO PR Photo 24h/04 ESO PR Photo 24h/04 Members of the SPIFFI Commissioning Team [Preview - JPEG: 491 x 400 pix - 193k] [Normal - JPEG: 982 x 800 pix - 482k] Captions: ESO PR Photo 24g/04 Members of the SINFONI Adaptice Optics Commissioning Team in the VLT Control Room in the night between June 7 - 8, 2004. From left to right and top to bottom: Thomas Szeifert, Sebastien Tordo, Stefan Stroebele, Jerome Paufique, Chris Lidman, Robert Donaldson, Enrico Fedrigo, Markus Kissler Patig, Norbert Hubin, Henri Bonnet. ESO PR Photo 24h/04: Members of the SPIFFI Commissioning Team on August 17. From left to right, Roberto Abuter, Frank Eisenhauer, Andrea Gilbert and Matthew Horrobin. The first SINFONI results have been greeted with enthusiasm, in particular by the team of astronomers and engineers from the consortium of German and Dutch institutes and ESO who worked on the development of SINFONI for nearly 7 years. Some of the members of the Commissioning Teams are depicted in PR Photos 24g/04 and 24h/04; in addition to the SPIFFI team members present on the second photo, Walter Bornemann, Reinhard Genzel, Hans Gemperlein, Stefan Huber have also been working on the reintegration/commissioning in Paranal. Notes [1] This press release is issued in coordination between ESO, the Max-Planck-Institute for Extraterrestrial Physics (MPE) in Garching, Germany, and the Nederlandse Onderzoekschool Voor Astronomie in Leiden, The Netherlands. A German version is available at http://www.mpg.de/bilderBerichteDokumente/dokumentation/pressemitteilungen/2004/pressemitteilung20040824/index.html and a Dutch version at http://www.astronomy.nl/inhoud/pers/persberichten/30_08_04.html. 
[2] The SINFONI team consists of Roberto Abuter, Andrew Baker, Walter Bornemann, Ric Davies, Frank Eisenhauer (SPIFFI Principal Investigator), Hans Gemperlein, Reinhard Genzel (MPE Director), Andrea Gilbert, Armin Goldbrunner, Matthew Horrobin, Stefan Huber, Christof Iserlohe, Matthew Lehnert, Werner Lieb, Dieter Lutz, Nicole Nesvadba, Claudia Röhrle, Jürgen Schreiber, Linda Tacconi, Matthias Tecza, Niranjan Thatte, Harald Weisz (Max-Planck-Institut für Extraterrestrische Physik, Garching, Germany), Anthony Brown, Paul van der Werf (NOVA, Leiden, The Netherlands), Eddy Elswijk, Johan Pragt, Jan Kragt, Gabby Kroes, Ton Schoenmaker, Rik ter Horst (ASTRON, Dwingeloo, The Netherlands), Henri Bonnet (SINFONI Project Manager), Roberto Castillo, Ralf Conzelmann, Romuald Damster, Bernard Delabre, Christophe Dupuy, Robert Donaldson, Christophe Dumas, Enrico Fedrigo, Gert Finger, Gordon Gillet, Norbert Hubin (Head of Adaptive Optics Dept.), Andreas Kaufer, Franz Koch, Johann Kolb, Andrea Modigliani, Guy Monnet (Head of Telescope Systems Division), Chris Lidman, Jochen Liske, Jean Louis Lizon, Markus Kissler-Patig (SINFONI Instrument Scientist), Jerome Paufique, Juha Reunanen, Silvio Rossi, Riccardo Schmutzer, Armin Silber, Stefan Ströbele (SINFONI System Engineer), Thomas Szeifert, Sebastien Tordo, Leander Mehrgan, Joerg Stegmeier, Reinhold Dorn (European Southern Observatory). Contacts Frank Eisenhauer Max-Planck-Institut für Extraterrestrische Physik (MPE) Garching, Germany Phone: +49-89-30000-3563 Email: eisenhau@mpe.mpg.de Paul van der Werf Leiden Observatory Leiden, The Netherlands Phone: +31-71-5275883 Email: pvdwerf@strw.leidenuniv.nl Henri Bonnet European Southern Observatory (ESO) Email: hbonnet@eso.org Reinhard Genzel Max-Planck-Institut für Extraterrestrische Physik (MPE) Garching, Germany Phone: +49-89-30000-3280 Email: Norbert Hubin European Southern Observatory (ESO) Email: nhubin@eso.org Appendix A: Integral Field Spectroscopy as a Powerful Discovery Tool ESO PR Photo 24i/04 ESO PR Photo 24i/04 How Integral Field Spectroscopy Works [Preview - JPEG: 400 x 425 pix - 127k] [Normal - JPEG: 800 x 850 pix - 366k] Caption: ESO PR Photo 24i/04 shows the principle of Integrated Field Spectroscopy (IFS). The detailed explanation is found in the text. How does SINFONI work? What is Integral Field Spectroscopy (IFS)? The idea of IFS is to obtain a spectrum of each defined spatial element ("spaxel") in the field-of-view. Several techniques to do this are available - in SINFONI, the slicer principle is applied. This involves (PR Photo 24i/04) that * the two-dimensional field-of-view is cut into slices, the so-called slitlets (short slits in contrast to normal long-slit spectroscopy), * the slitlets are then arranged next to each other to form a pseudo-long-slit, * a grating is used to disperse the light, and * the photons are detected with a Near-InfraRed detector. Following data reduction, the set of generated spectra can be re-arranged in the computer to form a 3-dimensional "data cube" of two spatial, and one wavelength dimension. Thus the term "3D-Spectroscopy" is sometimes used for IFS. 
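As a schematic illustration of that rearrangement step, the NumPy sketch below turns spectra recorded along a pseudo-long-slit into an (x, y, wavelength) cube. The 64 x 32 spaxel format matches the SPIFFI figures quoted earlier; the number of wavelength channels and the reduced-frame layout are simplifying assumptions, not the real SINFONI pipeline.

```python
"""Schematic illustration of the 'data cube' idea from Appendix A: spectra
recorded along a pseudo-long-slit (slitlets stacked end to end) are rearranged
into a cube with two spatial axes and one wavelength axis. Dimensions and the
reduced-frame layout are simplifying assumptions."""
import numpy as np

n_slitlets, slit_len, n_lambda = 32, 64, 512   # 32 slitlets of 64 spaxels each

# One reduced frame: 2048 spectra stacked along the pseudo-long-slit,
# wavelength running along the second axis.
pseudo_long_slit = np.random.rand(n_slitlets * slit_len, n_lambda)

# Rearrange into a cube: (x along the slitlet, y = slitlet index, wavelength).
cube = pseudo_long_slit.reshape(n_slitlets, slit_len, n_lambda).transpose(1, 0, 2)
print(cube.shape)              # (64, 32, 512): one spectrum per spaxel

line_image = cube[:, :, 256]   # an "image" at a single wavelength channel
spectrum = cube[40, 10, :]     # the full spectrum of a single spaxel
```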
Appendix B: Linus' orbital motion around Kalliope ESO PR Photo 24j/04 ESO PR Photo 24j/04 Asteroid Kalliope and its Moon Linus [Preview - JPEG: 400 x 427 pix - 50k] [Normal - JPEG: 800 x 854 pix - 136k] ESO PR Video 07/04 ESO PR Video 07/04 The Motion of Linus around Kalliope [MPG: 800 x 800 pix - 128k] [AVI : 800 x 800 pix - 176k] [Animated GIF : 800 x 800 pix - 592k] Caption: ESO PR Photo 24j/04 and Video Clip 07/04 show the best-ever images of the moon Linus orbiting Asteroid (22) Kalliope. They were obtained with the SINFONI Adaptive Optics Module and a high-angular-resolution near-infrared Test Camera during commissioning in June 2004. At minimum separation, the satellite approaches Kalliope to 0.33 arcsec, i.e. the angle under which a 1 Euro coin is seen at a distance of 15 kilometers. At maximum separation, the angular distance is nearly twice as large. For clarity, the brightness of the asteroid has been artificially decreased by a factor of 15, to the level of the moon. This image processing technique also makes it possible to perceive the variation of the asteroid's shape as Kalliope spins around its own axis with a period of 4.15 hours. The asteroid, with an angular diameter of 0.11 arcsec, is barely resolved in these VLT images (resolution 0.06 arcsec at wavelength 2.2 µm). The satellite measures about 50 km across and orbits Kalliope at a distance of about 1000 kilometers. ESO Video Clip 07/04 shows the 3.6-day orbital motion of the satellite (moon) Linus around the main-belt asteroid (22) Kalliope. Kalliope orbits the Sun between Mars and Jupiter; it measures about 180 km across and the diameter of its moon is 50 km. This system was observed with the SINFONI AO Module for short periods over four consecutive nights. Linus moves around Kalliope in a circular orbit, at a distance of 1000 km and with a direction of motion similar to the rotation of Kalliope (prograde rotation); the orbital plane of the moon was seen under a 60°-angle with respect to the line-of-sight. The unobserved parts of this orbit are indicated by a dotted line. A hypothetical observer on the surface of Kalliope would live in a strange world: the days would be 14 hours long, and the sky would be filled by a moon five times bigger than our own! The brightness changes of the Linus images are due to variations in the sky conditions at the time of the observations. Rapid changes in the atmosphere result in variations in the sharpness of the corrected images. During the first two nights, seeing conditions were very good, but less so during the last two nights; this can be seen as a slight loss of sharpness of the corresponding satellite images. The discovery of this asteroid satellite, named Linus after the son of Kalliope, the Greek muse of heroic poetry, was first reported in September 2001 by a group of astronomers using the Canada-France-Hawaii Telescope on Mauna Kea (Hawaii, USA). Although Kalliope was previously believed to consist of metal-rich material, the discovery of Linus allowed the scientists to determine the mean density of Kalliope as ~ 2 g/cm³, a rather low value that is not consistent with a metal-rich object. Kalliope is now believed to be a "rubble-pile" stony asteroid. Its porous interior is due to a catastrophic collision with another, smaller asteroid early in its history, which also gave birth to Linus. 
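The ~2 g/cm³ mean density quoted above follows from combining Linus's orbit (radius about 1000 km, period about 3.6 days, as given in the caption) with Kalliope's roughly 180 km diameter via Kepler's third law. A rough check, assuming a circular orbit and a spherical asteroid:

```python
"""Rough check of the quoted ~2 g/cm^3 mean density of (22) Kalliope, combining
Linus's orbit (values quoted above) with Kepler's third law. A circular orbit
and a spherical asteroid are simplifying assumptions."""
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
a = 1000e3               # orbital radius of Linus in metres (~1000 km)
T = 3.6 * 86400.0        # orbital period in seconds (~3.6 days)
diameter = 180e3         # Kalliope's diameter in metres (~180 km)

mass = 4 * math.pi**2 * a**3 / (G * T**2)          # Kepler's third law
volume = 4.0 / 3.0 * math.pi * (diameter / 2) ** 3
density = mass / volume                            # kg/m^3

print(f"mass ~ {mass:.2e} kg, density ~ {density / 1000:.1f} g/cm^3")
# roughly 2 g/cm^3, consistent with the 'rubble-pile' interpretation above
```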
Other references related to Kalliope can be found in the International Astronomical Union Circular (IAUC) 7703 (2001) and a research article "A low density M-type asteroid in the main-belt" by Margot and Brown (Science 300, 193, 2003). Appendix C: Stars at the Galactic Centre and a Flare from the Black Hole ESO PR Photo 24k/04 ESO PR Photo 24k/04 SINFONI Observations of the Galactic Centre [Preview - JPEG: 427 x 400 pix - 213k] [Normal - JPEG: 854 x 800 pix - 511k] Caption: ESO PR Photo 24k/04: The coloured image (background) shows a three-band composite image (H, K, and L-bands) obtained with the AO imager NACO on the 8.2-m VLT Yepun telescope. On July 15, 2004, the new SINFONI instrument, mounted at the Cassegrain focus of the same telescope, observed the innermost region (the central 1 x 1 arcsec) of the Milky Way Galaxy in the combined H+K band (1.45 - 2.45 µm) during a total of 110 min "on-source". The insert (upper left) shows the immediate neighbourhood of the central black hole as seen with SINFONI. The position of the black hole is marked with a yellow circle. Later in the night (03:37 UT on July 16), a flare from the black hole occurred (a zoom-in is shown in the insert at the lower left) and the first-ever infrared spectrum of this phenomenon was observed. It was also possible to register for the first time in great detail the near-infrared spectra of young massive stars orbiting the black hole; some of these are shown in the inserts at the upper right; stars are identified by their "S"-designations. The lower right inserts show the spectra of stars in "IRS 13 E", a very compact cluster of very young and massive stars, located about 3.5 arcsec to the south-west of the black hole. The wavefront reference ("guide") star employed for these AO observations is comparably faint (red magnitude approx. 15), and it is located about 20 arcsec away from the field centre. The seeing during these observations was about 0.6 arcsec. The width of the slitlets was 0.025 arcsec. The Milky Way Centre is a unique laboratory for studying physical processes that are thought to be common in galactic nuclei. The Galactic Centre is not only the best studied case of a supermassive black hole, but the region also hosts the largest population of high-mass stars in the Galaxy. Diffraction-limited near-IR integral field spectroscopy offers a unique opportunity for exploring in detail the physical phenomena responsible for the active phases of this supermassive black hole, and for studying the dynamics and evolution of the star cluster in its immediate vicinity. Earlier observations with the VLT have been described in ESO PR 17/02 and ESO PR 26/03. With the new SINFONI observations, some of which are displayed in PR Photo 24k/04, it was possible to obtain for the first time very detailed near-infrared spectra of several young and massive stars orbiting the black hole at the centre of our galaxy. The presence of spectral signatures from ionised hydrogen (the Brackett-gamma line) and helium clearly classifies these stars as young, massive early-type stars. They are comparatively short-lived, and the large fraction of such stars in the immediate vicinity of a supermassive black hole is a mystery. The first SINFONI observations of the stellar populations in the innermost Galactic Centre region will now help to explain the origin and formation process of those stars. Moreover, the observed spectral features allow measuring their motions along the line-of-sight (the "radial velocities"). 
Combining them with the motions in the sky (the "proper motions") obtained from previous observations with the NACO instrument (ESO PR 17/02), it is now possible to determine all orbital parameters for the "S"-stars. This in turn makes it possible to measure directly the mass and the distance of the supermassive black hole at the centre of our galaxy. But not only this! Even more exciting, it became possible to register for the first time the infrared spectrum of a flare from the Galactic Centre black hole (cf. ESO PR 26/03). From the earlier imaging observations, it is known that such outbursts occur approximately once every 4 hours, giving us a uniquely detailed glimpse of a black hole feeding on left-over gas in its close surroundings. It is only the innovative technique of SINFONI - providing spectra for every pixel in a diffraction-limited image - that made it possible to capture the infrared spectrum of such a flare. Such spectra from SINFONI will soon allow to understand better the physics and mechanisms involved in the flare emission. Appendix D: The Active Circinus Galaxy ESO PR Photo 24l/04 ESO PR Photo 24l/04 SINFONI Observations of the Circinus Galaxy [Preview - JPEG: 824 x 400 pix - 324k] [Normal - JPEG: 412 x 800 pix - 131k] Caption: ESO PR Photo 24l/04: The Circinus galaxy - one of the nearest galaxies with an active centre (AGN) - was observed in the K-band (wavelength 2 µm) using the nucleus to guide the SINFONI AO Module. The seeing was 0.5 arcsec and the width of each slitlet 0.025 arcsec; the total integration time on the galaxy was 40 min. At the top is a K-band image of the central arcsec of the galaxy (left insert) and a K-band spectrum of the nucleus (right). In the lower half are images (left) in the light of ionised hydrogen (the Brackett-gamma line) and molecular hydrogen lines (H2), together with their combined rotation curve (middle), as well as images of the broad and narrow components of the high excitation [Ca VIII] spectral line (right). The false-colours in the images represent regions of different surface brightness. At a distance of about 13 million light-years, the Circinus galaxy is one of the nearest galaxies with a very active black hole at the centre. It is seen behind a highly obscured sky field, only 3° from the Milky Way main plane in the southern constellation of this name ("The Pair of Compasses"). Using the nucleus of this galaxy to guide the AO Module, SINFONI was able to zoom in on the central arcsec region - only 60 light-years across - and to map the immediate environment of the black hole at the centre, cf. PR Photo 24l/04. The K-band (wavelength 2 µm) image (insert at the upper left) displays a very compact structure; the emission recorded at this wavelength comes from hot dust heated by radiation from the accretion disc around the black hole. However, as may be seen in the two inserts below, both the emission from ionized hydrogen (the Brackett-gamma line) and molecular hydrogen (H2) are more extended, up to about 30 light-years. As these spectral lines (cf. the spectral tracing at the upper right) are quite narrow and show ordered rotation up to ±40km/s, it is likely that they arise from star formation in a disk around the central black hole. 
A surprise from the SINFONI observations is that the spectral line of Ca7+-ions (seven times ionised calcium atoms, or [Ca VIII], which are produced by the ionizing effect of very energetic ultraviolet radiation) in this area appears to have distinct broad and narrow components (images at the lower right). The broad component is centred on the region around the black hole, and probably arises in the so-called "Broad-Line Region". The narrow component is displaced to the north-west and most likely indicates a region where there is a direct line-of-sight from the black hole to some gas clouds. Appendix E: The Active Nucleus in NGC 7469 ESO PR Photo 24m/04 ESO PR Photo 24m/04 SINFONI Observations of NGC 7469 [Preview - JPEG: 470 x 400 pix - 116k] [Normal - JPEG: 939 x 800 pix - 324k] Caption: ESO PR Photo 24m/04: NGC 7469 was observed in the K-band (wavelength 2 µm) using the nucleus to guide the adaptive optics. The width of each slitlet was 0.025 arcsec and the seeing was 1.1 arcsec. The total integration time on the galaxy was 70 min "on-source". To the upper left is a K-band image (2 µm) of the central arcsec of NGC 7469 and to the upper right, the spectrum of the nucleus. To the lower left is an image of the molecular hydrogen line, together with its rotation curve. There is an image in the light of ionized hydrogen (Brackett-gamma line) at the lower middle and an image of the CO 2-0 absorption bandhead which traces young stars (lower right). The galaxy NGC 7469 (seen north of the celestial equator in the constellation Pegasus) also hosts an active galactic nucleus, but in contrast to the Circinus galaxy, it is relatively unobscured. Since NGC 7469 is at a much larger distance, about 225 million light-years, the 0.15 arcsec resolution achieved by SINFONI here corresponds to about 165 light-years. The K-band image (PR Photo 24m/04) shows the bright, compact nucleus of this galaxy, and the spectrum displays very broad lines of ionized hydrogen (the Brackett-gamma line) and helium. This emission arises in the "Broad-Line" region which is still unresolved, as shown by the Brackett-gamma image. On the other hand, the molecular hydrogen extends up to 650 light-years from the centre and shows an ordered rotation. In contrast, the image obtained in the light of CO-molecules - which directly traces late-type stars typical of starbursts - appears very compact. These results confirm those obtained by means of earlier AO observations, but with the new SINFONI data corresponding to various spectral lines, the detailed, two-dimensional structure and motions close to the central black hole are now clearly revealed for the first time. Appendix F: The Galaxy Merger NGC 6240 ESO PR Photo 24n/04 ESO PR Photo 24n/04 SINFONI Observations of NGC 6240 [Preview - JPEG: 506 x 400 pix - 96k] [Normal - JPEG: 1011 x 800 pix - 277k] Caption: ESO PR Photo 24n/04: The galaxy merger system NGC 6240 was observed with SINFONI in the K-band (wavelength 2 µm). This object has two nuclei; the image of the southern one is also shown enlarged, together with the corresponding spectrum. The width of each slitlet was 0.025 arcsec and the seeing was 0.8 arcsec. The total integration time on the galaxy was 80 min. The false-colours in the images represent regions of different surface brightness. The infrared-luminous galaxy NGC 6240 in the constellation Ophiuchus (The Serpent-holder) is in many ways the prototype of a gas-rich, infrared-(ultra-)luminous galaxy merger. 
This system has two rapidly rotating, massive bulges/nuclei at a projected angular separation of 1.6 arcsec. Each of them contains a powerful starburst region and a luminous, highly obscured, X-ray-emitting supermassive black hole. As such, NGC 6240 is probably a nearby example of dust- and gas-rich galaxy merger systems seen at larger distances. NGC 6240 is also the most luminous, nearby source of molecular hydrogen emission. It was observed in the K-band (wavelength 2 µm), using a faint star at a distance of about 35 arcsec as the AO "guide" star. The starburst activity is traced by the ionized gas and occurs mostly at the two nuclei in regions measuring around 650 light-years across. The distribution of the molecular gas is very different. It follows a complex spatial and dynamical pattern with several extended streamers. The high-resolution SINFONI data now make it possible - for the first time - to investigate the distribution and motion of the molecular gas, as well as the stellar population in this galaxy with a "resolution" of about 80 light-years. Appendix G: Motions in the Young Star-Forming Galaxies BX 404/405 ESO PR Photo 24o/04 ESO PR Photo 24o/04 SINFONI Observations of the Distant Galaxy Pair BX 404/405 [Preview - JPEG: 481 x 400 pix - 86k] [Normal - JPEG: 962 x 800 pix - 251k] Caption: ESO PR Photo 24o/04 shows the distant galaxy pair BX 404/405, as recorded in the K-band (wavelength 2 µm, centered on the redshifted H-alpha line), without AO-correction because of the lack of a nearby, sufficiently bright "guide" star. The width of each slitlet was 0.25 arcsec and the seeing about 0.6 arcsec. The integration time on the galaxy was 2 hours "on-source". The image shown has been reconstructed by combining all of the spectral elements around the H-alpha spectral line. The spectrum of BX 405 (upper right) clearly reveals signs of a velocity shear while that of BX 404 does not. This may be a sign of rotation, a possible signature of a young disc in this galaxy. How and when did the discs in spiral galaxies like the Milky Way form? This is one of the longest-standing puzzles in modern cosmology. Two general models presently describe how disk galaxies may form. One is based on a scenario in which there is a gentle collapse of gas clouds that collide and lose momentum. They sink towards a "centre", thereby producing a disc of gas in which stars are formed. The other implies that galaxies grow through repeated mergers of smaller gas-rich galaxies. Together they first produce a spherical mass distribution at the centre and any remaining gas then settles into a disk. Recent studies of stars in the Milky Way system and nearby spiral galaxies suggest that the discs now present in these systems formed about 10,000 million years ago. This corresponds to the epoch when we observe galaxies at redshifts of about 1.5 - 2.5. Interestingly, studies of galaxies at these distances seem consistent with current ideas about when disks may have formed, and there is some evidence that most of the mass in the galaxies was also assembled at that time. In any case, the most direct way to verify such a connection is to observe galaxies at redshifts 1.5-2.5, in order to elucidate whether their observed properties are consistent with velocity patterns of rotating disks of gas and stars. This would be visible as a "velocity shear", i.e., a significant difference in velocity of neighbouring regions. 
In addition, such observations may provide a good test of the above-mentioned hypotheses for how discs may have formed. Various groups of astrophysicists in the US and Europe have developed observational selection criteria which may be used to identify galaxies with properties similar to those expected for young disc galaxies. Observations with SINFONI were made of one of these objects, the galaxy pair BX 404/405 discovered by a group of astronomers at Caltech (USA). For BX 405, clear signs were found of a "velocity shear" like that expected for rotation of a forming disk, but the other object does not show this. It may thus be that the properties of star-forming galaxies at this epoch are quite complex and that only some of them have young disks.

  1. 76 FR 34634 - Federal Acquisition Regulation; Prioritizing Sources of Supplies and Services for Use by the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ... contracts before commercial sources in the open market. The proposed rule amends FAR 8.002 as follows: The... requirements for supplies and services from commercial sources in the open market. The proposed FAR 8.004 would... subpart 8.6). (b) Commercial sources (including educational and non-profit institutions) in the open...

  2. A Steganographic Embedding Undetectable by JPEG Compatibility Steganalysis

    DTIC Science & Technology

    2002-01-01

    Abstract. Steganography and steganalysis of digital images is a cat-and-mouse game. In recent work, Fridrich, Goljan and Du introduced a method...proposed embedding method. 1 Introduction Steganography and steganalysis of digital images is a cat-and-mouse game. Ever since Kurak and McHugh's seminal...paper on LSB embeddings in images [10], various researchers have published work on either increasing the payload, improving the resistance to

  3. C2 Failures: A Taxonomy and Analysis

    DTIC Science & Technology

    2013-06-01

    2, pp. 171-199. Huber, Reiner, Tor Langsaeter, Petra Eggenhofer, Fernando Freire, Antonio Grilo, Anne-Marie Grisogono, Jose Martine, Jens Roemer... Martin (2012). Mission Command White Paper. Washington, D.C.: U.S. Department of Defense, Office of the Chairman of the Joint Chiefs of Staff. http...e1352384704110.jpeg?w=625&h=389 The Punchline "What we've got here, is failure to communicate" Strother Martin as "The Captain," Cool Hand Luke, (Warner

  4. A seasonal comparison of surface sediment characteristics in Chincoteague Bay, Maryland and Virginia, USA

    USGS Publications Warehouse

    Ellis, Alisha M.; Marot, Marci E.; Wheaton, Cathryn J.; Bernier, Julie C.; Smith, Christopher G.

    2016-02-03

    This report is an archive for sedimentological data derived from the surface sediment of Chincoteague Bay. Data are available for the spring (March/April 2014) and fall (October 2014) samples collected. Downloadable data are provided as Excel spreadsheets and as JPEG files. Additional files include ArcGIS shapefiles of the sampling sites, detailed results of sediment grain-size analyses, and formal Federal Geographic Data Committee metadata (data downloads).

  5. On LSB Spatial Domain Steganography and Channel Capacity

    DTIC Science & Technology

    2008-03-21

    reveal the hidden information should not be taken as proof that the image is now clean. The survivability of LSB-type spatial domain steganography ...the mindset that JPEG compressing an image is sufficient to destroy the steganography for spatial domain LSB-type stego. We agree that JPEGing...modeling of 2-bit LSB steganography shows that theoretically there is a non-zero stego payload possible even though the image has been JPEGed. We wish to
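    Both DTIC records above concern least-significant-bit (LSB) embedding in the spatial domain. The sketch below shows only the generic technique (1 bit per pixel in an 8-bit array), not the specific schemes or payload models analysed in these reports; as the reports discuss, such embeddings are generally not expected to survive JPEG recompression intact.

```python
"""Generic 1-bit-per-pixel LSB embedding/extraction in the spatial domain,
for illustration only -- not the specific schemes analysed in these reports."""
import numpy as np

def embed_lsb(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    stego = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit      # overwrite the least significant bit
    return stego.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bits: int) -> list[int]:
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

cover = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)   # toy 8-bit image
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, message)
assert extract_lsb(stego, len(message)) == message
print("recovered:", extract_lsb(stego, len(message)))
```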

  6. Biosecurity and Open-Source Biology: The Promise and Peril of Distributed Synthetic Biological Technologies.

    PubMed

    Evans, Nicholas G; Selgelid, Michael J

    2015-08-01

    In this article, we raise ethical concerns about the potential misuse of open-source biology (OSB): biological research and development that progresses through an organisational model of radical openness, deskilling, and innovation. We compare this organisational structure to that of the open-source software model, and detail salient ethical implications of this model. We demonstrate that OSB, in virtue of its commitment to openness, may be resistant to governance attempts.

  7. Using R to implement spatial analysis in open source environment

    NASA Astrophysics Data System (ADS)

    Shao, Yixi; Chen, Dong; Zhao, Bo

    2007-06-01

    R is an open source (GPL) language and environment for spatial analysis, statistical computing and graphics; it provides a wide variety of statistical and graphical techniques and is highly extensible. It plays an important role in spatial analysis within the Open Source environment. Implementing spatial analysis in the Open Source environment, which we call Open Source geocomputation, means using the R data analysis language integrated with GRASS GIS and MySQL or PostgreSQL. This paper explains the architecture of the Open Source GIS environment and emphasizes the role R plays in spatial analysis. Furthermore, an illustration of the functions of R is given through the project of constructing CZPGIS (Cheng Zhou Population GIS), supported by the Changzhou Government, China. In this project we use R to implement geostatistics in the Open Source GIS environment, evaluating the spatial correlation of land price and estimating it by kriging interpolation. We also use R integrated with MapServer and PHP to show how R and other Open Source software cooperate in a WebGIS environment, which demonstrates the advantages of using R for spatial analysis in an Open Source GIS environment. Finally, we point out that the packages for spatial analysis in R are still scattered and that limited memory remains a bottleneck when a large number of clients connect at the same time. Further work is therefore to organize the extensive packages or design normative packages, and to make R cooperate better with commercial software such as ArcIMS. We also look forward to developing packages for land price evaluation.

  8. [The use of open source software in graphic anatomic reconstructions and in biomechanic simulations].

    PubMed

    Ciobanu, O

    2009-01-01

    The objective of this study was to obtain three-dimensional (3D) images and to perform biomechanical simulations starting from DICOM images obtained by computed tomography (CT). Open source software was used to prepare digitized 2D images of tissue sections and to create 3D reconstructions from the segmented structures. Finally, the 3D images were used in open source software in order to perform biomechanical simulations. This study demonstrates the applicability and feasibility of currently available open source software for 3D reconstruction and biomechanical simulation. The use of open source software may improve the efficiency of investments in imaging technologies and in CAD/CAM technologies for implant and prosthesis fabrication, which otherwise require expensive specialized software.
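    The processing chain described above (CT DICOM slices to a segmented 3D surface suitable for simulation) can be sketched with open-source Python packages. The paper does not name its specific tools, so pydicom and scikit-image below are assumed stand-ins, and the file path and Hounsfield threshold are illustrative.

```python
"""Minimal sketch of the DICOM -> 3D reconstruction step described above,
using pydicom and scikit-image as assumed stand-ins (the paper does not name
the specific open-source packages). Paths and threshold are illustrative."""
import glob
import numpy as np
import pydicom
from skimage import measure

# Load a CT series (one DICOM file per slice) and stack it into a volume.
slices = [pydicom.dcmread(f) for f in sorted(glob.glob("ct_series/*.dcm"))]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])

# Convert raw values to Hounsfield units and extract a bone-like isosurface.
hu = volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)
verts, faces, normals, values = measure.marching_cubes(hu, level=300.0)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} faces")
# verts/faces can then be exported (e.g. as STL) for biomechanical simulation.
```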

  9. Web GIS in practice IV: publishing your health maps and connecting to remote WMS sources using the Open Source UMN MapServer and DM Solutions MapLab

    PubMed Central

    Boulos, Maged N Kamel; Honda, Kiyoshi

    2006-01-01

    Open Source Web GIS software systems have reached a stage of maturity, sophistication, robustness and stability, and usability and user friendliness rivalling that of commercial, proprietary GIS and Web GIS server products. The Open Source Web GIS community is also actively embracing OGC (Open Geospatial Consortium) standards, including WMS (Web Map Service). WMS enables the creation of Web maps that have layers coming from multiple different remote servers/sources. In this article we present one easy to implement Web GIS server solution that is based on the Open Source University of Minnesota (UMN) MapServer. By following the accompanying step-by-step tutorial instructions, interested readers running mainstream Microsoft® Windows machines and with no prior technical experience in Web GIS or Internet map servers will be able to publish their own health maps on the Web and add to those maps additional layers retrieved from remote WMS servers. The 'digital Asia' and 2004 Indian Ocean tsunami experiences in using free Open Source Web GIS software are also briefly described. PMID:16420699
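    Because a WMS GetMap operation is just an HTTP GET with standardised parameters, MapServer (or any client) can cascade layers from remote WMS sources. The sketch below builds such a request with the Python standard library; the server URL, map file, layer name, and bounding box are placeholders, not values from the article.

```python
"""A WMS GetMap request is a plain HTTP GET with standard parameters, which is
what lets MapServer (or any client) pull layers from remote WMS sources.
The server URL, map file, layer name, and bounding box are placeholders."""
from urllib.parse import urlencode
from urllib.request import urlopen

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "health_facilities",        # placeholder layer name
    "STYLES": "",
    "SRS": "EPSG:4326",
    "BBOX": "95.0,-10.0,105.0,10.0",      # minx,miny,maxx,maxy (placeholder)
    "WIDTH": "600",
    "HEIGHT": "400",
    "FORMAT": "image/png",
}
url = "http://example.org/cgi-bin/mapserv?map=/maps/health.map&" + urlencode(params)
print(url)
# with open("layer.png", "wb") as f:
#     f.write(urlopen(url).read())        # uncomment to actually fetch the map image
```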

  10. Rapid development of medical imaging tools with open-source libraries.

    PubMed

    Caban, Jesus J; Joshi, Alark; Nagy, Paul

    2007-11-01

    Rapid prototyping is an important element in researching new imaging analysis techniques and developing custom medical applications. In the last ten years, the open source community and the number of open source libraries and freely available frameworks for biomedical research have grown significantly. What they offer are now considered standards in medical image analysis, computer-aided diagnosis, and medical visualization. A cursory review of the peer-reviewed literature in imaging informatics (indeed, in almost any information technology-dependent scientific discipline) indicates the current reliance on open source libraries to accelerate development and validation of processes and techniques. In this survey paper, we review and compare a few of the most successful open source libraries and frameworks for medical application development. Our dual intentions are to provide evidence that these approaches already constitute a vital and essential part of medical image analysis, diagnosis, and visualization and to motivate the reader to use open source libraries and software for rapid prototyping of medical applications and tools.

  11. Open-Source RTOS Space Qualification: An RTEMS Case Study

    NASA Technical Reports Server (NTRS)

    Zemerick, Scott

    2017-01-01

    NASA space-qualification of reusable off-the-shelf real-time operating systems (RTOSs) remains elusive due to several factors, notably (1) the diverse nature of RTOSs utilized across NASA, (2) the absence of a single NASA space-qualification criterion, of verification and validation (V&V) analysis, and of test beds, and (3) differing RTOS heritages, specifically open-source RTOSs versus closed, vendor-provided RTOSs. As a leader in simulation test beds, the NASA IV&V Program is poised to help jump-start and lead the space-qualification effort for the open source Real-Time Executive for Multiprocessor Systems (RTEMS) RTOS. RTEMS, as a case study, can be utilized as an example of how to qualify all RTOSs, particularly the reusable non-commercial (open-source) ones that are gaining usage and popularity across NASA. Qualification will improve the overall safety and mission assurance of RTOSs for NASA agency-wide usage. NASA's involvement in the space-qualification of an open-source RTOS such as RTEMS will drive the RTOS industry toward a more qualified and mature open-source RTOS product.

  12. Cyberscience and the Knowledge-Based Economy. Open Access and Trade Publishing: From Contradiction to Compatibility with Non-Exclusive Copyright Licensing

    ERIC Educational Resources Information Center

    Armbruster, Chris

    2008-01-01

    Open source, open content and open access are set to fundamentally alter the conditions of knowledge production and distribution. Open source, open content and open access are also the most tangible result of the shift towards e-science and digital networking. Yet, widespread misperceptions exist about the impact of this shift on knowledge…

  13. Learning from hackers: open-source clinical trials.

    PubMed

    Dunn, Adam G; Day, Richard O; Mandl, Kenneth D; Coiera, Enrico

    2012-05-02

    Open sharing of clinical trial data has been proposed as a way to address the gap between the production of clinical evidence and the decision-making of physicians. A similar gap was addressed in the software industry by their open-source software movement. Here, we examine how the social and technical principles of the movement can guide the growth of an open-source clinical trial community.

  14. Evaluation and selection of open-source EMR software packages based on integrated AHP and TOPSIS.

    PubMed

    Zaidan, A A; Zaidan, B B; Al-Haiqi, Ahmed; Kiah, M L M; Hussain, Muzammil; Abdulnabi, Mohamed

    2015-02-01

    Evaluating and selecting software packages that meet the requirements of an organization are difficult aspects of the software engineering process. Selecting the wrong open-source EMR software package can be costly and may adversely affect business processes and the functioning of the organization. This study aims to evaluate and select open-source EMR software packages based on multi-criteria decision-making. A hands-on study was performed in which a set of open-source EMR software packages were implemented locally on separate virtual machines to examine the systems more closely. Several measures were specified as the evaluation basis, and the systems were selected based on a set of metric outcomes using an integrated Analytic Hierarchy Process (AHP) and TOPSIS approach. The experimental results showed that the GNUmed and OpenEMR software provide a better basis, in terms of ranking scores, than the other open-source EMR software packages. Copyright © 2014 Elsevier Inc. All rights reserved.
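    The study feeds AHP-derived criterion weights into TOPSIS to rank the candidates. The sketch below is a generic TOPSIS implementation shown for context only; the decision matrix, weights, and criterion directions are made-up placeholders, not the paper's evaluation data.

```python
"""Generic TOPSIS ranking as used (with AHP-derived weights) in this study.
The decision matrix, weights, and criterion directions are placeholders."""
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True if larger is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)           # vector normalisation
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti, axis=1)
    return d_neg / (d_pos + d_neg)                           # closeness to ideal

# Three hypothetical EMR packages scored on four criteria
# (e.g. usability, interoperability, security, cost -- cost is "smaller is better").
scores = np.array([[7.0, 8.0, 6.0, 3.0],
                   [6.0, 7.0, 8.0, 2.0],
                   [8.0, 6.0, 7.0, 4.0]])
weights = np.array([0.35, 0.25, 0.25, 0.15])   # e.g. from an AHP pairwise comparison
benefit = np.array([True, True, True, False])

closeness = topsis(scores, weights, benefit)
for name, c in zip(["Package A", "Package B", "Package C"], closeness):
    print(f"{name}: {c:.3f}")
```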

  15. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.
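
    OMPC's own translation output is not reproduced in this record; purely to illustrate the kind of syntax adaptation involved, a small MATLAB function and a hand-written NumPy equivalent might look like the hypothetical example below (note that MATLAB's dimension argument 2 maps to NumPy's axis=1).

      import numpy as np

      # MATLAB original, for comparison:
      #   function y = rms_rows(X)
      #   y = sqrt(mean(X.^2, 2));
      def rms_rows(X):
          """Root-mean-square of each row, mirroring the MATLAB semantics."""
          X = np.asarray(X, dtype=float)
          return np.sqrt(np.mean(X ** 2, axis=1))

      print(rms_rows([[3.0, 4.0], [6.0, 8.0]]))   # [3.5355... 7.0710...]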

  16. Another Look at an Enigmatic New World

    NASA Astrophysics Data System (ADS)

    2005-02-01

    VLT NACO Performs Outstanding Observations of Titan's Atmosphere and Surface On January 14, 2005, the ESA Huygens probe arrived at Saturn's largest satellite, Titan. After a faultless descent through the dense atmosphere, it touched down on the icy surface of this strange world from where it continued to transmit precious data back to the Earth. Several of the world's large ground-based telescopes were also active during this exciting event, observing Titan before and near the Huygens encounter, within the framework of a dedicated campaign coordinated by the members of the Huygens Project Scientist Team. Indeed, large astronomical telescopes with state-of-the art adaptive optics systems allow scientists to image Titan's disc in quite some detail. Moreover, ground-based observations are not restricted to the limited period of the fly-by of Cassini and landing of Huygens. They hence complement ideally the data gathered by this NASA/ESA mission, further optimising the overall scientific return. A group of astronomers [1] observed Titan with ESO's Very Large Telescope (VLT) at the Paranal Observatory (Chile) during the nights from 14 to 16 January, by means of the adaptive optics NAOS/CONICA instrument mounted on the 8.2-m Yepun telescope [2]. The observations were carried out in several modes, resulting in a series of fine images and detailed spectra of this mysterious moon. They complement earlier VLT observations of Titan, cf. ESO Press Photos 08/04 and ESO Press Release 09/04. The highest contrast images ESO PR Photo 04a/05 ESO PR Photo 04a/05 Titan's surface (NACO/VLT) [Preview - JPEG: 400 x 712 pix - 64k] [Normal - JPEG: 800 x 1424 pix - 524k] ESO PR Photo 04b/05 ESO PR Photo 04b/05 Map of Titan's Surface (NACO/VLT) [Preview - JPEG: 400 x 651 pix - 41k] [Normal - JPEG: 800 x 1301 pix - 432k] Caption: ESO PR Photo 04a/05 shows Titan's trailing hemisphere [3] with the Huygens landing site marked as an "X". The left image was taken with NACO and a narrow-band filter centred at 2 microns. On the right is the NACO/SDI image of the same location showing Titan's surface through the 1.6 micron methane window. A spherical projection with coordinates on Titan is overplotted. ESO PR Photo 04b/05 is a map of Titan taken with NACO at 1.28 micron (a methane window allowing it to probe down to the surface). On the leading side of Titan, the bright equatorial feature ("Xanadu") is dominating. On the trailing side, the landing site of the Huygens probe is indicated. ESO PR Photo 04c/05 ESO PR Photo 04c/05 Titan, the Enigmatic Moon, and Huygens Landing Site (NACO-SDI/VLT and Cassini/ISS) [Preview - JPEG: 400 x 589 pix - 40k] [Normal - JPEG: 800 x 1178 pix - 290k] Caption: ESO PR Photo 04c/05 is a comparison between the NACO/SDI image and an image taken by Cassini/ISS while approaching Titan. The Cassini image shows the Huygens landing site map wrapped around Titan, rotated to the same position as the January NACO SDI observations. The yellow "X" marks the landing site of the ESA Huygens probe. The Cassini/ISS image is courtesy of NASA, JPL, Space Science Institute (see http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=36222). The coloured lines delineate the regions that were imaged by Cassini at differing resolutions. The lower-resolution imaging sequences are outlined in blue. Other areas have been specifically targeted for moderate and high resolution mosaicking of surface features. 
These include the site where the European Space Agency's Huygens probe has touched down in mid-January (marked with the yellow X), part of the bright region named Xanadu (easternmost extent of the area covered), and a boundary between dark and bright regions. ESO PR Photo 04d/05 ESO PR Photo 04d/05 Evolution of the Atmosphere of Titan (NACO/VLT) [Preview - JPEG: 400 x 902 pix - 40k] [Normal - JPEG: 800 x 1804 pix - 320k] Caption: ESO PR Photo 04d/05 is an image of Titan's atmosphere at 2.12 microns as observed with NACO on the VLT at three different epochs from 2002 till now. Titan's atmosphere exhibits seasonal and meteorological changes which can clearly be seen here : the North-South asymmetry - indicative of changes in the chemical composition in one pole or the other, depending on the season - is now clearly in favour of the North pole. Indeed, the situation has reversed with respect to a few years ago when the South pole was brighter. Also visible in these images is a bright feature in the South pole, found to be presently dimming after having appeared very bright from 2000 to 2003. The differences in size are due to the variation in the distance to Earth of Saturn and its planetary system. The new images show Titan's atmosphere and surface at various near-infrared spectral bands. The surface of Titan's trailing side is visible in images taken through narrow-band filters at wavelengths 1.28, 1.6 and 2.0 microns. They correspond to the so-called "methane windows" which allow to peer all the way through the lower Titan atmosphere to the surface. On the other hand, Titan's atmosphere is visible through filters centred in the wings of these methane bands, e.g. at 2.12 and 2.17 microns. Eric Gendron of the Paris Observatory in France and leader of the team, is extremely pleased: "We believe that some of these images are the highest-contrast images of Titan ever taken with any ground-based or earth-orbiting telescope." The excellent images of Titan's surface show the location of the Huygens landing site in much detail. In particular, those centred at wavelength 1.6 micron and obtained with the Simultaneous Differential Imager (SDI) on NACO [4] provide the highest contrast and best views. This is firstly because the filters match the 1.6 micron methane window most accurately. Secondly, it is possible to get an even clearer view of the surface by subtracting accurately the simultaneously recorded images of the atmospheric haze, taken at wavelength 1.625 micron. The images show the great complexity of Titan's trailing side, which was earlier thought to be very dark. However, it is now obvious that bright and dark regions cover the field of these images. The best resolution achieved on the surface features is about 0.039 arcsec, corresponding to 200 km on Titan. ESO PR Photo 04c/04 illustrates the striking agreement between the NACO/SDI image taken with the VLT from the ground and the ISS/Cassini map. The images of Titan's atmosphere at 2.12 microns show a still-bright south pole with an additional atmospheric bright feature, which may be clouds or some other meteorological phenomena. The astronomers have followed it since 2002 with NACO and notice that it seems to be fading with time. At 2.17 microns, this feature is not visible and the north-south asymmetry - also known as "Titan's smile" - is clearly in favour in the north. The two filters probe different altitude levels and the images thus provide information about the extent and evolution of the north-south asymmetry. 
Probing the composition of the surface ESO PR Photo 04e/05 ESO PR Photo 04e/05 Spectrum of Two Regions on Titan (NACO/VLT) [Preview - JPEG: 400 x 623 pix - 44k] [Normal - JPEG: 800 x 1246 pix - 283k] Caption: ESO PR Photo 04e/05 represents two of the many spectra obtained on January 16, 2005 with NACO and covering the 2.02 to 2.53 micron range. The blue spectrum corresponds to the brightest region on Titan's surface within the slit, while the red spectrum corresponds to the dark area around the Huygens landing site. In the methane band, the two spectra are equal, indicating a similar atmospheric content; in the methane window centred at 2.0 microns, the spectra show differences in brightness, but are in phase. This suggests that there is no real variation in the composition beyond different atmospheric mixings. ESO PR Photo 04f/05 ESO PR Photo 04f/05 Imaging Titan with a Tunable Filter (NACO Fabry-Perot/VLT) [Preview - JPEG: 400 x 718 pix - 44k] [Normal - JPEG: 800 x 1435 pix - 326k] Caption: ESO PR Photo 04f/05 presents a series of images of Titan taken around the 2.0 micron methane window probing different layers of the atmosphere and the surface. The images are currently under thorough processing and analysis so as to reveal any subtle variations in wavelength that could be indicative of the spectral response of the various surface components, thus allowing the astronomers to identify them. Because the astronomers have also obtained spectroscopic data at different wavelengths, they will be able to recover useful information on the surface composition. The Cassini/VIMS instrument explores Titan's surface in the infrared range and, being so close to this moon, it obtains spectra with a much better spatial resolution than what is possible with Earth-based telescopes. However, with NACO at the VLT, the astronomers have the advantage of observing Titan with considerably higher spectral resolution, and thus to gain more detailed spectral information about the composition, etc. The observations therefore complement each other. Once the composition of the surface at the location of the Huygens landing is known from the detailed analysis of the in-situ measurements, it should become possible to learn the nature of the surface features elsewhere on Titan by combining the Huygens results with more extended cartography from Cassini as well as from VLT observations to come. More information Results on Titan obtained with data from NACO/VLT are in press in the journal Icarus ("Maps of Titan's surface from 1 to 2.5 micron" by A. Coustenis et al.). Previous images of Titan obtained with NACO and with NACO/SDI are accessible as ESO PR Photos 08/04 and ESO PR Photos 11/04. See also these Press Releases for additional scientific references.

  17. The validity of open-source data when assessing jail suicides.

    PubMed

    Thomas, Amanda L; Scott, Jacqueline; Mellow, Jeff

    2018-05-09

    The Bureau of Justice Statistics' Deaths in Custody Reporting Program is the primary source for jail suicide research, though the data is restricted from general dissemination. This study is the first to examine whether jail suicide data obtained from publicly available sources can help inform our understanding of this serious public health problem. Of the 304 suicides that were reported through the DCRP in 2009, roughly 56 percent (N = 170) of those suicides were identified through the open-source search protocol. Each of the sources was assessed based on how much information was collected on the incident and the types of variables available. A descriptive analysis was then conducted on the variables that were present in both data sources. The four variables present in each data source were: (1) demographic characteristics of the victim, (2) the location of occurrence within the facility, (3) the location of occurrence by state, and (4) the size of the facility. Findings demonstrate that the prevalence and correlates of jail suicides are extremely similar in both open-source and official data. However, for almost every variable measured, open-source data captured as much information as official data did, if not more. Further, variables not found in official data were identified in the open-source database, thus allowing researchers to have a more nuanced understanding of the situational characteristics of the event. This research provides support for the argument in favor of including open-source data in jail suicide research as it illustrates how open-source data can be used to provide additional information not originally found in official data. In sum, this research is vital in terms of possible suicide prevention, which may be directly linked to being able to manipulate environmental factors.

  18. Open source tools and toolkits for bioinformatics: significance, and where are we?

    PubMed

    Stajich, Jason E; Lapp, Hilmar

    2006-09-01

    This review summarizes important work in open-source bioinformatics software that has occurred over the past couple of years. The survey is intended to illustrate how programs and toolkits whose source code has been developed or released under an Open Source license have changed informatics-heavy areas of life science research. Rather than creating a comprehensive list of all tools developed over the last 2-3 years, we use a few selected projects encompassing toolkit libraries, analysis tools, data analysis environments and interoperability standards to show how freely available and modifiable open-source software can serve as the foundation for building important applications, analysis workflows and resources.

  19. Open Source 2010: Reflections on 2007

    ERIC Educational Resources Information Center

    Wheeler, Brad

    2007-01-01

    Colleges and universities and commercial firms have demonstrated great progress in realizing the vision proffered for "Open Source 2007," and 2010 will mark even greater progress. Although much work remains in refining open source for higher education applications, the signals are now clear: the collaborative development of software can provide…

  20. Development and Use of an Open-Source, User-Friendly Package to Simulate Voltammetry Experiments

    ERIC Educational Resources Information Center

    Wang, Shuo; Wang, Jing; Gao, Yanjing

    2017-01-01

    An open-source electrochemistry simulation package has been developed that simulates the electrode processes of four reaction mechanisms and two typical electroanalysis techniques: cyclic voltammetry and chronoamperometry. Unlike other open-source simulation software, this package balances the features with ease of learning and implementation and…

  1. Creating Open Source Conversation

    ERIC Educational Resources Information Center

    Sheehan, Kate

    2009-01-01

    Darien Library, where the author serves as head of knowledge and learning services, launched a new website on September 1, 2008. The website is built with Drupal, an open source content management system (CMS). In this article, the author describes how she and her colleagues overhauled the library's website to provide an open source content…

  2. Integrating an Automatic Judge into an Open Source LMS

    ERIC Educational Resources Information Center

    Georgouli, Katerina; Guerreiro, Pedro

    2011-01-01

    This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…

  3. 76 FR 75875 - Defense Federal Acquisition Regulation Supplement; Open Source Software Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-05

    ... Regulation Supplement; Open Source Software Public Meeting AGENCY: Defense Acquisition Regulations System... initiate a dialogue with industry regarding the use of open source software in DoD contracts. DATES: Public... be held in the General Services Administration (GSA), Central Office Auditorium, 1800 F Street NW...

  4. The open-source movement: an introduction for forestry professionals

    Treesearch

    Patrick Proctor; Paul C. Van Deusen; Linda S. Heath; Jeffrey H. Gove

    2005-01-01

    In recent years, the open-source movement has yielded a generous and powerful suite of software and utilities that rivals those developed by many commercial software companies. Open-source programs are available for many scientific needs: operating systems, databases, statistical analysis, Geographic Information System applications, and object-oriented programming....

  5. Open Source Software Development and Lotka's Law: Bibliometric Patterns in Programming.

    ERIC Educational Resources Information Center

    Newby, Gregory B.; Greenberg, Jane; Jones, Paul

    2003-01-01

    Applies Lotka's Law to metadata on open source software development. Authoring patterns found in software development productivity are found to be comparable to prior studies of Lotka's Law for scientific and scholarly publishing, and offer promise in predicting aggregate behavior of open source developers. (Author/LRW)

  6. Conceptualization and validation of an open-source closed-loop deep brain stimulation system in rat.

    PubMed

    Wu, Hemmings; Ghekiere, Hartwin; Beeckmans, Dorien; Tambuyzer, Tim; van Kuyck, Kris; Aerts, Jean-Marie; Nuttin, Bart

    2015-04-21

    Conventional deep brain stimulation (DBS) applies constant electrical stimulation to specific brain regions to treat neurological disorders. Closed-loop DBS with real-time feedback has been gaining attention in recent years, after proving clinically more effective than conventional DBS in terms of pathological symptom control. Here we demonstrate the conceptualization and validation of a closed-loop DBS system using open-source hardware. We used hippocampal theta oscillations as system input, and electrical stimulation in the mesencephalic reticular formation (mRt) as controller output. It is well documented that hippocampal theta oscillations are highly related to locomotion, while electrical stimulation in the mRt induces freezing. We used an Arduino open-source microcontroller between input and output sources. This allowed us to use hippocampal local field potentials (LFPs) to steer electrical stimulation in the mRt. Our results showed that closed-loop DBS significantly suppressed locomotion compared to no stimulation, and required on average only 56% of the stimulation used in open-loop DBS to reach similar effects. The main advantages of open-source hardware include wide selection and availability, high customizability, and affordability. Our open-source closed-loop DBS system is effective, and warrants further research using open-source hardware for closed-loop neuromodulation.
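
    The record does not include the Arduino firmware itself; as a conceptual sketch only, the closed-loop decision (estimate theta-band power from a short LFP window, then gate stimulation against a threshold) could be prototyped offline in Python as follows. The sampling rate, theta band and threshold are assumptions made for illustration.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      FS = 1000.0                      # assumed LFP sampling rate, Hz
      THETA_BAND = (4.0, 8.0)          # assumed theta band, Hz

      def theta_power(lfp_window, fs=FS, band=THETA_BAND):
          """Mean band-limited power of a short LFP window."""
          sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
          return float(np.mean(sosfiltfilt(sos, lfp_window) ** 2))

      def stimulate(lfp_window, threshold):
          """True if stimulation should be switched on for this window; on the
          microcontroller this decision would drive the stimulator output pin."""
          return theta_power(lfp_window) > threshold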

  7. Conceptualization and validation of an open-source closed-loop deep brain stimulation system in rat

    PubMed Central

    Wu, Hemmings; Ghekiere, Hartwin; Beeckmans, Dorien; Tambuyzer, Tim; van Kuyck, Kris; Aerts, Jean-Marie; Nuttin, Bart

    2015-01-01

    Conventional deep brain stimulation (DBS) applies constant electrical stimulation to specific brain regions to treat neurological disorders. Closed-loop DBS with real-time feedback has been gaining attention in recent years, after proving clinically more effective than conventional DBS in terms of pathological symptom control. Here we demonstrate the conceptualization and validation of a closed-loop DBS system using open-source hardware. We used hippocampal theta oscillations as system input, and electrical stimulation in the mesencephalic reticular formation (mRt) as controller output. It is well documented that hippocampal theta oscillations are highly related to locomotion, while electrical stimulation in the mRt induces freezing. We used an Arduino open-source microcontroller between input and output sources. This allowed us to use hippocampal local field potentials (LFPs) to steer electrical stimulation in the mRt. Our results showed that closed-loop DBS significantly suppressed locomotion compared to no stimulation, and required on average only 56% of the stimulation used in open-loop DBS to reach similar effects. The main advantages of open-source hardware include wide selection and availability, high customizability, and affordability. Our open-source closed-loop DBS system is effective, and warrants further research using open-source hardware for closed-loop neuromodulation. PMID:25897892

  8. Open Source and ROI: Open Source Has Made Significant Leaps in Recent Years. What Does It Have to Offer Education?

    ERIC Educational Resources Information Center

    Guhlin, Miguel

    2007-01-01

    A switch to free open source software can minimize cost and allow funding to be diverted to equipment and other programs. For instance, the OpenOffice suite is an alternative to expensive basic application programs offered by major vendors. Many such programs on the market offer features seldom used in education but for which educators must pay.…

  9. Feeling the Heat

    NASA Astrophysics Data System (ADS)

    2004-05-01

    Successful "First Light" for the Mid-Infrared VISIR Instrument on the VLT Summary Close to midnight on April 30, 2004, intriguing thermal infrared images of dust and gas heated by invisible stars in a distant region of our Milky Way appeared on a computer screen in the control room of the ESO Very Large Telescope (VLT). These images mark the successful "First Light" of the VLT Imager and Spectrometer in the InfraRed (VISIR), the latest instrument to be installed on this powerful telescope facility at the ESO Paranal Observatory in Chile. The event was greeted with a mixture of delight, satisfaction and some relief by the team of astronomers and engineers from the consortium of French and Dutch Institutes and ESO who have worked on the development of VISIR for around 10 years [1]. Pierre-Olivier Lagage (CEA, France), the Principal Investigator, is content : "This is a wonderful day! A result of many years of dedication by a team of engineers and technicians, who can today be proud of their work. With VISIR, astronomers will have at their disposal a great instrument on a marvellous telescope. And the gain is enormous; 20 minutes of observing with VISIR is equivalent to a whole night of observing on a 3-4m class telescope." Dutch astronomer and co-PI Jan-Willem Pel (Groningen, The Netherlands) adds: "What's more, VISIR features a unique observing mode in the mid-infrared: spectroscopy at a very high spectral resolution. This will open up new possibilities such as the study of warm molecular hydrogen most likely to be an important component of our galaxy." PR Photo 16a/04: VISIR under the Cassegrain focus of the Melipal telescope PR Photo 16b/04: VISIR mounted behind the mirror of the Melipal telescope PR Photo 16c/04: Colour composite of the star forming region G333.6-0.2 PR Photo 16d/04: Colour composite of the Galactic Centre PR Photo 16e/04: The Ant Planetary Nebula at 12.8 μm PR Photo 16f/04: The starburst galaxy He2-10 at 11.3μm PR Photo 16g/04: High-resolution spectrum of G333.6-0.2 around 12.8μm PR Photo 16h/04: High-resolution spectrum of the Ant Planetary Nebula around 12.8μm From cometary tails to centres of galaxies The mid-infrared spectral region extends from a few to a few tens of microns in wavelength and provides a unique view of our Universe. Optical astronomy, that is astronomy at wavelengths to which our eyes are sensitive, is mostly directed towards light emitted by gas, be it in stars, nebulae or galaxies. Mid-Infrared astronomy, however, allows us to also detect solid dust particles at temperatures of -200 to +300 °C. Dust is very abundant in the universe in many different environments, ranging from cometary tails to the centres of galaxies. This dust also often totally absorbs and hence blocks the visible light reaching us from such objects. Red light, and especially infrared light, can propagate much better in dust clouds. Many important astrophysical processes occur in regions of high obscuration by dust, most notably star formation and the late stages of their evolution, when stars that have burnt nearly all their fuel shed much of their outer layers and dust grains form in their "stellar wind". Stars are born in so-called molecular clouds. The proto-stars feed from these clouds and are shielded from the outside by them. Infrared is a tool - very much as ultrasound is for medical inspections - for looking into those otherwise hidden regions to study the stellar "embryos". It is thus crucial to also observe the Universe in the infrared and mid-infrared. 
Unfortunately, there are also infrared-emitting molecules in the Earth's atmosphere, e.g. water vapour, Nitric Oxides, Ozone, Methane. Because of these gases, the atmosphere is completely opaque at certain wavelengths, except in a few "windows" where the Earth's atmosphere is transparent. Even in these windows, however, the sky and telescope emit radiation in the infrared to an extent that observing in the mid-infrared at night is comparable to trying to do optical astronomy in daytime. Ground-based infrared astronomers have thus become extremely adept at developing special techniques called "chopping' and "nodding" for detecting the extremely faint astronomical signals against this unwanted bright background [3]. VISIR: an extremely complex instrument VISIR - the VLT Imager and Spectrometer in the InfraRed - is a complex multi-mode instrument designed to operate in the 10 and 20 μm atmospheric windows, i.e. at wavelengths up to about 40 times longer than visible light and to provide images as well as spectra at a wide range of resolving power up to ~ 30.000. It can sample images down to the diffraction limit of the 8.2-m Melipal telescope (0.27 arcsec at 10 μm wavelength, i.e. corresponding to a resolution of 500 m on the Moon), which is expected to be reached routinely due to the excellent seeing conditions experienced for a large fraction of the time at the VLT [2]. Because at room temperature the metal and glass of VISIR would emit strongly at exactly the same wavelengths and would swamp any faint mid-infrared astronomical signals, the whole VISIR instrument is cooled to a temperature close to -250° C and its two panoramic 256x256 pixel array detectors to even lower temperatures, only a few degrees above absolute zero. It is also kept in a vacuum tank to avoid the unavoidable condensation of water and icing which would otherwise occur. The complete instrument is mounted on the telescope and must remain rigid to within a few thousandths of a millimetre as the telescope moves to acquire and then track objects anywhere in the sky. Needless to say, this makes for an extremely complex instrument and explains the many years needed to develop and bring it to the telescope on the top of Paranal. VISIR also includes a number of important technological innovations, most notably its unique cryogenic motor drive systems comprising integrated stepper motors, gears and clutches whose shape is similar to that of the box of the famous French Camembert cheese. VISIR is mounted on Melipal ESO PR Photo 16a/04 ESO PR Photo 16a/04 VISIR under the Cassegrain focus of the Melipal telescope [Preview - JPEG: 400 x 476 pix - 271k] [Normal - JPEG: 800 x 951 pix - 600k] ESO PR Photo 16b/04 ESO PR Photo 16b/04 VISIR mounted behind the mirror of the Melipal telescope [Preview - JPEG: 400 x 603 pix - 366k] [Normal - JPEG: 800 x 1206 pix - 945k] Caption: ESO PR Photo 16a/04 shows VISIR about to be attached at the Cassegrain focus of the Melipal telescope. On ESO PR Photo 16b/04, VISIR appears much smaller once mounted behind the enormous 8.2-m diameter mirror of the Melipal telescope. The fully integrated VISIR plus all the associated equipment (amounting to a total of around 8 tons) was air freighted from Paris to Santiago de Chile and arrived at the Paranal Observatory on 25th March after a subsequent 1500 km journey by road. Following tests to confirm that nothing had been damaged, VISIR was mounted on the third VLT telescope "Melipal" on April 27th. 
PR Photos 16a/04 and 16b/04 show the approximately 1.6 tons of VISIR being mounted at the Cassegrain focus, below the 8.2-m main mirror. First technical light on a star was achieved on April 29th, shortly after VISIR had been cooled down to its operating temperature. This allowed to proceed with the necessary first basic operations, including focusing the telescope, and tests. While telescope focusing was one of the difficult and frequent tasks faced by astronomers in the past, this is no longer so with the active optics feature of the VLT telescopes which, in principle, has to be focused only once after which it will forever be automatically kept in perfect focus. First images and spectra from VISIR ESO PR Photo 16c/04 ESO PR Photo 16c/04 Colour composite of the star forming region G333.6-0.2 [Preview - JPEG: 400 x 477 pix - 78k] [Normal - JPEG: 800 x 954 pix - 191k] ESO PR Photo 16d/04 ESO PR Photo 16d/04 Colour composite of the Galactic Centre [Preview - JPEG: 400 x 478 pix - 159k] [Normal - JPEG: 800 x 955 pix - 348k] Caption: ESO PR Photo 16c/04 is a colour composite image of the visually obscured G333.6-0.2 star-forming region at a distance of nearly 10,000 light-years in our Milky Way galaxy. This image was made by combining three digital images of the intensity of the infrared emission at wavelengths of 11.3μm (one of the Polycyclic Aromatic Hydrocarbon features, coded blue), 12.8 μm (an emission line of [NeII], coded green) and 19μm (warm dust emission, coded red). Each pixel subtends 0.127 arcsec and the total field is ~ 33 x 33 arcsec with North at the top and East to the left. The total integration times were 13 seconds at the shortest and 35 seconds at the longer wavelengths. The brighter spots locate regions where the dust, which obscures all the visible light, has been heated by recently formed stars. ESO PR Photo 16d/04 shows another colour composite, this time of the Galactic Centre at a distance of about 30,000 light-years. It was made by combining images in filters centred at 8.6μm (Polycyclic Aromatic Hydrocarbon molecular feature - coded blue), 12.8μm ([NeII] - coded green) and 19.5μm (coded red). Each pixel subtends 0.127 arcsec and the total field is ~ 33 x 33 arcsec with North at the top and East to the left. Total integration times were 300, 160 and 300 s for the 3 filters, respectively. This region is very rich, full of stars, dust, ionised and molecular gas. One of the scientific goals will be to detect and monitor the signal from the black hole at the centre of our galaxy. ESO PR Photo 16e/04 ESO PR Photo 16e/04 The Ant Planetary Nebula at 12.8 μm [Preview - JPEG: 400 x 477 pix - 77k] [Normal - JPEG: 800 x 954 pix - 182k] Caption: ESO PR Photo 16e/04 is an image of the "Ant" Planetary Nebula (Mz3) in the narrow-band filter centred at wavelength 12.8 μm. The scale is 0.127 arcsec/pixel and the total field-of-view is 33 x 33 arcsec, with North at the top and East to the left. The total integration time was 200 seconds. Note the diffraction rings around the central star which confirm that the maximum spatial resolution possible with the 8.2-m telescope is being achieved. ESO PR Photo 16f/04 ESO PR Photo 16f/04 The starburst galaxy He2-10 at 11.3μm [Preview - JPEG: 400 x 477 pix - 69k] [Normal - JPEG: 800 x 954 pix - 172k] Caption: ESO PR Photo 16f/04 is an image at wavelength 11.3 μm of the "nearby" (distance about 30 million light-years) blue compact galaxy He2-10, which is actively forming stars. 
The scale is 0.127 arcsec per pixel and the full field covers 15 x 15 arcsec with North at the top and East on the left. The total integration time for this observation is one hour. Several star forming regions are detected, as well as a diffuse emission, which was unknown until these VISIR observations. The star-forming regions on the left of the image are not visible in optical images. ESO PR Photo 16g/04 ESO PR Photo 16g/04 High-resolution spectrum of G333.6-0.2 around 12.8 μm [Preview - JPEG: 652 x 400 pix - 123k] [Normal - JPEG: 1303 x 800 pix - 277k] Caption: ESO PR Photo 16g/04 is a reproduction of a high-resolution spectrum of the Ne II line (ionised Neon) at 12.8135 μm of the star-forming region G333.6-0.2 shown in ESO PR Photo 16c/04. This spectrum reveals the complex motions of the ionized gas in this region. The images are 256 x 256 frames of 50 x 50 micron pixels. The "field" direction is horizontal, with total slit length of 32.5 arcsec; North is left and South is to the right. The dispersion direction is vertical, with the wavelength increasing downward. The total integration time was 80 sec. ESO PR Photo 16h/04 ESO PR Photo 16h/04 High-resolution spectrum of the Ant nebula around 12.8 μm [Preview - JPEG: 610 x 400 pix - 354k] [Normal - JPEG: 1219 x 800 pix - 901k] Caption: ESO PR Photo 16h/04 is a reproduction of a high-resolution spectrum of the Ne II line (ionised Neon) at 12.8135 microns of the Ant Planetary Nebula, also known as Mz-3, shown in ESO PR Photo 16d/04. The technical details are similar to ESO PR Photo 16g/04. The total integration time was 120 sec. The photos above resulted from some of the first observational tests with VISIR. PR Photo 16c/04 shows the scientific "First Light" image, obtained one day later on April 30th, of a visually obscured star forming region nearly 10,000 light-years away in our galaxy, the Milky Way. The picture shown here is a false-colour image made by combining three digital images of the intensity of the infrared emission from this region at wavelengths of 11.3 μm (one of the Polycyclic Aromatic Hydrocarbon - PAH - features), 12.8 μm (an emission line of ionised neon) and 19 μm (cool dust emission). Ten times sharper Until now, an elegant way to avoid the problems caused by the emission and absorption of the atmosphere was to fly infrared telescopes on satellites as was done in the highly successful IRAS and ISO missions and currently the Spitzer observatory. For both technical and cost reasons, however, such telescopes have so far been limited to only 60-85 cm in diameter. While very sensitive therefore, the spatial resolution (sharpness) delivered by these telescopes is 10 times worse than that of the 8.2-m diameter VLT telescopes. They have also not been equipped with the very high spectral resolution capability, a feature of the VISIR instrument, which is thus expected to remain the instrument of choice for a wide range of studies for many years to come despite the competition from space. More information A corresponding [1]: The consortium of institutes responsible for building the VISIR instrument under contract to ESO comprises the CEA/DSM/DAPNIA, Saclay, France - led by the Principal Investigator (PI), Pierre-Olivier Lagage and the Netherlands Foundation for Research in Astronomy/ASTRON - (Dwingeloo, The Netherlands) with Jan-Willem Pel from Groningen University as Co-PI for the spectrometer. [2]: Stellar radiation on its way to the observer is also affected by the turbulence of the Earth's atmosphere. 
This is the effect which makes the stars twinkle for the human eye. While the general public enjoys this phenomenon as something that makes the night sky interesting and may be entertaining, the twinkling is a major concern for amateur and professional astronomers, as it smears out the optical images. Infrared radiation is less affected by this effect. Therefore an instrument like VISIR can make full use of the extremely high optical quality of modern telescopes, like the VLT. [3]: Observations from the ground at wavelengths of 10 to 20 μm are particularly difficult because this is the wavelength region in which both the telescope and the atmosphere emits most strongly. In order to minimize its effect, the images shown here were made by tilting the telescope secondary mirror every few seconds (chopping) and the whole telescope every minute (nodding) so that this unwanted telescope and sky background emission could be measured and subtracted from the science images faster than it varies.

  10. Open source drug discovery--a new paradigm of collaborative research in tuberculosis drug development.

    PubMed

    Bhardwaj, Anshu; Scaria, Vinod; Raghava, Gajendra Pal Singh; Lynn, Andrew Michael; Chandra, Nagasuma; Banerjee, Sulagna; Raghunandanan, Muthukurussi V; Pandey, Vikas; Taneja, Bhupesh; Yadav, Jyoti; Dash, Debasis; Bhattacharya, Jaijit; Misra, Amit; Kumar, Anil; Ramachandran, Srinivasan; Thomas, Zakir; Brahmachari, Samir K

    2011-09-01

    It is being realized that the traditional closed-door and market-driven approaches to drug discovery may not be the best-suited model for diseases of the developing world such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for patients suffering from these diseases, it is necessary to formulate an alternative paradigm for the drug discovery process. The current model, constrained by limits on collaboration and on confidential sharing of resources, hampers the opportunities for bringing in expertise from diverse fields, and these limitations hinder the possibilities of lowering the cost of drug discovery. The Open Source Drug Discovery project initiated by the Council of Scientific and Industrial Research, India has adopted an open source model to power wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, multi-faceted approaches, and accruing benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to self-sustain continuous development by generating a storehouse of alternatives in the continued pursuit of new drug discovery. Since the inventions are community generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trials in a non-exclusive manner, with participation of multiple companies and majority funding from Open Source Drug Discovery. This will ensure the availability of drugs through a lower-cost, community-driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what LINUX and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. State-of-the-practice and lessons learned on implementing open data and open source policies.

    DOT National Transportation Integrated Search

    2012-05-01

    This report describes the current government, academic, and private sector practices associated with open data and open source application development. These practices are identified; and the potential uses with the ITS Programs Data Capture and M...

  12. Your Personal Analysis Toolkit - An Open Source Solution

    NASA Astrophysics Data System (ADS)

    Mitchell, T.

    2009-12-01

    Open source software is commonly known for its web browsers, word processors and programming languages. However, there is a vast array of open source software focused on geographic information management and geospatial application building in general. As geo-professionals, having easy access to tools for our jobs is crucial. Open source software provides the opportunity to add a tool to your tool belt and carry it with you for your entire career - with no license fees, a supportive community and the opportunity to test, adopt and upgrade at your own pace. OSGeo is a US registered non-profit representing more than a dozen mature geospatial data management applications and programming resources. Tools cover areas such as desktop GIS, web-based mapping frameworks, metadata cataloging, spatial database analysis, image processing and more. Learn about some of these tools as they apply to AGU members, as well as how you can join OSGeo and its members in getting the job done with powerful open source tools. If you haven't heard of OSSIM, MapServer, OpenLayers, PostGIS, GRASS GIS or the many other projects under our umbrella - then you need to hear this talk. Invest in yourself - use open source!

  13. All-source Information Management and Integration for Improved Collective Intelligence Production

    DTIC Science & Technology

    2011-06-01

    Intelligence (ELINT) • Open Source Intelligence (OSINT) • Technical Intelligence (TECHINT). These intelligence disciplines produce... intelligence, measurement and signature intelligence, signals intelligence, and open-source data, in the production of intelligence. All-source intelligence... All-Source Information Integration and Management) R&D Project 3 All-Source Intelligence

  14. Compressed domain ECG biometric with two-lead features

    NASA Astrophysics Data System (ADS)

    Lee, Wan-Jou; Chang, Wen-Whei

    2016-07-01

    This study presents a new method to combine ECG biometrics with data compression within a common JPEG2000 framework. We target the two-lead ECG configuration that is routinely used in long-term heart monitoring. Incorporation of compressed-domain biometric techniques enables faster person identification as it bypasses full decompression. Experiments on public ECG databases demonstrate the validity of the proposed method for biometric identification with high accuracies on both healthy and diseased subjects.

  15. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for coding the wavelet packet transform coefficients. Extensive experimental comparisons of the proposed algorithm with the JPEG standard, the FBI wavelet/scalar quantization standard and the EZW scheme show a significant improvement in rate-distortion performance and visual quality.
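
    A minimal sketch of the energy-compaction step using PyWavelets; the embedded coding of the coefficients, which the paper's scheme adds on top, is not shown, and the wavelet choice and decomposition depth are assumptions.

      import numpy as np
      import pywt

      texture = np.random.rand(64, 64)          # stand-in for a texture tile

      # Two-level 2-D wavelet packet decomposition.
      wp = pywt.WaveletPacket2D(data=texture, wavelet="db2", mode="symmetric", maxlevel=2)

      # Rank the level-2 subbands by energy to see where the signal concentrates.
      subbands = ((float(np.sum(n.data ** 2)), n.path) for n in wp.get_level(2))
      for energy, path in sorted(subbands, reverse=True)[:4]:
          print(path, round(energy, 2))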

  16. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  17. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    This paper eliminates correlated redundancy in space and energy by using a DWT lifting scheme and reduces the complexity of the image by using an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm encodes and decodes without backtracking when processing the pixels of an image. It supports LOCO-I and can also be applied to other coders/decoders. Simulation analysis indicates that the proposed method achieves high image compression. Compared with Lossless-JPEG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc.), SPIHT and JPEG-LS, the lossless compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV with a 2.20 GHz CPU and 256 MB of RAM, the coding speed of the proposed coder is about 21 times that of SPIHT, with a performance efficiency gain of roughly 166%; the decoding speed is about 17 times that of SPIHT, with an efficiency gain of roughly 128%.
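
    The abstract does not spell out the algebraic RGB transform used; as one common example of such a transform, the reversible colour transform (RCT) from lossless JPEG 2000 decorrelates the components with integer arithmetic and is exactly invertible:

      import numpy as np

      def rct_forward(rgb):
          """Reversible colour transform (lossless JPEG 2000 style) on integer RGB."""
          r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
          y = (r + 2 * g + b) >> 2           # floor((R + 2G + B) / 4)
          return y, b - g, r - g             # Y, Cb, Cr

      def rct_inverse(y, cb, cr):
          g = y - ((cb + cr) >> 2)
          return np.stack([cr + g, g, cb + g], axis=-1)   # R, G, B

    Round-tripping any integer RGB image through these two functions reproduces it exactly, which is what makes the transform suitable for lossless pipelines.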

  18. An effective and efficient compression algorithm for ECG signals with irregular periods.

    PubMed

    Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son

    2006-06-01

    This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus either the irregular ECG signals or the QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
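
    A minimal sketch of the 2-D rearrangement (alignment, length equalization and period sorting), assuming QRS locations have already been detected; the subsequent JPEG2000 coding of the resulting array, and the paper's exact sorting rule, are not shown.

      import numpy as np

      def beats_to_image(ecg, qrs_indices, width=256):
          """Stack heartbeats into a 2-D array suitable for image-style compression."""
          beats = []
          for start, end in zip(qrs_indices[:-1], qrs_indices[1:]):
              beat = ecg[start:end]
              # Length equalization: resample every beat onto a common width.
              x_old = np.linspace(0.0, 1.0, num=len(beat))
              x_new = np.linspace(0.0, 1.0, num=width)
              beats.append((len(beat), np.interp(x_new, x_old, beat)))
          # Period sorting: order rows by beat length so the image varies smoothly.
          beats.sort(key=lambda item: item[0])
          return np.vstack([row for _, row in beats])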

  19. Blocking reduction of Landsat Thematic Mapper JPEG browse images using optimal PSNR estimated spectra adaptive postfiltering

    NASA Technical Reports Server (NTRS)

    Linares, Irving; Mersereau, Russell M.; Smith, Mark J. T.

    1994-01-01

    Two representative sample images of Band 4 of the Landsat Thematic Mapper are compressed with the JPEG algorithm at 8:1, 16:1 and 24:1 compression ratios for experimental browsing purposes. We then apply the Optimal PSNR Estimated Spectra Adaptive Postfiltering (ESAP) algorithm to reduce the DCT blocking distortion. ESAP reduces the blocking distortion while preserving most of the image's edge information by adaptively postfiltering the decoded image using the block's spectral information already obtainable from each block's DCT coefficients. The algorithm iteratively applies a one-dimensional log-sigmoid weighting function to the separable interpolated local block estimated spectra of the decoded image until it converges to the optimal PSNR with respect to the original, using a 2-D steepest ascent search. Convergence is obtained in a few iterations for integer parameters. The optimal log-sigmoid parameters are transmitted to the decoder as a negligible byte of overhead data. A unique maximum is guaranteed due to the 2-D asymptotic exponential overshoot shape of the surface generated by the algorithm. ESAP is based on a DFT analysis of the DCT basis functions. It is implemented with pixel-by-pixel spatially adaptive separable FIR postfilters. PSNR objective improvements between 0.4 and 0.8 dB are shown together with their corresponding optimal PSNR adaptive postfiltered images.

  20. Observer performance assessment of JPEG-compressed high-resolution chest images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David

    1999-05-01

    The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.

  1. About a method for compressing x-ray computed microtomography data

    NASA Astrophysics Data System (ADS)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. One such technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), studying images acquired from various types of samples. This study covers parallel-beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.

  2. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
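
    The two quality metrics used in the study can be computed with scikit-image once an original frame and its JPEG2000-compressed/decompressed counterpart are available as arrays; the encoding step itself, which depends on the chosen JPEG2000 codec and target bitrate, is not shown here.

      from skimage.metrics import peak_signal_noise_ratio, structural_similarity

      def compression_quality(original, decompressed, data_range=1.0):
          """Return (PSNR in dB, mean SSIM) between an original image and its
          lossily compressed/decompressed counterpart, both as float arrays."""
          psnr = peak_signal_noise_ratio(original, decompressed, data_range=data_range)
          mssim = structural_similarity(original, decompressed, data_range=data_range)
          return psnr, mssim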

  3. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with adaptive lossy JPEG compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on predicting noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide more than a two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to the SHQ-compressed images.
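
    The authors' estimator and scaling rule are not reproduced in this abstract; as a loose illustration of the overall idea (blindly estimate noise, then adapt the quantization coarseness), one could map a wavelet-based noise estimate to a Pillow JPEG quality setting. The linear mapping below is a placeholder rather than the paper's rule, and the channel_axis keyword assumes a recent scikit-image release.

      import numpy as np
      from PIL import Image
      from skimage.restoration import estimate_sigma

      def noise_adaptive_jpeg(rgb_uint8, path):
          """Save an RGB uint8 image as JPEG, picking quality from a blind noise estimate."""
          sigma = estimate_sigma(rgb_uint8, channel_axis=-1, average_sigmas=True)
          # Noisier images tolerate coarser quantization: lower the quality as sigma grows.
          quality = int(np.clip(95 - 4.0 * sigma, 40, 95))
          Image.fromarray(rgb_uint8).save(path, format="JPEG", quality=quality)
          return sigma, quality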

  4. Digital storage and analysis of color Doppler echocardiograms

    NASA Technical Reports Server (NTRS)

    Chandra, S.; Thomas, J. D.

    1997-01-01

    Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as the drawbacks of trying to retrieve velocities from videotape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail, with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of information (mostly redundant); this is critical for digital storage because of the enormous amount of data generated (a 10 minute study could require 18 gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing the proximal convergence zones and mitral inflow, and some areas of future development, are also discussed in the article.

  5. Southern Fireworks above ESO Telescopes

    NASA Astrophysics Data System (ADS)

    1999-05-01

    New Insights from Observations of Mysterious Gamma-Ray Burst International teams of astronomers are now busy working on new and exciting data obtained during the last week with telescopes at the European Southern Observatory (ESO). Their object of study is the remnant of a mysterious cosmic explosion far out in space, first detected as a gigantic outburst of gamma rays on May 10. Gamma-Ray Bursters (GRBs) are brief flashes of very energetic radiation - they represent by far the most powerful type of explosion known in the Universe and their afterglow in optical light can be 10 million times brighter than the brightest supernovae [1]. The May 10 event ranks among the brightest one hundred of the over 2500 GRB's detected in the last decade. The new observations include detailed images and spectra from the VLT 8.2-m ANTU (UT1) telescope at Paranal, obtained at short notice during a special Target of Opportunity programme. This happened just over one month after that powerful telescope entered into regular service and demonstrates its great potential for exciting science. In particular, in an observational first, the VLT measured linear polarization of the light from the optical counterpart, indicating for the first time that synchrotron radiation is involved . It also determined a staggering distance of more than 7,000 million light-years to this GRB . The astronomers are optimistic that the extensive observations will help them to better understand the true nature of such a dramatic event and thus to bring them nearer to the solution of one of the greatest riddles of modern astrophysics. A prime example of international collaboration The present story is about important new results at the front-line of current research. At the same time, it is also a fine illustration of a successful collaboration among several international teams of astronomers and the very effective way modern science functions. It began on May 10, at 08:49 hrs Universal Time (UT), when the Burst And Transient Source Experiment (BATSE) onboard NASA's Compton Gamma-Ray Observatory (CGRO) high in orbit around the Earth, suddenly registered an intense burst of gamma-ray radiation from a direction less than 10° from the celestial south pole. Independently, the Gamma-Ray Burst Monitor (GRBM) on board the Italian-Dutch BeppoSAX satellite also detected the event (see GCN GRB Observation Report 304 [2]). Following the BATSE alert, the BeppoSAX Wide-Field Cameras (WFC) quickly localized the sky position of the burst within a circle of 3 arcmin radius in the southern constellation Chamaeleon. It was also detected by other satellites, including the ESA/NASA Ulysses spacecraft , since some years in a wide orbit around the Sun. The event was designated GRB 990510 and the measured position was immediately distributed by BeppoSAX Mission Scientist Luigi Piro to a network of astronomers. It was also published on Circular No. 7160 of the International Astronomical Union (IAU). From Amsterdam (The Netherlands), Paul Vreeswijk, Titus Galama , and Evert Rol of the Amsterdam/Huntsville GRB follow-up team (led by Jan van Paradijs ) immediately contacted astronomers at the 1-meter telescope of the South African Astronomical Observatory (SAAO) (Sutherland, South Africa) of the PLANET network microlensing team, an international network led by Penny Sackett in Groningen (The Netherlands). 
There, John Menzies of SAAO and Karen Pollard (University of Canterbury, New Zealand) were about to begin the last of their 14 nights of observations, part of a continuous world-wide monitoring program looking for evidence of planets around other stars. Other PLANET sites in Australia and Tasmania where it was still nighttime were unfortunately clouded out (some observations were in fact made that night at the Mount Stromlo observatory in Australia, but they were only announced one day later). As soon as possible - immediately after sundown and less than 9 hours after the initial burst was recorded - the PLANET observers turned their telescope and quickly obtained a series of CCD images in visual light of the sky region where the gamma-ray burst was detected, then shipped them off electronically to their Dutch colleagues [3]. Comparing the new photos with earlier ones in the digital sky archive, Vreeswijk, Galama and Rol almost immediately discovered a new, relatively bright visual source in the region of the gamma-ray burst, which they proposed as the optical counterpart of the burst, cf. their dedicated webpage at http://www.astro.uva.nl/~titus/grb990510/. The team then placed a message on the international Gamma-Ray Burster web-noteboard ( GCN Circular 310), thereby alerting their colleagues all over the world. One hour later, the narrow-field instruments on BeppoSax identified a new X-Ray source at the same location ( GCN Circular 311), thus confirming the optical identification. All in all, a remarkable synergy of human and satellite resources! Observations of GRB 990510 at ESO Vreeswijk, Galama and Rol, in collaboration with Nicola Masetti, Eliana Palazzi and Elena Pian of the BeppoSAX GRB optical follow-up team (led by Filippo Frontera ) and the Huntsville optical follow-up team (led by Chryssa Kouveliotou ), also contacted the European Southern Observatory (ESO). Astronomers at this Organization's observatories in Chile were quick to exploit this opportunity and crucial data were soon obtained with several of the main telescopes at La Silla and Paranal, less than 14 hours after the first detection of this event by the satellite. ESO PR Photo 22a/99 ESO PR Photo 22a/99 [Preview - JPEG: 211 x 400 pix - 72k] [Normal - JPEG: 422 x 800 pix - 212k] [High-Res - JPEG: 1582 x 3000 pix - 2.6M] ESO PR Photo 22b/99 ESO PR Photo 22b/99 [Preview - JPEG: 400 x 437 pix - 297k] [Normal - JPEG: 800 x 873 pix - 1.1M] [High-Res - JPEG: 2300 x 2509 pix - 5.9M] Caption to PR Photo 22a/99 : This wide-field photo was obtained with the Wide-Field Imager (WFI) at the MPG/ESO 2.2-m telescope at La Silla on May 11, 1999, at 08:42 UT, under inferior observing conditions (seeing = 1.9 arcsec). The exposure time was 450 sec in a B(lue) filter. The optical image of the afterglow of GRB 990510 is indicated with an arrow in the upper part of the field that measures about 8 x 16 arcmin 2. The original scale is 0.24 pix/arcsec and there are 2k x 4k pixels in the original frame. North is up and East is left. Caption to PR Photo 22b/99 : This is a (false-)colour composite of the area around the optical image of the afterglow of GRB 990510, based on three near-infrared exposures with the SOFI multi-mode instrument at the 3.6-m ESO New Technology Telescope (NTT) at La Silla, obtained on May 10, 1999, between 23:15 and 23:45 UT. The exposure times were 10 min each in the J- (1.2 µm; here rendered in blue), H- (1.6 µm; green) and K-bands (2.2 µm; red); the image quality is excellent (0.6 arcsec). 
The field measures about 5 x 5 arcmin 2 ; the original pixel size is 0.29 arcsec. North is up and East is left. ESO PR Photo 22c/99 ESO PR Photo 22c/99 [Preview - JPEG: 400 x 235 pix - 81k] [Normal - JPEG: 800 x 469 pix - 244k] [High-Res - JPEG: 2732 x 1603 pix - 2.6M] ESO PR Photo 22d/99 ESO PR Photo 22d/99 [Preview - JPEG: 400 x 441 pix - 154k] [Normal - JPEG: 800 x 887 pix - 561k] [High-Res - JPEG: 2300 x 2537 pix - 2.3M] Caption to PR Photo 22c/99 : To the left is a reproduction of a short (30 sec) centering exposure in the V-band (green-yellow light), obtained with VLT ANTU and the multi-mode FORS1 instrument on May 11, 1999, at 03:48 UT under mediocre observing conditions (image quality 1.0 arcsec).The optical image of the afterglow of GRB 990510 is easily seen in the box, by comparison with an exposure of the same sky field before the explosion, made with the ESO Schmidt Telescope in 1986 (right).The exposure time was 120 min on IIIa-F emulsion behind a R(ed) filter. The field shown measures about 6.2 x 6.2 arcmin 2. North is up and East is left. Caption to PR Photo 22d/99 : Enlargement from the 30 sec V-exposure by the VLT, shown in Photo 22c/99. The field is about 1.9 x 1.9 arcmin 2. North is up and East is left. The data from Chile were sent to Europe where, by quick comparison of images from the Wide-Field Imager (WFI) at the MPG/ESO 2.2-m telescope at La Silla with those from SAAO, the Dutch and Italian astronomers found that the brightness of the suspected optical counterpart was fading rapidly; this was a clear sign that the identification was correct ( GCN Circular 313). With the precise sky position of GRB 990510 now available, the ESO observers at the VLT were informed and, setting other programmes aside under the Target of Opportunity scheme, were then able to obtain polarimetric data as well as a very detailed spectrum of the optical counterpart. Comprehensive early observations of this object were also made at La Silla with the ESO 3.6-m telescope (CCD images in the UBVRI-bands from the ultraviolet to the near-infrared part of the spectrum) and the ESO 3.6-m New Technology Telescope (with the SOFI multimode instrument in the infrared JHK-bands). A series of optical images in the BVRI-bands was secured with the Danish 1.5-m telescope, documenting the rapid fading of the object. Observations at longer wavelengths were made with the 15-m Swedish-ESO Submillimetre Telescope (SEST). All of the involved astronomers concur that a fantastic amount of observations has been obtained. They are still busy analyzing the data, and are confident that much will be learned from this particular burst. The VLT scores a first: Measurement of GRB polarization ESO PR Photo 22e/99 ESO PR Photo 22e/99 [Preview - JPEG: 400 x 434 pix - 92k] [Normal - JPEG: 800 x 867 pix - 228k] Caption to PR Photo 22e/99 : Preliminary polarization measurement of the optical image of the afterglow of GRB 990510, as observed with the VLT 8.2-m ANTU telescope and the multi-mode FORS1 instrument. The abscissa represents the measurement angle; the ordinate the corresponding intensity. The sinusoidal curve shows the best fit to the data points (with error bars); the resulting degree of polarization is 1.7 ± 0.2 percent. 
A group of Italian astronomers led by Stefano Covino of the Observatory of Brera in Milan, have observed for the first time polarization (some degree of alignment of the electric fields of emitted photons) from the optical afterglow of a gamma-ray burst, see their dedicated webpage at http://www.merate.mi.astro.it/~lazzati/GRB990510/. This yielded a polarization at a level of 1.7 ± 0.2 percent for the optical afterglow of GRB 990510, some 18 hours after the gamma-ray burst event; the magnitude was R = 19.1 at the time of this VLT observation. Independently, the Dutch astronomers Vreeswijk, Galama and Rol measured polarization of the order of 2 percent with another data set from the VLT ANTU and FORS1 obtained during the same night. This important result was made possible by the very large light-gathering power of the 8.2-m VLT-ANTU mirror and the FORS1 imaging polarimeter. Albeit small, the detected degree of polarization is highly significant; it is also one of the most precise measurements of polarization ever made in an object as faint as this one. Most importantly, it provides the strongest evidence to date that the afterglow radiation of gamma-ray bursts is, at least in part, produced by the synchrotron process , i.e. by relativistic electrons spiralling in a magnetized region. This type of process is able to imprint some linear polarization on the produced radiation, if the magnetic field is not completely chaotic. The spectrum ESO PR Photo 22f/99 ESO PR Photo 22f/99 [Preview - JPEG: 400 x 485 pix - 112k] [Normal - JPEG: 800 x 969 pix - 288k] Caption to PR Photo 22f/99 : A spectrum of the afterglow of GRB 990510, obtained with VLT ANTU and the multi-mode FORS1 instrument during the night of May 10-11, 1999. Some of the redshifted absorption lines are identified and the stronger bands from the terrestrial atmosphere are also indicated. A VLT spectrum with the multi-mode FORS1 instrument was obtained a little later and showed a number of absorption lines , e.g. from ionized Aluminium, Chromium and neutral Magnesium. They do not arise in the optical counterpart itself - the gas there is so hot and turbulent that any spectral lines will be extremely broad and hence extremely difficult to identify - but from interstellar gas in a galaxy 'hosting' the GRB source, or from intergalactic clouds along the line of sight. It is possible to measure the distance to this intervening material from the redshift of the lines; astronomers Vreeswijk, Galama and Rol found z = 1.619 ± 0.002 [4]. This allows to establish a lower limit for the distance of the explosion and also its total power. The numbers turn out to be truly enormous. The burst occurred at an epoch corresponding to about one half of the present age of the Universe (at a distance of about 7,000 million light-years [5]), and the total energy of the explosion in gamma-rays must be higher than 1.4 10 53 erg , assuming a spherical emission. This energy corresponds to the entire optical energy emitted by the Milky Way in more than 30 years; yet the gamma-ray burst took less than 100 seconds. Since the optical afterglows of gamma-ray bursts are faint, and their flux decays quite rapidly in time, the combination of large telescopes and fast response through suitable observing programs are crucial and, as demonstrated here, ESO's VLT is ideally suited to this goal! The lightcurve Combining results from a multitude of telescopes has provided most useful information. 
Interestingly, a "break" was observed in the light curve (the way the light of the optical counterpart fades) of the afterglow. Some 1.5 - 2 days after the explosion, the brightness began to decrease more rapidly; this is well documented with the CCD images from the Danish 1.5-m telescope at La Silla and the corresponding diagrams are available on a dedicated webpage at http://www.astro.ku.dk/~jens/grb990510/ at the Copenhagen University Observatory. Complete, regularly updated lightcurves with all published measurements, also from other observatories, may be found at another webpage in Milan at http://www.merate.mi.astro.it/~gabriele/990510/ . This may happen if the explosion emits radiation in a beam which is pointed towards the Earth. Such beams are predicted by some models for the production of gamma-ray bursts. They are also favoured by many astronomers, because they can overcome the fundamental problem that gamma-ray bursts simply produce too much energy. If the energy is not emitted equally in all directions ("isotropically"), but rather in a preferred one along a beam, less energy is needed to produce the observed phenomenon. Such a break has been observed before, but this time it occurred at a very favourable moment, when the source was still relatively bright so that high-quality spectroscopic and multi-colour information could be obtained with the ESO telescopes. Together, these observations may provide an answer to the question whether beams exist in gamma-ray bursts and thus further help us to understand the as yet unknown cause of these mysterious explosions. Latest News ESO PR Photo 22g/99 ESO PR Photo 22g/99 [Normal - JPEG: 453 x 585 pix - 304k] Caption to PR Photo 22g/99 : V(isual) image of the sky field around GRB 990510 (here denoted "OT"), as obtained with the VLT ANTU telescope and FORS1 on May 18 UT during a 20 min exposure in 0.9 arcsec seeing conditions. The reproduction is in false colours to better show differences in intensity. North is up and east is left. Further photometric and spectroscopic observations with the ESO VLT, performed by Klaus Beuermann, Frederic Hessman and Klaus Reinsch of the Göttingen group of the FORS instrument team (Germany), have revealed the character of some of the objects that are seen close to the image of the afterglow of GRB 990510 (also referred to as the "Optical Transient" - OT). Two objects to the North are cool foreground stars of spectral types dM0 and about dM3, respectively; they are located in our Milky Way Galaxy. The object just to the South of the OT is probably also a star. A V(isual)-band image (PR Photo 22g/99) taken during the night between May 17 and 18 with the VLT/ANTU telescope and FORS1 now shows the OT at magnitude V = 24.5, with still no evidence for the host galaxy that is expected to appear when the afterglow has faded sufficiently. Outlook The great distances (high redshifts) of Gamma-Ray Bursts, plus the fact that a 9th magnitude optical flash was seen when another GRB exploded on January 23 this year, has attracted the attention of astronomers outside the GRB field. In fact, GRBs may soon become a very powerful tool to probe the early universe by guiding us to regions of very early star formation and the (proto)-galaxies and (proto)-clusters of which they are part. They will also allow the study of the chemical composition of absorbing clouds at very large distances. 
At the end of this year, the NASA satellite HETE-II will be launched, which is expected to provide about 50 GRB alerts per year and, most importantly, accurate localisations in the sky that will allow very fast follow-up observations, while the optical counterparts are still quite bright. It will then be possible to obtain more spectra, also of extremely distant bursts, and many new distance determinations can be made, revealing the distribution of intrinsic brightness of GRB's (the "luminosity function"). Other types of observations (e.g. polarimetry, as above) will also profit, leading to a progressive refinement of the available data. Thus there is good hope that astronomers will soon come closer to identifying the progenitors of these enormous explosions and to understand what is really going on. In this process, the huge light-collecting power of the VLT and the many other facilities at the ESO observatories will undoubtedly play an important role. Notes [1] Gamma-Ray Bursts are brief flashes of high-energy radiation. Satellites in orbit around the Earth and spacecraft in interplanetary orbits have detected several thousand such events since they were first discovered in the late 1960s. Earlier investigations established that they were so evenly distributed in the sky that they must be very distant (and hence very powerful) outbursts of some kind. Only in 1997 it became possible to observe the fading "afterglow" of one of these explosions in visible light, thanks to accurate positions available from the BeppoSAX satellite. Soon thereafter, another optical afterglow was detected; it was located in a faint galaxy whose distance could be measured. In 1998, a gamma-ray burst was detected in a galaxy over 8,300 million light-years away. Even the most exotic ideas proposed for these explosions, e.g. supergiant stars collapsing to black holes, black holes merging with neutron stars or other black holes, and other weird and wonderful notions have trouble accounting for explosions with the power of 10,000 million million suns. [2] The various reports issued by astronomers working on this and other gamma-ray burst events are available as GCN Circulars on the GRB Coordinates Network web-noteboard. [3] See also the Press Release, issued by SAAO on this occasion. [4] In astronomy, the redshift (z) denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy or intergalactic cloud gives a direct estimate of the universal expansion (i.e. the "recession velocity"). The detailed relation between redshift and distance depends on such quantities as the Hubble Constant, the average density of the universe, and the 'cosmological' Constant. For a standard cosmological model, redshift z = 1.6 corresponds to a distance of about 7,000 million light-years. [5] Assuming a Hubble Constant H 0 = 70 km/s/Mpc, mean density Omega 0 = 0.3 and a Cosmological Constant Lambda = 0. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.

  6. Open source EMR software: profiling, insights and hands-on analysis.

    PubMed

    Kiah, M L M; Haiqi, Ahmed; Zaidan, B B; Zaidan, A A

    2014-11-01

    The use of open source software in health informatics is increasingly advocated by authors in the literature. Although there is no clear evidence of the superiority of the current open source applications in the healthcare field, the number of available open source applications online is growing and they are gaining greater prominence. This repertoire of open source options is of a great value for any future-planner interested in adopting an electronic medical/health record system, whether selecting an existent application or building a new one. The following questions arise. How do the available open source options compare to each other with respect to functionality, usability and security? Can an implementer of an open source application find sufficient support both as a user and as a developer, and to what extent? Does the available literature provide adequate answers to such questions? This review attempts to shed some light on these aspects. The objective of this study is to provide more comprehensive guidance from an implementer perspective toward the available alternatives of open source healthcare software, particularly in the field of electronic medical/health records. The design of this study is twofold. In the first part, we profile the published literature on a sample of existent and active open source software in the healthcare area. The purpose of this part is to provide a summary of the available guides and studies relative to the sampled systems, and to identify any gaps in the published literature with respect to our research questions. In the second part, we investigate those alternative systems relative to a set of metrics, by actually installing the software and reporting a hands-on experience of the installation process, usability, as well as other factors. The literature covers many aspects of open source software implementation and utilization in healthcare practice. Roughly, those aspects could be distilled into a basic taxonomy, making the literature landscape more perceivable. Nevertheless, the surveyed articles fall short of fulfilling the targeted objective of providing clear reference to potential implementers. The hands-on study contributed a more detailed comparative guide relative to our set of assessment measures. Overall, no system seems to satisfy an industry-standard measure, particularly in security and interoperability. The systems, as software applications, feel similar from a usability perspective and share a common set of functionality, though they vary considerably in community support and activity. More detailed analysis of popular open source software can benefit the potential implementers of electronic health/medical records systems. The number of examined systems and the measures by which to compare them vary across studies, but still rewarding insights start to emerge. Our work is one step toward that goal. Our overall conclusion is that open source options in the medical field are still far behind the highly acknowledged open source products in other domains, e.g. operating systems market share. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Getting Open Source Software into Schools: Strategies and Challenges

    ERIC Educational Resources Information Center

    Hepburn, Gary; Buley, Jan

    2006-01-01

    In this article Gary Hepburn and Jan Buley outline different approaches to implementing open source software (OSS) in schools; they also address the challenges that open source advocates should anticipate as they try to convince educational leaders to adopt OSS. With regard to OSS implementation, they note that schools have a flexible range of…

  8. Open Source Library Management Systems: A Multidimensional Evaluation

    ERIC Educational Resources Information Center

    Balnaves, Edmund

    2008-01-01

    Open source library management systems have improved steadily in the last five years. They now present a credible option for small to medium libraries and library networks. An approach to their evaluation is proposed that takes account of three additional dimensions that only open source can offer: the developer and support community, the source…

  9. Open Source as Appropriate Technology for Global Education

    ERIC Educational Resources Information Center

    Carmichael, Patrick; Honour, Leslie

    2002-01-01

    Economic arguments for the adoption of "open source" software in business have been widely discussed. In this paper we draw on personal experience in the UK, South Africa and Southeast Asia to forward compelling reasons why open source software should be considered as an appropriate and affordable alternative to the currently prevailing…

  10. Government Technology Acquisition Policy: The Case of Proprietary versus Open Source Software

    ERIC Educational Resources Information Center

    Hemphill, Thomas A.

    2005-01-01

    This article begins by explaining the concepts of proprietary and open source software technology, which are now competing in the marketplace. A review of recent individual and cooperative technology development and public policy advocacy efforts, by both proponents of open source software and advocates of proprietary software, subsequently…

  11. Open Source Communities in Technical Writing: Local Exigence, Global Extensibility

    ERIC Educational Resources Information Center

    Conner, Trey; Gresham, Morgan; McCracken, Jill

    2011-01-01

    By offering open-source software (OSS)-based networks as an affordable technology alternative, we partnered with a nonprofit community organization. In this article, we narrate the client-based experiences of this partnership, highlighting the ways in which OSS and open-source culture (OSC) transformed our students' and our own expectations of…

  12. Personal Electronic Devices and the ISR Data Explosion: The Impact of Cyber Cameras on the Intelligence Community

    DTIC Science & Technology

    2015-06-01

    Texas Tech Security Group, “Automated Open Source Intelligence (OSINT) Using APIs,” RaiderSec, 30 December 2012, http://raidersec.blogspot.com/2012/12/automated-open-source

  13. Open-Source Unionism: New Workers, New Strategies

    ERIC Educational Resources Information Center

    Schmid, Julie M.

    2004-01-01

    In "Open-Source Unionism: Beyond Exclusive Collective Bargaining," published in fall 2002 in the journal Working USA, labor scholars Richard B. Freeman and Joel Rogers use the term "open-source unionism" to describe a form of unionization that uses Web technology to organize in hard-to-unionize workplaces. Rather than depend on the traditional…

  14. Perceptions of Open Source versus Commercial Software: Is Higher Education Still on the Fence?

    ERIC Educational Resources Information Center

    van Rooij, Shahron Williams

    2007-01-01

    This exploratory study investigated the perceptions of technology and academic decision-makers about open source benefits and risks versus commercial software applications. The study also explored reactions to a concept for outsourcing campus-wide deployment and maintenance of open source. Data collected from telephone interviews were analyzed,…

  15. Open Source for Knowledge and Learning Management: Strategies beyond Tools

    ERIC Educational Resources Information Center

    Lytras, Miltiadis, Ed.; Naeve, Ambjorn, Ed.

    2007-01-01

    In the last years, knowledge and learning management have made a significant impact on the IT research community. "Open Source for Knowledge and Learning Management: Strategies Beyond Tools" presents learning and knowledge management from a point of view where the basic tools and applications are provided by open source technologies.…

  16. Open-Source Learning Management Systems: A Predictive Model for Higher Education

    ERIC Educational Resources Information Center

    van Rooij, S. Williams

    2012-01-01

    The present study investigated the role of pedagogical, technical, and institutional profile factors in an institution of higher education's decision to select an open-source learning management system (LMS). Drawing on the results of previous research that measured patterns of deployment of open-source software (OSS) in US higher education and…

  17. An Embedded Systems Course for Engineering Students Using Open-Source Platforms in Wireless Scenarios

    ERIC Educational Resources Information Center

    Rodriguez-Sanchez, M. C.; Torrado-Carvajal, Angel; Vaquero, Joaquin; Borromeo, Susana; Hernandez-Tamames, Juan A.

    2016-01-01

    This paper presents a case study analyzing the advantages and disadvantages of using project-based learning (PBL) combined with collaborative learning (CL) and industry best practices, integrated with information communication technologies, open-source software, and open-source hardware tools, in a specialized microcontroller and embedded systems…

  18. Technology collaboration by means of an open source government

    NASA Astrophysics Data System (ADS)

    Berardi, Steven M.

    2009-05-01

    The idea of open source software originally began in the early 1980s, but it never gained widespread support until recently, largely due to the explosive growth of the Internet. Only the Internet has made this kind of concept possible, bringing together millions of software developers from around the world to pool their knowledge. The tremendous success of open source software has prompted many corporations to adopt the culture of open source and thus share information they previously held secret. The government, and specifically the Department of Defense (DoD), could also benefit from adopting an open source culture. In acquiring satellite systems, the DoD often builds walls between program offices, but installing doors between programs can promote collaboration and information sharing. This paper addresses the challenges and consequences of adopting an open source culture to facilitate technology collaboration for DoD space acquisitions. DISCLAIMER: The views presented here are the views of the author, and do not represent the views of the United States Government, United States Air Force, or the Missile Defense Agency.

  19. Open source software integrated into data services of Japanese planetary explorations

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.

    2015-12-01

    Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS provides the data through simple methods such as HTTP directory listing for long-term preservation, while also offering rich web applications for ease of access, built with modern web technologies based on open source software. This presentation showcases the use of open source software across our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed from a Web Map Service (WMS); the open source software MapServer is adopted as the WMS server. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for SELENE's data; the main purpose of this application is public outreach, and it is developed with the NASA World Wind Java SDK. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations; it uses Highcharts to draw graphs in web browsers. FLOW is a tool to simulate the field of view of an instrument onboard a spacecraft. FLOW itself is open source software developed by JAXA/ISAS under the BSD 3-Clause License, and the SPICE Toolkit is essential to compile it. The SPICE Toolkit is also open source software, developed by NASA/JPL, and its website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool for integrating DARTS services.
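
    As a small illustration of the WMS interface mentioned above (an OpenLayers client talking to a MapServer back end), the sketch below issues a standard WMS 1.1.1 GetMap request with the Python requests library. The endpoint URL and layer name are placeholders, not the actual KADIAS service.

        # Minimal WMS GetMap request, illustrating the kind of interface KADIAS
        # uses between its OpenLayers front end and MapServer back end.
        # The endpoint URL and layer name are placeholders, not the real service.
        import requests

        params = {
            "SERVICE": "WMS",
            "VERSION": "1.1.1",
            "REQUEST": "GetMap",
            "LAYERS": "moon_basemap",          # hypothetical layer name
            "SRS": "EPSG:4326",
            "BBOX": "-180,-90,180,90",
            "WIDTH": 1024,
            "HEIGHT": 512,
            "FORMAT": "image/jpeg",
        }

        response = requests.get("https://example.org/wms", params=params, timeout=30)
        response.raise_for_status()

        with open("moon_map.jpg", "wb") as f:
            f.write(response.content)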

  20. Embracing Open Source for NASA's Earth Science Data Systems

    NASA Technical Reports Server (NTRS)

    Baynes, Katie; Pilone, Dan; Boller, Ryan; Meyer, David; Murphy, Kevin

    2017-01-01

    The overarching purpose of NASA's Earth Science program is to develop a scientific understanding of Earth as a system. Scientific knowledge is most robust and actionable when resulting from transparent, traceable, and reproducible methods. Reproducibility includes open access to the data as well as the software used to arrive at results. Additionally, software that is custom-developed for NASA should be open to the greatest degree possible, to enable re-use across Federal agencies, reduce overall costs to the government, remove barriers to innovation, and promote consistency through the use of uniform standards. Finally, Open Source Software (OSS) practices facilitate collaboration between agencies and the private sector. To best meet these ends, NASA's Earth Science Division promotes the full and open sharing of not only all data, metadata, products, information, documentation, models, images, and research results but also the source code used to generate, manipulate and analyze them. This talk focuses on the challenges to open sourcing NASA-developed software within ESD and the growing pains associated with establishing policies running the gamut of tracking issues, properly documenting build processes, engaging the open source community, maintaining internal compliance, and accepting contributions from external sources. This talk also covers the adoption of existing open source technologies and standards to enhance our custom solutions and our contributions back to the community. Finally, we will be introducing the most recent OSS contributions from NASA's Earth Science program and promoting these projects for wider community review and adoption.

  1. Open source Modeling and optimization tools for Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peles, S.

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in the state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  2. Limitations of Phased Array Beamforming in Open Rotor Noise Source Imaging

    NASA Technical Reports Server (NTRS)

    Horvath, Csaba; Envia, Edmane; Podboy, Gary G.

    2013-01-01

    Phased array beamforming results of the F31/A31 historical baseline counter-rotating open rotor blade set were investigated for measurement data taken on the NASA Counter-Rotating Open Rotor Propulsion Rig in the 9- by 15-Foot Low-Speed Wind Tunnel of NASA Glenn Research Center as well as data produced using the LINPROP open rotor tone noise code. The planar microphone array was positioned broadside and parallel to the axis of the open rotor, roughly 2.3 rotor diameters away. The results provide insight as to why the apparent noise sources of the blade passing frequency tones and interaction tones appear at their nominal Mach radii instead of at the actual noise sources, even if those locations are not on the blades. Contour maps corresponding to the sound fields produced by the radiating sound waves, taken from the simulations, are used to illustrate how the interaction patterns of circumferential spinning modes of rotating coherent noise sources interact with the phased array, often giving misleading results, as the apparent sources do not always show where the actual noise sources are located. This suggests that a more sophisticated source model would be required to accurately locate the sources of each tone. The results of this study also have implications with regard to the shielding of open rotor sources by airframe empennages.

  3. High-resolution seismic-reflection data offshore of Dana Point, southern California borderland

    USGS Publications Warehouse

    Sliter, Ray W.; Ryan, Holly F.; Triezenberg, Peter J.

    2010-01-01

    The U.S. Geological Survey collected high-resolution shallow seismic-reflection profiles in September 2006 in the offshore area between Dana Point and San Mateo Point in southern Orange and northern San Diego Counties, California. Reflection profiles were located to image folds and reverse faults associated with the San Mateo fault zone and high-angle strike-slip faults near the shelf break (the Newport-Inglewood fault zone) and at the base of the slope. Interpretations of these data were used to update the USGS Quaternary fault database and in shaking hazard models for the State of California developed by the Working Group for California Earthquake Probabilities. This cruise was funded by the U.S. Geological Survey Coastal and Marine Catastrophic Hazards project. Seismic-reflection data were acquired aboard the R/V Sea Explorer, which is operated by the Ocean Institute at Dana Point. A SIG ELC820 minisparker seismic source and a SIG single-channel streamer were used. More than 420 km of seismic-reflection data were collected. This report includes maps of the seismic-survey sections, linked to Google Earth software, and digital data files showing images of each transect in SEG-Y, JPEG, and TIFF formats.

  4. Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform

    NASA Astrophysics Data System (ADS)

    Liu, H. S.; Liao, H. M.

    2015-08-01

    A direct geo-referencing system uses remote sensing technology to quickly capture images, GPS tracks, and camera position. These data allow the construction of large volumes of images with geographic coordinates, so that users can make measurements directly on the images. In order to calculate position properly, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate image, coordinates, and camera position; however, it is very expensive, and users cannot use the result immediately because the position information is not embedded in the image. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, we can calculate position with the open source software OpenCV. In the end, we use the open source panorama browser Panini and integrate everything into the open source GIS software Quantum GIS, so that a complete data collection and data processing system can be constructed.
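
    A key step in such a system is turning the GPS stream coming from the Arduino into coordinates that can be attached to each image. The sketch below parses a single NMEA GGA sentence into decimal-degree latitude and longitude; the sentence shown is a generic example, and the pairing of fixes with image timestamps is left out.

        # Parse one NMEA "GGA" sentence (as produced by a GPS module attached to
        # an Arduino) into decimal-degree coordinates that could be attached to
        # an image. The sentence below is a generic example, not project data.
        def dm_to_decimal(value: str, hemisphere: str) -> float:
            """Convert NMEA ddmm.mmmm / dddmm.mmmm format to signed decimal degrees."""
            dot = value.index(".")
            degrees = float(value[:dot - 2])
            minutes = float(value[dot - 2:])
            decimal = degrees + minutes / 60.0
            return -decimal if hemisphere in ("S", "W") else decimal

        def parse_gga(sentence: str):
            fields = sentence.split(",")
            lat = dm_to_decimal(fields[2], fields[3])
            lon = dm_to_decimal(fields[4], fields[5])
            altitude_m = float(fields[9])
            return lat, lon, altitude_m

        example = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
        print(parse_gga(example))   # -> approximately (48.1173, 11.5167, 545.4)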

  5. Development of an Open Source, Air-Deployable Weather Station

    NASA Astrophysics Data System (ADS)

    Krejci, A.; Lopez Alcala, J. M.; Nelke, M.; Wagner, J.; Udell, C.; Higgins, C. W.; Selker, J. S.

    2017-12-01

    We created a packaged weather station intended to be deployed in the air on tethered systems. The device incorporates lightweight sensors and parts and runs for up to 24 hours on lithium polymer batteries, allowing the entire package to be supported by a thin fiber. As the fiber does not provide a stable platform, pitch and roll are determined with an embedded inertial measurement unit in addition to the typical weather parameters (e.g., temperature, pressure, humidity, wind speed, and wind direction). All designs are open source, including electronics, CAD drawings, and descriptions of assembly, and can be found on the OPEnS lab website at http://www.open-sensing.org/lowcost-weather-station/. The Openly Published Environmental Sensing Lab (OPEnS: Open-Sensing.org) expands the possibilities of scientific observation of our Earth, transforming the technology, methods, and culture by combining open-source development and cutting-edge technology. New OPEnS labs are now being established in India, France, Switzerland, the Netherlands, and Ghana.
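
    The abstract notes that pitch and roll are derived from an embedded inertial measurement unit. One common way to do this for a slowly moving platform is from the accelerometer's gravity vector, as sketched below; the formula is the standard static-tilt approximation, not necessarily the filtering used on the actual device.

        # Static-tilt estimate of pitch and roll from a 3-axis accelerometer
        # reading, a common first approximation for a slowly moving tethered
        # platform. Generic formula, not necessarily what runs on the device.
        import math

        def pitch_roll_deg(ax: float, ay: float, az: float):
            """Return (pitch, roll) in degrees from accelerometer components in g."""
            pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
            roll = math.degrees(math.atan2(ay, az))
            return pitch, roll

        # Example reading: a small tilt on both axes.
        print(pitch_roll_deg(-0.10, 0.05, 0.99))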

  6. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming...video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to...DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains
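
    The snippet above describes capturing video frames and comparing them with OpenCV image-analysis tools to locate shots on a target. A minimal, hedged sketch of that idea (frame differencing against a reference frame) is shown below in Python; the video path and threshold are placeholders, and the software described above was written in C++.

        # Minimal illustration of locating a new impact by differencing the
        # current frame against a reference frame of the target. The video path
        # and threshold are placeholders; this is not the DWB implementation.
        import cv2

        cap = cv2.VideoCapture("target_video.avi")       # hypothetical input
        ok, reference = cap.read()                       # frame before the shot
        if not ok:
            raise SystemExit("could not read video")
        reference_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            diff = cv2.absdiff(gray, reference_gray)
            _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for contour in contours:
                moments = cv2.moments(contour)
                if moments["m00"] > 0:
                    cx = moments["m10"] / moments["m00"]
                    cy = moments["m01"] / moments["m00"]
                    print(f"candidate impact at pixel ({cx:.0f}, {cy:.0f})")

        cap.release()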

  7. Journal of Chemical Education on CD-ROM, 1999

    NASA Astrophysics Data System (ADS)

    1999-12-01

    The Journal of Chemical Education on CD-ROM contains the text and graphics for all the articles, features, and reviews published in the Journal of Chemical Education. This 1999 issue of the JCE CD series includes all twelve issues of 1999, as well as all twelve issues from 1998 and from 1997, and the September-December issues from 1996. Journal of Chemical Education on CD-ROM is formatted so that all articles on the CD retain as much as possible of their original appearance. Each article file begins with an abstract/keyword page followed by the article pages. All pages of the Journal that contain editorial content, including the front covers, table of contents, letters, and reviews, are included. Also included are abstracts (when available), keywords for all articles, and supplementary materials. The Journal of Chemical Education on CD-ROM has proven to be a useful tool for chemical educators. Like the Computerized Index to the Journal of Chemical Education (1) it will help you to locate articles on a particular topic or written by a particular author. In addition, having the complete article on the CD-ROM provides added convenience. It is no longer necessary to go to the library, locate the Journal issue, and read it while sitting in an uncomfortable chair. With a few clicks of the mouse, you can scan an article on your computer monitor, print it if it proves interesting, and read it in any setting you choose. Searching and Linking JCE CD is fully searchable for any word, partial word, or phrase. Successful searches produce a listing of articles that contain the requested text. Individual articles can be quickly accessed from this list. The Table of Contents of each issue is linked to individual articles listed. There are also links from the articles to any supplementary materials. References in the Chemical Education Today section (found in the front of each issue) to articles elsewhere in the issue are also linked to the article, as are WWW addresses and email addresses. If you have Internet access and a WWW browser and email utility, you can go directly to the Web site or prepare to send a message with a single mouse click. Full-text searching of the entire CD enables you to find the articles you want. Price and Ordering An order form is inserted in this issue that provides prices and other ordering information. If this insert is not available or if you need additional information, contact: JCE Software, University of Wisconsin-Madison, 1101 University Avenue, Madison, WI 53706-1396; phone: 608/262-5153 or 800/991-5534; fax: 608/265-8094; email: jcesoft@chem.wisc.edu. Information about all our publications (including abstracts, descriptions, updates) is available from our World Wide Web site at: http://jchemed.chem.wisc.edu/JCESoft/. Hardware and Software Requirements Hardware and software requirements for JCE CD 1999 are listed in the table below: Literature Cited 1. Schatz, P. F. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-M. Schatz, P. F.; Jacobsen, J. J. Computerized Index, Journal of Chemical Education; J. Chem. Educ. Software 1993, SP 5-W.

  8. Open source electronic health records and chronic disease management.

    PubMed

    Goldwater, Jason C; Kwon, Nancy J; Nathanson, Ashley; Muckle, Alison E; Brown, Alexa; Cornejo, Kerri

    2014-02-01

    To study and report on the use of open source electronic health records (EHR) to assist with chronic care management within safety net medical settings, such as community health centers (CHC). The study was conducted by NORC at the University of Chicago from April to September 2010. The NORC team undertook a comprehensive environmental scan, including a literature review, a dozen key informant interviews using a semistructured protocol, and a series of site visits to CHCs that currently use an open source EHR. Two of the sites chosen by NORC were actively using an open source EHR to assist in the redesign of their care delivery system to support more effective chronic disease management. This included incorporating the chronic care model into a CHC and using the EHR to help facilitate its elements, such as care teams for patients, in addition to maintaining health records on indigent populations, such as tuberculosis status on homeless patients. The ability to modify the open-source EHR to adapt to the CHC environment and to leverage the ecosystem of providers and users to assist in this process provided significant advantages in chronic care management. Improvements in diabetes management and hypertension control, as well as increases in tuberculosis vaccinations, were supported by the use of these open source systems. The flexibility and adaptability of open source EHRs demonstrated their utility and viability in the provision of necessary chronic disease care among populations served by CHCs.

  9. What an open source clinical trial community can learn from hackers

    PubMed Central

    Dunn, Adam G.; Day, Richard O.; Mandl, Kenneth D.; Coiera, Enrico

    2014-01-01

    Summary Open sharing of clinical trial data has been proposed as a way to address the gap between the production of clinical evidence and the decision-making of physicians. Since a similar gap has already been addressed in the software industry by the open source software movement, we examine how the social and technical principles of the movement can be used to guide the growth of an open source clinical trial community. PMID:22553248

  10. An Evaluation of Open Source Learning Management Systems According to Administration Tools and Curriculum Design

    ERIC Educational Resources Information Center

    Ozdamli, Fezile

    2007-01-01

    Distance education is becoming more important in the universities and schools. The aim of this research is to evaluate the current existing Open Source Learning Management Systems according to Administration tool and Curriculum Design. For this, seventy two Open Source Learning Management Systems have been subjected to a general evaluation. After…

  11. Evaluating Open Source Software for Use in Library Initiatives: A Case Study Involving Electronic Publishing

    ERIC Educational Resources Information Center

    Samuels, Ruth Gallegos; Griffy, Henry

    2012-01-01

    This article discusses best practices for evaluating open source software for use in library projects, based on the authors' experience evaluating electronic publishing solutions. First, it presents a brief review of the literature, emphasizing the need to evaluate open source solutions carefully in order to minimize Total Cost of Ownership. Next,…

  12. A Requirements-Based Exploration of Open-Source Software Development Projects--Towards a Natural Language Processing Software Analysis Framework

    ERIC Educational Resources Information Center

    Vlas, Radu Eduard

    2012-01-01

    Open source projects do have requirements; they are, however, mostly informal, text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects,…

  13. Open Source Meets Virtual Reality--An Instructor's Journey Unearths New Opportunities for Learning, Community, and Academia

    ERIC Educational Resources Information Center

    O'Connor, Eileen A.

    2015-01-01

    Opening with the history, recent advances, and emerging ways to use avatar-based virtual reality, an instructor who has used virtual environments since 2007 shares how these environments bring more options to community building, teaching, and education. With the open-source movement, where the source code for virtual environments was made…

  14. The Implications of Incumbent Intellectual Property Strategies for Open Source Software Success and Commercialization

    ERIC Educational Resources Information Center

    Wen, Wen

    2012-01-01

    While open source software (OSS) emphasizes open access to the source code and avoids the use of formal appropriability mechanisms, there has been little understanding of how the existence and exercise of formal intellectual property rights (IPR) such as patents influence the direction of OSS innovation. This dissertation seeks to bridge this gap…

  15. Migrations of the Mind: The Emergence of Open Source Education

    ERIC Educational Resources Information Center

    Glassman, Michael; Bartholomew, Mitchell; Jones, Travis

    2011-01-01

    The authors describe an Open Source approach to education. They define Open Source Education (OSE) as a teaching and learning framework where the use and presentation of information is non-hierarchical, malleable, and subject to the needs and contributions of students as they become "co-owners" of the course. The course transforms itself into an…

  16. Prepare for Impact

    ERIC Educational Resources Information Center

    Waters, John K.

    2010-01-01

    Open source software is poised to make a profound impact on K-12 education. For years industry experts have been predicting the widespread adoption of open source tools by K-12 school districts. They're about to be proved right. The impact may not yet have been profound, but it's fair to say that some open source systems and non-proprietary…

  17. 7 Questions to Ask Open Source Vendors

    ERIC Educational Resources Information Center

    Raths, David

    2012-01-01

    With their budgets under increasing pressure, many campus IT directors are considering open source projects for the first time. On the face of it, the savings can be significant. Commercial emergency-planning software can cost upward of six figures, for example, whereas the open source Kuali Ready might run as little as $15,000 per year when…

  18. Cognitive Readiness Assessment and Reporting: An Open Source Mobile Framework for Operational Decision Support and Performance Improvement

    ERIC Educational Resources Information Center

    Heric, Matthew; Carter, Jenn

    2011-01-01

    Cognitive readiness (CR) and performance for operational time-critical environments are continuing points of focus for military and academic communities. In response to this need, we designed an open source interactive CR assessment application as a highly adaptive and efficient open source testing administration and analysis tool. It is capable…

  19. Finland to Join ESO

    NASA Astrophysics Data System (ADS)

    2004-02-01

    Finland will become the eleventh member state of the European Southern Observatory (ESO) [1]. Today, during a ceremony at the ESO Headquarters in Garching (Germany), a corresponding Agreement was signed by the Finnish Minister of Education and Science, Ms. Tuula Haatainen and the ESO Director General, Dr. Catherine Cesarsky, in the presence of other high officials from Finland and the ESO member states (see Video Clip 02/04 below). Following subsequent ratification by the Finnish Parliament of the ESO Convention and the associated protocols [2], it is foreseen that Finland will formally join ESO on July 1, 2004. Uniting European Astronomy ESO PR Photo 03/04 ESO PR Photo 03/04 Caption : Signing of the Finland-ESO Agreement on February 9, 2004, at the ESO Headquarters in Garching (Germany). At the table, the ESO Director General, Dr. Catherine Cesarsky, and the Finnish Minister of Education and Science, Ms. Tuula Haatainen . [Preview - JPEG: 400 x 499 pix - 52k] [Normal - JPEG: 800 x 997 pix - 720k] [Full Res - JPEG: 2126 x 2649 pix - 2.9M] The Finnish Minister of Education and Science, Ms. Tuula Haatainen, began her speech with these words: "On behalf of Finland, I am happy and proud that we are now joining the European Southern Observatory, one of the most successful megaprojects of European science. ESO is an excellent example of the potential of European cooperation in science, and along with the ALMA project, more and more of global cooperation as well." She also mentioned that besides science ESO offers many technological challenges and opportunities. And she added: "In Finland we will try to promote also technological and industrial cooperation with ESO, and we hope that the ESO side will help us to create good working relations. I am confident that Finland's membership in ESO will be beneficial to both sides." Dr. Catherine Cesarsky, ESO Director General, warmly welcomed the Finnish intention to join ESO. "With the accession of their country to ESO, Finnish astronomers, renowned for their expertise in many frontline areas, will have new, exciting opportunities for working on research programmes at the frontiers of modern astrophysics." "This is indeed the right time to join ESO", she added. "The four 8.2-m VLT Unit Telescopes with their many first-class instruments are working with unsurpassed efficiency at Paranal, probing the near and distant Universe and providing European astronomers with a goldmine of unique astronomical data. The implementation of the VLT Interferometer is progressing well and last year we entered into the construction phase of the intercontinental millimetre- and submillimetre-band Atacama Large Millimeter Array. And the continued design studies for gigantic optical/infrared telescopes like OWL are progressing fast. Wonderful horizons are indeed opening for the coming generations of European astronomers!" She was seconded by the President of the ESO Council, Professor Piet van der Kruit, "This is a most important step in the continuing evolution of ESO. By having Finland become a member of ESO, we welcome a country that has put in place a highly efficient and competitive innovation system with one of the fastest growths of research investment in the EU area. I have no doubt that the Finnish astronomers will not only make the best scientific use of ESO facilities but that they will also greatly contribute through their high quality R&D to technological developments which will benefit the whole ESO community. 
" Notes [1]: Current ESO member countries are Belgium, Denmark, France, Germany, Italy, the Netherlands, Portugal, Sweden, Switzerland and the United Kindgdom. [2]: The ESO Convention was established in 1962 and specifies the goals of ESO and the means to achieve these, e.g., "The Governments of the States parties to this convention... desirous of jointly creating an observatory equipped with powerful instruments in the Southern hemisphere and accordingly promoting and organizing co-operation in astronomical research..." (from the Preamble to the ESO Convention).

  20. Open source IPSEC software in manned and unmanned space missions

    NASA Astrophysics Data System (ADS)

    Edwards, Jacob

    Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost-efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth, high-latency communications links to study how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. The tested open source IPsec software did not meet all the requirements; software changes were suggested to meet them.

  1. Upon the Shoulders of Giants: Open-Source Hardware and Software in Analytical Chemistry.

    PubMed

    Dryden, Michael D M; Fobel, Ryan; Fobel, Christian; Wheeler, Aaron R

    2017-04-18

    Isaac Newton famously observed that "if I have seen further it is by standing on the shoulders of giants." We propose that this sentiment is a powerful motivation for the "open-source" movement in scientific research, in which creators provide everything needed to replicate a given project online, as well as providing explicit permission for users to use, improve, and share it with others. Here, we write to introduce analytical chemists who are new to the open-source movement to best practices and concepts in this area and to survey the state of open-source research in analytical chemistry. We conclude by considering two examples of open-source projects from our own research group, with the hope that a description of the process, motivations, and results will provide a convincing argument about the benefits that this movement brings to both creators and users.

  2. Open-Source 3-D Platform for Low-Cost Scientific Instrument Ecosystem.

    PubMed

    Zhang, C; Wijnen, B; Pearce, J M

    2016-08-01

    The combination of open-source software and hardware provides technically feasible methods to create low-cost, highly customized scientific research equipment. Open-source 3-D printers have proven useful for fabricating scientific tools. Here the capabilities of an open-source 3-D printer are expanded to become a highly flexible scientific platform. An automated low-cost 3-D motion control platform is presented that has the capacity to perform scientific applications, including (1) 3-D printing of scientific hardware; (2) laboratory auto-stirring, measuring, and probing; (3) automated fluid handling; and (4) shaking and mixing. The open-source 3-D platform not only facilitates routine research while radically reducing its cost, but also inspires the creation of a diverse array of custom instruments that can be shared and replicated digitally throughout the world to drive down the cost of research and education further. © 2016 Society for Laboratory Automation and Screening.
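
    The paper's own control software is not included in this record. Since the platform is derived from an open-source 3-D printer, one plausible way to drive it is to stream standard RepRap-style G-code over a serial link; the sketch below does this with pyserial, and the port name, coordinates and feed rates are illustrative assumptions rather than values from the paper.

        import time
        import serial  # pyserial

        PORT, BAUD = "/dev/ttyUSB0", 115200   # hypothetical controller port and speed

        def send_gcode(ser, line):
            """Send one G-code command and block until the firmware answers 'ok'."""
            ser.write((line.strip() + "\n").encode("ascii"))
            while True:
                reply = ser.readline().decode("ascii", errors="ignore").strip()
                if reply.startswith("ok"):
                    return reply

        with serial.Serial(PORT, BAUD, timeout=10) as ser:
            time.sleep(2)                          # allow the controller to reset
            send_gcode(ser, "G28")                 # home all axes
            send_gcode(ser, "G1 X40 Y40 F3000")    # position the tool over a sample
            send_gcode(ser, "G1 Z5 F600")          # lower it to a working height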

  3. OpenSesame: an open-source, graphical experiment builder for the social sciences.

    PubMed

    Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan

    2012-06-01

    In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.

  4. GTOOLS: an Interactive Computer Program to Process Gravity Data for High-Resolution Applications

    NASA Astrophysics Data System (ADS)

    Battaglia, M.; Poland, M. P.; Kauahikaua, J. P.

    2012-12-01

    An interactive computer program, GTOOLS, has been developed to process gravity data acquired by the Scintrex CG-5 and LaCoste & Romberg EG, G and D gravity meters. The aim of GTOOLS is to provide a validated methodology for computing relative gravity values in a consistent way accounting for as many environmental factors as possible (e.g., tides, ocean loading, solar constraints, etc.), as well as instrument drift. The program has a modular architecture. Each processing step is implemented in a tool (function) that can be either run independently or within an automated task. The tools allow the user to (a) read the gravity data acquired during field surveys completed using different types of gravity meters; (b) compute Earth tides using an improved version of Longman's (1959) model; (c) compute ocean loading using the HARDISP code by Petit and Luzum (2010) and ocean loading harmonics from the TPXO7.2 ocean tide model; (d) estimate the instrument drift using linear functions as appropriate; and (e) compute the weighted least-square-adjusted gravity values and their errors. The corrections are performed up to microGal ( μGal) precision, in accordance with the specifications of high-resolution surveys. The program has the ability to incorporate calibration factors that allow for surveys done using different gravimeters to be compared. Two additional tools (functions) allow the user to (1) estimate the instrument calibration factor by processing data collected by a gravimeter on a calibration range; (2) plot gravity time-series at a chosen benchmark. The interactive procedures and the program output (jpeg plots and text files) have been designed to ease data handling and archiving, to provide useful information for future data interpretation or modeling, and facilitate comparison of gravity surveys conducted at different times. All formulas have been checked for typographical errors in the original reference. GTOOLS, developed using Matlab, is open source and machine independent. We will demonstrate program use and utility with data from multiple microgravity surveys at Kilauea volcano, Hawai'i.
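
    GTOOLS itself is a Matlab package and its source is not reproduced in this record. Purely as an illustration of two of the processing steps named above, the NumPy sketch below fits a linear instrument drift to repeated base-station readings and then performs a weighted least-squares adjustment of drift-corrected gravity differences; the station values, uncertainties and network layout are invented for the example.

        import numpy as np

        # --- 1. Linear drift from repeated readings at the base station ------------
        # (time in hours, reading in mGal); values are purely illustrative.
        t_base = np.array([0.0, 2.5, 5.0])
        g_base = np.array([0.000, 0.012, 0.026])
        drift_rate, _ = np.polyfit(t_base, g_base, 1)   # mGal per hour

        # --- 2. Weighted least-squares adjustment of drift-corrected differences ---
        # Observed differences g(to) - g(from) between stations 0 (base, fixed), 1, 2.
        obs   = np.array([0.310, 0.185, -0.128])   # mGal, already drift-corrected
        pairs = [(0, 1), (0, 2), (1, 2)]            # (from, to) station indices
        sigma = np.array([0.005, 0.005, 0.007])     # observation std. dev. in mGal

        # Design matrix for the unknowns g1, g2 (g0 is held fixed at 0).
        A = np.zeros((len(pairs), 2))
        for k, (i, j) in enumerate(pairs):
            if j > 0:
                A[k, j - 1] += 1.0
            if i > 0:
                A[k, i - 1] -= 1.0

        W = np.diag(1.0 / sigma**2)
        N = A.T @ W @ A
        g_hat = np.linalg.solve(N, A.T @ W @ obs)   # adjusted station values
        cov = np.linalg.inv(N)                      # and their covariance
        print(g_hat, np.sqrt(np.diag(cov)))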

  5. The Privacy and Security Implications of Open Data in Healthcare.

    PubMed

    Kobayashi, Shinji; Kane, Thomas B; Paton, Chris

    2018-04-22

    The International Medical Informatics Association (IMIA) Open Source Working Group (OSWG) initiated a group discussion to discuss current privacy and security issues in the open data movement in the healthcare domain from the perspective of the OSWG membership. Working group members independently reviewed the recent academic and grey literature and sampled a number of current large-scale open data projects to inform the working group discussion. This paper presents an overview of open data repositories and a series of short case reports to highlight relevant issues present in the recent literature concerning the adoption of open approaches to sharing healthcare datasets. Important themes that emerged included data standardisation, the inter-connected nature of the open source and open data movements, and how publishing open data can impact on the ethics, security, and privacy of informatics projects. The open data and open source movements in healthcare share many common philosophies and approaches, including developing international collaborations across multiple organisations and domains of expertise. Both movements aim to reduce the costs of advancing scientific research and improving healthcare provision for people around the world by adopting open intellectual property licence agreements and codes of practice. Implications of the increased adoption of open data in healthcare include the need to balance the security and privacy challenges of opening data sources with the potential benefits of open data for improving research and healthcare delivery.

  6. Recent Advances in Compressed Sensing: Discrete Uncertainty Principles and Fast Hyperspectral Imaging

    DTIC Science & Technology

    2015-03-26

    Fourier Analysis and Applications, vol. 14, pp. 838–858, 2008. 11. D. J. Cooke, “A discrete X-ray transform for chromotomographic hyperspectral imaging ... medical imaging, e.g., magnetic resonance imaging (MRI). Since the early 1980s, MRI has granted doctors the ability to distinguish between healthy tissue...i.e., at most K entries of x are nonzero. In many settings, this is a valid signal model; for example, JPEG2000 exploits the fact that natural images
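
    The excerpt above is a fragmented search snippet; the sparse signal model it alludes to can be stated compactly. In the standard compressed sensing formulation (a textbook statement, not one quoted from the report), a length-n signal x is K-sparse when at most K of its entries are nonzero, and one attempts to recover it from m << n linear measurements via the convex relaxation known as basis pursuit:

        \|x\|_{0} \le K, \qquad
        y = \Phi x \in \mathbb{R}^{m}, \quad m \ll n, \qquad
        \hat{x} = \arg\min_{z \in \mathbb{R}^{n}} \|z\|_{1}
        \ \text{subject to}\ \Phi z = y .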

  7. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
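
    The study's imagery and classifiers are not available in this record. The sketch below, using only Pillow and NumPy, illustrates the general procedure the abstract describes: re-encode an image at several JPEG quality settings, classify each version with a simple minimum-distance classifier, and tabulate the percentage of pixels whose label differs from the classification of the uncompressed original; the input file name and class means are placeholders.

        import io
        import numpy as np
        from PIL import Image

        def min_distance_classify(img, class_means):
            """Assign each pixel to the nearest class mean (Euclidean distance)."""
            px = np.asarray(img, dtype=float).reshape(-1, 3)
            d = np.linalg.norm(px[:, None, :] - class_means[None, :, :], axis=2)
            return d.argmin(axis=1)

        def recompress(img, quality):
            """Round-trip an image through JPEG at the given quality setting."""
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=quality)
            buf.seek(0)
            return Image.open(buf).convert("RGB")

        # Illustrative class means (e.g. water, vegetation, soil) in RGB space.
        means = np.array([[30.0, 60.0, 120.0], [40.0, 120.0, 50.0], [150.0, 120.0, 90.0]])

        original = Image.open("scene.jpg").convert("RGB")   # hypothetical input file
        baseline = min_distance_classify(original, means)

        for q in (95, 75, 50, 25, 10):
            labels = min_distance_classify(recompress(original, q), means)
            changed = 100.0 * np.mean(labels != baseline)
            print(f"quality {q:3d}: {changed:5.2f}% of pixels change class")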

  8. Simulation of partially coherent light propagation using parallel computing devices

    NASA Astrophysics Data System (ADS)

    Magalhães, Tiago C.; Rebordão, José M.

    2017-08-01

    Light acquires or loses coherence as it propagates, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence), the coherence of light is always partial and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict the propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e., using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g., 32^4, 64^4), and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
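
    The authors' PyOpenCL kernels are not part of this record. As a plain NumPy reference for one of the test cases mentioned above, the sketch below builds the cross-spectral density of a one-dimensional Gaussian Schell-model source, W(x1, x2) = sqrt(S(x1) S(x2)) exp(-(x1 - x2)^2 / (2 sigma_mu^2)); the grid size and source parameters are illustrative, not taken from the paper.

        import numpy as np

        # Illustrative grid and Gaussian Schell-model parameters (metres).
        n        = 256        # samples across the source plane
        width    = 2e-3       # physical extent of the grid
        sigma_s  = 2.5e-4     # r.m.s. width of the spectral density
        sigma_mu = 1.0e-4     # r.m.s. width of the degree of coherence

        x = np.linspace(-width / 2, width / 2, n)
        x1, x2 = np.meshgrid(x, x, indexing="ij")

        # Spectral density S(x) and cross-spectral density W(x1, x2) of the source.
        S = np.exp(-x**2 / (2 * sigma_s**2))
        W = np.sqrt(np.outer(S, S)) * np.exp(-(x1 - x2) ** 2 / (2 * sigma_mu**2))

        # Spectral degree of coherence recovered from W as a sanity check.
        mu = W / np.sqrt(np.outer(S, S))
        assert np.isclose(mu[n // 2, n // 2], 1.0)  # full self-coherence on the diagonal
        print("W shape:", W.shape, "peak:", W.max())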

  9. An Open Source Simulation System

    NASA Technical Reports Server (NTRS)

    Slack, Thomas

    2005-01-01

    An investigation into the current state of the art of open source real-time programming practices. This document covers which technologies are available, how easy it is to obtain, configure, and use them, and some performance measures made on the different systems. A matrix of vendors and their products is included as part of this investigation, but this is not an exhaustive list and represents only a snapshot in time of a field that is changing rapidly. Specifically, three approaches are investigated: 1. Completely open source on generic hardware, downloaded from the net. 2. Open source packaged by a vendor and provided as a free evaluation copy. 3. Proprietary hardware with pre-loaded, source-available proprietary software provided by the vendor for our evaluation.

  10. A clinic compatible, open source electrophysiology system.

    PubMed

    Hermiz, John; Rogers, Nick; Kaestner, Erik; Ganji, Mehran; Cleary, Dan; Snider, Joseph; Barba, David; Dayeh, Shadi; Halgren, Eric; Gilja, Vikash

    2016-08-01

    Open source electrophysiology (ephys) recording systems have several advantages over commercial systems, such as customization and affordability, enabling more researchers to conduct ephys experiments. Notable open source ephys systems include Open-Ephys, NeuroRighter and, more recently, Willow, all of which offer high channel counts (64+), scalability, and advanced software to develop on top of. However, little work has been done to build an open source ephys system that is clinic compatible, particularly in the operating room where acute human electrocorticography (ECoG) research is performed. We developed an affordable (< $10,000) and open system for research purposes that features power isolation for patient safety, compact and water-resistant enclosures, and 256 recording channels sampled at up to 20 ksamples/sec with 16-bit resolution. The system was validated by recording ECoG with a high-density, thin-film device for an acute, awake craniotomy study in the Thornton Hospital Operating Room at UC San Diego.

  11. Freeing Worldview's development process: Open source everything!

    NASA Astrophysics Data System (ADS)

    Gunnoe, T.

    2016-12-01

    Freeing your code and your project are important steps for creating an inviting environment for collaboration, with the added side effect of keeping a good relationship with your users. NASA Worldview's codebase was released with the open source NOSA (NASA Open Source Agreement) license in 2014, but this is only the first step. We also have to free our ideas, empower our users by involving them in the development process, and open channels that lead to the creation of a community project. There are many highly successful examples of Free and Open Source Software (FOSS) projects of which we can take note: the Linux kernel, Debian, GNOME, etc. These projects owe much of their success to having a passionate mix of developers/users with a great community and a common goal in mind. This presentation will describe the scope of this openness and how Worldview plans to move forward with a more community-inclusive approach.

  12. OpenFLUID: an open-source software environment for modelling fluxes in landscapes

    NASA Astrophysics Data System (ADS)

    Fabre, Jean-Christophe; Rabotin, Michaël; Crevoisier, David; Libres, Aline; Dagès, Cécile; Moussa, Roger; Lagacherie, Philippe; Raclot, Damien; Voltz, Marc

    2013-04-01

    Integrative landscape functioning has become a common concept in environmental management. Landscapes are complex systems where many processes interact in time and space. In agro-ecosystems, these processes are mainly physical processes, including hydrological processes, biological processes and human activities. Modelling such systems requires an interdisciplinary approach, coupling models coming from different disciplines, developed by different teams. In order to support collaborative work, involving many models coupled in time and space for integrative simulations, an open software modelling platform is a relevant answer. OpenFLUID is an open source software platform for modelling landscape functioning, mainly focused on spatial fluxes. It provides an advanced object-oriented architecture allowing users to i) couple models developed de novo or from existing source code, which are dynamically plugged into the platform, ii) represent landscapes as hierarchical graphs, taking into account multiple scales, spatial heterogeneities and the connectivity of landscape objects, iii) run and explore simulations in many ways: using the OpenFLUID software interfaces for users (command line interface, graphical user interface), or using external applications such as GNU R through the provided ROpenFLUID package. OpenFLUID is developed in C++ and relies on open source libraries only (Boost, libXML2, GLib/GTK, OGR/GDAL, …). For modelers and developers, OpenFLUID provides a dedicated environment for model development, which is based on an open source toolchain, including the Eclipse editor, the GCC compiler and the CMake build system. OpenFLUID is distributed under the GPLv3 open source license, with a special exception allowing existing models licensed under any license to be plugged in. It is clearly in the spirit of sharing knowledge and favouring collaboration in a community of modelers. OpenFLUID has been involved in many research applications, such as modelling of hydrological network transfer, diagnosis and prediction of water quality taking into account human activities, study of the effect of spatial organization on hydrological fluxes, modelling of surface-subsurface water exchanges, … At the LISAH research unit, OpenFLUID is the supporting development platform of the MHYDAS model, which is a distributed model for agrosystems (Moussa et al., 2002, Hydrological Processes, 16, 393-412). OpenFLUID web site: http://www.openfluid-project.org

  13. Interim Open Source Software (OSS) Policy

    EPA Pesticide Factsheets

    This interim Policy establishes a framework to implement the requirements of the Office of Management and Budget's (OMB) Federal Source Code Policy to achieve efficiency, transparency and innovation through reusable and open source software.

  14. Open Source Molecular Modeling

    PubMed Central

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-01-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126

  15. Open Source Software Development Experiences on the Students' Resumes: Do They Count?--Insights from the Employers' Perspectives

    ERIC Educational Resources Information Center

    Long, Ju

    2009-01-01

    Open Source Software (OSS) is a major force in today's Information Technology (IT) landscape. Companies are increasingly using OSS in mission-critical applications. The transparency of the OSS technology itself with openly available source codes makes it ideal for students to participate in the OSS project development. OSS can provide unique…

  16. Open Source Initiative Powers Real-Time Data Streams

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Under an SBIR contract with Dryden Flight Research Center, Creare Inc. developed a data collection tool called the Ring Buffered Network Bus. The technology has now been released under an open source license and is hosted by the Open Source DataTurbine Initiative. DataTurbine allows anyone to stream live data from sensors, labs, cameras, ocean buoys, cell phones, and more.

  17. Xtreme Learning Control: Examples of the Open Source Movement's Impact on Our Educational Practice in a University Setting.

    ERIC Educational Resources Information Center

    Dunlap, Joanna C.; Wilson, Brent G.; Young, David L.

    This paper describes how Open Source philosophy, a movement that has developed in opposition to the proprietary software industry, has influenced educational practice in the pursuit of scholarly freedom and authentic learning activities for students and educators. This paper provides a brief overview of the Open Source movement, and describes…

  18. Adopting Open-Source Software Applications in U. S. Higher Education: A Cross-Disciplinary Review of the Literature

    ERIC Educational Resources Information Center

    van Rooij, Shahron Williams

    2009-01-01

    Higher Education institutions in the United States are considering Open Source software applications such as the Moodle and Sakai course management systems and the Kuali financial system to build integrated learning environments that serve both academic and administrative needs. Open Source is presumed to be more flexible and less costly than…

  19. Assessing the Impact of Security Behavior on the Awareness of Open-Source Intelligence: A Quantitative Study of IT Knowledge Workers

    ERIC Educational Resources Information Center

    Daniels, Daniel B., III

    2014-01-01

    There is a lack of literature linking end-user behavior to the availability of open-source intelligence (OSINT). Most OSINT literature has been focused on the use and assessment of open-source intelligence, not the proliferation of personally or organizationally identifiable information (PII/OII). Additionally, information security studies have…

  20. Looking toward the Future: A Case Study of Open Source Software in the Humanities

    ERIC Educational Resources Information Center

    Quamen, Harvey

    2006-01-01

    In this article Harvey Quamen examines how the philosophy of open source software might be of particular benefit to humanities scholars in the near future--particularly for academic journals with limited financial resources. To this end he provides a case study in which he describes his use of open source technology (MySQL database software and…

  1. Preparing a scientific manuscript in Linux: Today's possibilities and limitations.

    PubMed

    Tchantchaleishvili, Vakhtang; Schmitto, Jan D

    2011-10-22

    An increasing number of scientists are enthusiastic about using free, open source software for their research purposes. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow them to prepare a submission-ready scientific manuscript without the need to use proprietary software. Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes key steps for the preparation of a publication-ready scientific manuscript in a Linux-based operating system, and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux.

  2. Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture

    NASA Technical Reports Server (NTRS)

    Fiene, Bruce F.

    1994-01-01

    The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.

  3. Exploring the Role of Value Networks for Software Innovation

    NASA Astrophysics Data System (ADS)

    Morgan, Lorraine; Conboy, Kieran

    This paper describes a research-in-progress that aims to explore the applicability and implications of open innovation practices in two firms - one that employs agile development methods and another that utilizes open source software. The open innovation paradigm has a lot in common with open source and agile development methodologies. A particular strength of agile approaches is that they move away from 'introverted' development, involving only the development personnel, and intimately involves the customer in all areas of software creation, supposedly leading to the development of a more innovative and hence more valuable information system. Open source software (OSS) development also shares two key elements of the open innovation model, namely the collaborative development of the technology and shared rights to the use of the technology. However, one shortfall with agile development in particular is the narrow focus on a single customer representative. In response to this, we argue that current thinking regarding innovation needs to be extended to include multiple stakeholders both across and outside the organization. Additionally, for firms utilizing open source, it has been found that their position in a network of potential complementors determines the amount of superior value they create for their customers. Thus, this paper aims to get a better understanding of the applicability and implications of open innovation practices in firms that employ open source and agile development methodologies. In particular, a conceptual framework is derived for further testing.

  4. Design and Deployment of a General Purpose, Open Source LoRa to Wi-Fi Hub and Data Logger

    NASA Astrophysics Data System (ADS)

    DeBell, T. C.; Udell, C.; Kwon, M.; Selker, J. S.; Lopez Alcala, J. M.

    2017-12-01

    Methods and technologies facilitating internet connectivity and near-real-time status updates for in situ environmental sensor data are of increasing interest in Earth Science. However, Open Source, Do-It-Yourself technologies that enable plug-and-play functionality for web-connected sensors and devices remain largely inaccessible for typical researchers in our community. The Openly Published Environmental Sensing Lab at Oregon State University (OPEnS Lab) constructed an Open Source 900 MHz Long Range Radio (LoRa) receiver hub with SD card data logger, Ethernet and Wi-Fi shield, and 3D printed enclosure that dynamically uploads transmissions from multiple wirelessly-connected environmental sensing devices. Data transmissions may be received from devices up to 20 km away. The hub time-stamps, saves to SD card, and uploads all transmissions to a Google Drive spreadsheet to be accessed in near-real-time by researchers and GeoVisualization applications (such as Arc GIS) for access, visualization, and analysis. This research expands the possibilities of scientific observation of our Earth, transforming the technology, methods, and culture by combining open-source development and cutting-edge technology. This poster details our methods and evaluates the use of 3D printing, the Arduino Integrated Development Environment (IDE), Adafruit's Open-Hardware Feather development boards, and the WIZNET5500 Ethernet shield for designing this open-source, general purpose LoRa to Wi-Fi data logger.

  5. "First Light" for the VLT Interferometer

    NASA Astrophysics Data System (ADS)

    2001-03-01

    Excellent Fringes From Bright Stars Prove VLTI Concept Summary Following the "First Light" for the fourth of the 8.2-m telescopes of the VLT Observatory on Paranal in September 2000, ESO scientists and engineers have just successfully accomplished the next major step of this large project. On March 17, 2001, "First Fringes" were obtained with the VLT Interferometer (VLTI) - this important event corresponds to the "First Light" for an astronomical telescope. At the VLTI, it occurred when the infrared light from the bright star Sirius was captured by two small telescopes and the two beams were successfully combined in the subterranean Interferometric Laboratory to form the typical pattern of dark and bright lines known as " interferometric fringes ". This proves the success of the robust VLTI concept, in particular of the "Delay Line". On the next night, the VLTI was used to perform a scientific measurement of the angular diameter of another comparatively bright star, Alpha Hydrae ( Alphard ); it was found to be 0.00929±0.00017 arcsec . This corresponds to the angular distance between the two headlights of a car as seen from a distance of approx. 35,000 kilometres. The excellent result was obtained during a series of observations, each lasting 2 minutes, and fully confirming the impressive predicted abilities of the VLTI . This first observation with the VLTI is a monumental technological achievement, especially in terms of accuracy and stability . It crucially depends on the proper combination and functioning of a large number of individual opto-mechanical and electronic elements. This includes the test telescopes that capture the starlight, continuous and extremely precise adjustment of the various mirrors that deflect the light beams as well as the automatic positioning and motion of the Delay Line carriages and, not least, the optimal tuning of the VLT INterferometer Commissioning Instrument (VINCI). These initial observations prove the overall concept for the VLTI . It was first envisaged in the early 1980's and has been continuously updated, as new technologies and materials became available during the intervening period. The present series of functional tests will go on for some time and involve many different configurations of the small telescopes and the instrument. It is then expected that the first combination of light beams from two of the VLT 8.2-m telescopes will take place in late 2001 . According to current plans, regular science observations will start from 2002, when the European and international astronomical community will have access to the full interferometric facility and the specially developed VLTI instrumentation now under construction. A wide range of scientific investigations will then become possible, from the search for planets around nearby stars, to the study of energetic processes at the cores of distant galaxies. With its superior angular resolution (image sharpness), the VLT is now beginning to open a new era in observational optical and infrared astronomy. The ambition of ESO is to make this type of observations available to all astronomers, not just the interferometry specialists. Video Clip 03/01 : Various video scenes related to the VLTI and the "First Fringes". PR Photo 10a/01 : "First Fringes" from the VLTI on the computer screen. PR Photo 10b/01 : Celebrating the VLTI "First Fringes" . PR Photo 10c/01 : Overview of the VLT Interferometer . PR Photo 10d/01 : Interferometric observations: Fringes from two stars of different angular size . 
PR Photo 10e/01 : Interferometric observations: Change of fringes with increasing baseline . PR Photo 10f/01 : Aerial view of the installations for the VLTI on the Paranal platform. PR Photo 10g/01 : Stations for the VLTI Auxiliary Telescopes. PR Photo 10h/01 : A test siderostat in place for observations. PR Photo 10i/01 : A test siderostat ( close-up ). PR Photo 10j/01 : One of the Delay Line carriages in the Interferometric Tunnel. PR Photo 10k/01 : The VINCI instrument in the Interferometric Laboratory. PR Photo 10l/01 : The VLTI Control Room . "First Fringes at the VLTI": A great moment! First light of the VLT Interferometer - PR Video Clip 03/01 [MPEG - x.xMb] ESO PR Video Clip 03/01 "First Light of the VLT Interferometer" (March 2001) (5025 frames/3:21x min) [MPEG Video+Audio; 144x112 pix; 6.9Mb] [MPEG Video+Audio; 320x240 pix; 13.7Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 03/01 provides a quick overview of the various elements of the VLT Interferometer and the important achievement of "First Fringes". The sequence is: General view of the Paranal observing platform. The "stations" for the VLTI Auxiliary Telescopes. Statement by the Manager of the VLT project, Massimo Tarenghi . One of the VLTI test telescopes ("siderostats") is being readied for observations. The Delay Line carriages in the Interferometric Tunnel move. The VINCI instrument in the Interferometric Laboratory is adjusted. Platform at sunset, before the observations. Astronomers and engineers prepare for the first observations in the VLTI Control Room in the Interferometric Building. "Interferometric Fringes" on the computer screen. Concluding statements by Andreas Glindemann , VLTI Project Leader, and Massimo Tarenghi . Distant view of the installations at Paranal at sunset (on March 1, 2001). The moment of "First Fringes" at the VLTI occurred in the evening of March 17, 2001 . The bright star Sirius was observed with two small telescopes ("siderostats"), specially constructed for this purpose during the early VLTI test phases. ESO PR Video Clip 03/01 includes related scenes and is based on a more comprehensive documentation, now available as ESO Video News Reel No. 12. The star was tracked by the two telescopes and the light beams were guided via the Delay Lines in the Interferometric Tunnel to the VINCI instrument [1] at the Interferometric Laboratory. The path lengths were continuously adjusted and it was possible to keep them stable to within 1 wavelength (2.2 µm, or 0.0022 mm) over a period of at least 2 min. Next night, several other stars were observed, enabling the ESO astronomers and engineers in the Control Room to obtain stable fringe patterns more routinely. With the special software developed, they also obtained 'on-line' an accurate measurement of the angular diameter of a star. This means that the VLTI delivered its first valid scientific result, already during this first test . First observation with the VLTI ESO PR Photo 10a/01 ESO PR Photo 10a/01 [Preview - JPEG: 400 x 315 pix - 96k] [Normal - JPEG: 800 x 630 pix - 256k] [Hi-Res - JPEG: 3000 x 2400 pix - 1.7k] ESO PR Photo 10b/01 ESO PR Photo 10b/01 [Preview - JPEG: 400 x 218 pix - 80k] [Normal - JPEG: 800 x 436 pix - 204k] Caption : PR Photo 10a/01 The "first fringes" obtained with the VLTI, as seen on the computer screen during the observation (upper right window). The fringe pattern arises when the light beams from two small telescopes are brought together in the VINCI instrument. 
The pattern itself contains information about the angular extension of the observed object, here the bright star Sirius . More details about the interpretation of this pattern is given in Appendix A. PR Photo 10b/01 : Celebrating the moment of "First Fringes" at the VLTI. At the VLTI control console (left to right): Pierre Kervella , Vincent Coudé du Foresto , Philippe Gitton , Andreas Glindemann , Massimo Tarenghi , Anders Wallander , Roberto Gilmozzi , Markus Schoeller and Bill Cotton . Bertrand Koehler was also present and took the photo. Technical information about PR Photo 10a/01 is available below. Following careful adjustment of all of the various components of the VLTI, the first attempt to perform a real observation was initiated during the night of March 16-17, 2001. "Fringes" were actually acquired during several seconds, leading to further optimization of the Delay Line optics. The next night, March 17-18, stable fringes were obtained on the bright stars Sirius and Lambda Velorum . The following night, the first scientifically valid results were obtained during a series of observations of six stars. One of these, Alpha Hydrae , was measured twice, with an interval of 15 minutes between the 2-min integrations. The measured diameters were highly consistent, with a mean of 0.00929±0.00017 arcsec. This new VLTI measurement is in full agreement with indirect (photometric) estimates of about 0.009 arcsec. The overall performance of the VLTI was excellent already in this early stage. For example, the interferometric efficiency ('contrast' on a stellar point source) was measured to be 87% and stable to within 1.3% over several days. This performance will be further improved following additional tuning. The entire operation of the VLTI was performed remotely from the Control Room, as this will also be the case in the future. Another great advantage of the VLTI concept is the possibility to analyse the data at the control console. This is one of the key features of the VLTI that contributes to make it a very user-friendly facility. Overview of the VLT Interferometer ESO PR Photo 10c/01 ESO PR Photo 10c/01 [Preview - JPEG: 400 x 410 pix - 60k] [Normal - JPEG: 800 x 820 pix - 124k] [Hi-Res - JPEG: 3000 x 3074 pix - 680k] Caption : PR Photo 10c/01 Overview of the VLT Interferometer, with the various elements indicated. In this case, the light beams from two of the 8.2-m telescopes are combined. The VINCI instrument that was used for the present test, is located at the common focus in the Interferometric Laboratory. The interferometric principle is based on the phase-stable combination of light beams from two or more telescopes at a common interferometric focus , cf. PR Photo 10c/01 . The light from a celestial object is captured simultaneously by two or more telescopes. For the first tests, two "siderostats" with 40-cm aperture are used; later on, two or more 8.2-m Unit Telescopes will be used, as well as several moving 1.8-m Auxiliary Telescopes (ATs), now under construction at the AMOS factory in Belgium. Via several mirrors and through the Delay Line, that continuously compensates for changes in the path length introduced by the Earth's rotation as well as by other effects (e.g., atmospheric turbulence), the light beams are guided towards the interferometric instrument VINCI at the common interferometric focus. It is located in the subterranean Interferometric Laboratory , at the centre of the observing platform on the top of the Paranal mountain. 
Photos of some of the VLTI elements are shown in Appendix B. The interferometric technique allows achieving images, as sharp as those of a telescope with a diameter equivalent to the largest distance between the telescopes in the interferometer. For the VLTI, this distance is about 200 metres, resulting in a resolution of 0.001 arcsec in the near-infrared spectral region (at 1 µm wavelength), or 0.0005 arcsec in visual light (500 nm). The latter measure corresponds to about 2 metres on the surface of the Moon. The VLTI instruments The installation and putting into operation of the VLTI at Paranal is a gradual process that will take several years. While the present "First Fringe" event is of crucial importance, the full potential of the VLTI will only be reached some years from now. This will happen with the successive installation of a number of highly specialised instruments, like the near-infrared/red VLTI focal instrument (AMBER) , the Mid-Infrared interferometric instrument for the VLTI (MIDI) and the instrument for Phase-Referenced Imaging and Microarcsecond Astrometry (PRIMA). Already next year, the three 1.8-m Auxiliary Telescopes that will be fully devoted to interferometric observations, will arrive at Paranal. Ultimately, it will be possible to combine the light beams from all the large and small telescopes. Great research promises Together, they will be able to achieve an unprecedented image sharpness (angular resolution) in the optical/infrared wavelength region, and thanks to the great light-collecting ability of the VLT Unit Telescopes, also for observations of quite faint objects. This will make it possible to carry out many different front-line scientific studies, beyond the reach of other instruments. There are many promising research fields that will profit from VLTI observations, of which the following serve as particularly interesting examples: * The structure and composition of the outer solar system, by studies of individual moons, Trans-Neptunian Objects and comets. * The direct detection and imaging of exoplanets in orbit around other stars. * The formation of star clusters and their evolution, from images and spectra of very young objects. * Direct views of the surface structures of stars other than the Sun. * Measuring accurate distances to the most prominent "stepping stones" in the extragalactic distance scale, e.g., galactic Cepheid stars, the Large Magellanic Cloud and globular clusters. * Direct investigations of the physical mechanisms responsible for stellar pulsation, mass loss and dust formation in stellar envelopes and evolution to the Planetary Nebula and White Dwarf stages. * Close-up studies of interacting binary stars to better understand their mass transfer mechanisms and evolution. * Studies of the structure of the circum-stellar environment of stellar black holes and neutron stars. * The evolution of the expanding shells of unstable stars like novae and supernovae and their interaction with the interstellar medium. * Studying the structure and evolution of stellar and galactic nuclear accretion disks and the associated features, e.g., jets and dust tori. * With images and spectra of the innermost regions of the Milky Way galaxy, to investigate the nature of the nucleus surrounding the central black hole. Clearly, there will be no lack of opportunities for trailblazing research with the VLTI. The "First Fringes" constitute a very important milestone in this direction. Appendix A: How does it work? 
    ESO PR Photo 10d/01 ESO PR Photo 10d/01 [Preview - JPEG: 400 x 290 pix - 24k] [Normal - JPEG: 800 x 579 pix - 68k] [Hi-Res - JPEG: 3000 x 2170 pix - 412k] ESO PR Photo 10e/01 ESO PR Photo 10e/01 [Preview - JPEG: 400 x 219 pix - 32k] [Normal - JPEG: 800 x 438 pix - 64k] [Hi-Res - JPEG: 3000 x 1644 pix - 336k] Caption : PR Photo 10d/01 demonstrates in a schematic way how the images of two stars of different angular size (left) will look with a single telescope (middle) and with an interferometer like the VLTI (right). Whereas there is little difference with one telescope, the fringe patterns at the interferometer are quite different. Conversely, the appearance of this pattern provides a measure of the star's angular diameter. In PR Photo 10e/01 , interferometric observations of a single star are shown, as the distance between the two telescopes is gradually increased. The observed pattern at the focal plane clearly changes, and the "fringes" disappear completely. See the text for more details. The principle behind interferometry is the "coherent optical interference" of light beams from two or more telescopes, due to the wave nature of light. The above illustrations serve to explain what the astronomers observe in the simplest case, that of a single star with a certain angular size, and how this can be translated into a measurement of this size. In PR Photo 10d/01 , the difference between two stars of different diameter is illustrated. While the image of the smaller star displays strong interference effects (i.e., a well visible fringe pattern), those of the larger star are much less prominent. The "visibility" of the fringes is therefore a direct measure of the size; the stronger they appear (the "larger the contrast"), the smaller is the star. If the distance between the two telescopes is increased when a particular star is observed ( PR Photo 10e/01 ), then the fringes become less and less prominent. At a certain distance, the fringe pattern disappears completely. This distance is directly related to the angular size of the star. Appendix B: Elements of the VLT Interferometer Contrary to other large astronomical telescopes, the VLT was designed from the beginning with the use of interferometry as a major goal . For this reason, the four 8.2-m Unit Telescopes were positioned in a quasi-trapezoidal configuration and several moving 1.8-m telescopes were included into the overall VLT concept, cf. PR Photo 10f/01 . The photos below show some of the key elements of the VLT Interferometer during the present observations. They include the siderostats , 40-cm telescopes that serve to capture the light from a comparatively bright star ( Photos 10g-i/01 ), the Delay Lines ( Photo 10j/01 ), and the VINCI instrument ( Photo 10k/01 ). Earlier information about the development and construction of the individual elements of the VLTI is available as ESO PR 04/98 , ESO PR 14/00 and ESO PR Photos 26a-e/00.
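
    The figures quoted in this release are consistent with two textbook interferometry relations (standard results, not formulas given in the release itself): the angular resolution set by the longest baseline B at wavelength lambda, and the fringe visibility of a uniform stellar disc of angular diameter theta, whose decline with baseline is what a diameter measurement such as that of Alpha Hydrae exploits:

        \theta_{\mathrm{res}} \approx \frac{\lambda}{B}
        = \frac{1\,\mu\mathrm{m}}{200\,\mathrm{m}}
        \approx 5 \times 10^{-9}\ \mathrm{rad} \approx 0.001\ \mathrm{arcsec},
        \qquad
        V(B) = \left| \frac{2\,J_{1}(\pi B \theta / \lambda)}{\pi B \theta / \lambda} \right| .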

  6. The use of open source electronic health records within the federal safety net.

    PubMed

    Goldwater, Jason C; Kwon, Nancy J; Nathanson, Ashley; Muckle, Alison E; Brown, Alexa; Cornejo, Kerri

    2014-01-01

    To conduct a federally funded study that examines the acquisition, implementation and operation of open source electronic health records (EHR) within safety net medical settings, such as federally qualified health centers (FQHC). The study was conducted by the National Opinion Research Center (NORC) at the University of Chicago from April to September 2010. The NORC team undertook a comprehensive environmental scan, including a literature review, a dozen key informant interviews using a semistructured protocol, and a series of site visits to West Virginia, California and Arizona FQHC that were currently using an open source EHR. Five of the six sites that were chosen as part of the study found a number of advantages in the use of their open source EHR system, such as utilizing a large community of users and developers to modify their EHR to fit the needs of their provider and patient communities, and lower acquisition and implementation costs as compared to a commercial system. Despite these advantages, many of the informants and site visit participants felt that widespread dissemination and use of open source was restrained due to a negative connotation regarding this type of software. In addition, a number of participants stated that there is a necessary level of technical acumen needed within the FQHC to make an open source EHR effective. An open source EHR provides advantages for FQHC that have limited resources to acquire and implement an EHR, but additional study is needed to evaluate its overall effectiveness.

  7. Open source electronic health records and chronic disease management

    PubMed Central

    Goldwater, Jason C; Kwon, Nancy J; Nathanson, Ashley; Muckle, Alison E; Brown, Alexa; Cornejo, Kerri

    2014-01-01

    Objective To study and report on the use of open source electronic health records (EHR) to assist with chronic care management within safety net medical settings, such as community health centers (CHC). Methods and Materials The study was conducted by NORC at the University of Chicago from April to September 2010. The NORC team undertook a comprehensive environmental scan, including a literature review, a dozen key informant interviews using a semistructured protocol, and a series of site visits to CHC that currently use an open source EHR. Results Two of the sites chosen by NORC were actively using an open source EHR to assist in the redesign of their care delivery system to support more effective chronic disease management. This included incorporating the chronic care model into a CHC and using the EHR to help facilitate its elements, such as care teams for patients, in addition to maintaining health records on indigent populations, such as tuberculosis status on homeless patients. Discussion The ability to modify the open-source EHR to adapt to the CHC environment and leverage the ecosystem of providers and users to assist in this process provided significant advantages in chronic care management. Improvements in diabetes management, controlled hypertension and increases in tuberculosis vaccinations were assisted through the use of these open source systems. Conclusions The flexibility and adaptability of open source EHR demonstrated its utility and viability in the provision of necessary chronic disease care among populations served by CHC. PMID:23813566

  8. Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc

    NASA Astrophysics Data System (ADS)

    Becker, Peter; Plesea, Lucian; Maurer, Thomas

    2016-06-01

    The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and the information extracted from it. Cloud based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and the lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details on a solution that utilizes a new open image format for storage of and access to geospatial imagery optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression scheme that can be used with MRF and provides very good lossless and controlled lossy compression.
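
    The paper itself contains no code. Assuming a GDAL build whose MRF driver exposes LERC as a compression option, a conversion from a conventional GeoTIFF into a tiled, cloud-friendly MRF might look like the sketch below; the file names and block size are placeholders, and the exact creation options should be checked against the GDAL MRF driver documentation.

        from osgeo import gdal

        gdal.UseExceptions()

        # Hypothetical input scene and output MRF; options are illustrative only.
        src = "scene.tif"
        dst = "scene.mrf"

        gdal.Translate(
            dst,
            src,
            format="MRF",
            creationOptions=[
                "COMPRESS=LERC",   # limited-error raster compression
                "BLOCKSIZE=512",   # tile size suited to HTTP range requests
            ],
        )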

  9. OpenCFU, a new free and open-source software to count cell colonies and other circular objects.

    PubMed

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
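
    OpenCFU is a C++ application and its algorithm is not reproduced here. Purely to illustrate the task it automates, the sketch below counts roughly circular objects in a plate photograph with OpenCV's Hough circle transform, which is a simpler and different method from OpenCFU's; the input file and every parameter value are illustrative starting points that would need tuning per image.

        import cv2

        img = cv2.imread("plate.jpg")                  # hypothetical plate photo
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)                 # suppress pixel noise

        # Hough gradient transform; all parameters are illustrative starting points.
        circles = cv2.HoughCircles(
            gray,
            cv2.HOUGH_GRADIENT,
            dp=1.2,          # inverse accumulator resolution
            minDist=15,      # minimum centre-to-centre distance in pixels
            param1=100,      # Canny high threshold
            param2=30,       # accumulator threshold (lower finds more circles)
            minRadius=5,
            maxRadius=40,
        )

        count = 0 if circles is None else circles.shape[1]
        print(f"{count} colony-like circular objects detected")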

  10. Using Open Source Software in Visual Simulation Development

    DTIC Science & Technology

    2005-09-01

    increased the use of the technology in training activities. Using open source/free software tools in the process can expand these possibilities...resulting in even greater cost reduction and allowing the flexibility needed in a training environment. This thesis presents a configuration and architecture...to be used when developing training visual simulations using both personal computers and open source tools. Aspects of the requirements needed in a

  11. Open-Source Intelligence in the Czech Military: Knowledge System and Process Design

    DTIC Science & Technology

    2002-06-01

    in Open-Source Intelligence OSINT, as one of the intelligence disciplines, bears some of the general problems of intelligence " business " OSINT...ADAPTING KNOWLEDGE MANAGEMENT THEORY TO THE CZECH MILITARY INTELLIGENCE Knowledge work is the core business of the military intelligence . As...NAVAL POSTGRADUATE SCHOOL Monterey, California THESIS Approved for public release; distribution is unlimited OPEN-SOURCE INTELLIGENCE IN THE

  12. Writing in the Disciplines versus Corporate Workplaces: On the Importance of Conflicting Disciplinary Discourses in the Open Source Movement and the Value of Intellectual Property

    ERIC Educational Resources Information Center

    Ballentine, Brian D.

    2009-01-01

    Writing programs and more specifically, Writing in the Disciplines (WID) initiatives have begun to embrace the use of and the ideology inherent to, open source software. The Conference on College Composition and Communication has passed a resolution stating that whenever feasible educators and their institutions consider open source applications.…

  13. Anatomy of BioJS, an open source community for the life sciences.

    PubMed

    Yachdav, Guy; Goldberg, Tatyana; Wilzbach, Sebastian; Dao, David; Shih, Iris; Choudhary, Saket; Crouch, Steve; Franz, Max; García, Alexander; García, Leyla J; Grüning, Björn A; Inupakutika, Devasena; Sillitoe, Ian; Thanki, Anil S; Vieira, Bruno; Villaveces, José M; Schneider, Maria V; Lewis, Suzanna; Pettifer, Steve; Rost, Burkhard; Corpas, Manuel

    2015-07-08

    BioJS is an open source software project that develops visualization tools for different types of biological data. Here we report on the factors that influenced the growth of the BioJS user and developer community, and outline our strategy for building on this growth. The lessons we have learned on BioJS may also be relevant to other open source software projects.

  14. Build, Buy, Open Source, or Web 2.0?: Making an Informed Decision for Your Library

    ERIC Educational Resources Information Center

    Fagan, Jody Condit; Keach, Jennifer A.

    2010-01-01

    When improving a web presence, today's libraries have a choice: using a free Web 2.0 application, opting for open source, buying a product, or building a web application. This article discusses how to make an informed decision for one's library. The authors stress that deciding whether to use a free Web 2.0 application, to choose open source, to…

  15. Expanding Human Capabilities through the Adoption and Utilization of Free, Libre, and Open Source Software

    ERIC Educational Resources Information Center

    Simpson, James Daniel

    2014-01-01

    Free, libre, and open source software (FLOSS) is software that is collaboratively developed. FLOSS provides end-users with the source code and the freedom to adapt or modify a piece of software to fit their needs (Deek & McHugh, 2008; Stallman, 2010). FLOSS has a 30 year history that dates to the open hacker community at the Massachusetts…

  16. A Framework for the Systematic Collection of Open Source Intelligence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pouchard, Line Catherine; Trien, Joseph P; Dobson, Jonathan D

    2009-01-01

    Following legislative directions, the Intelligence Community has been mandated to make greater use of Open Source Intelligence (OSINT). Efforts are underway to increase the use of OSINT but there are many obstacles. One of these obstacles is the lack of tools helping to manage the volume of available data and ascertain its credibility. We propose a unique system for selecting, collecting and storing Open Source data from the Web and the Open Source Center. Some data management tasks are automated, document source is retained, and metadata containing geographical coordinates are added to the documents. Analysts are thus empowered to search, view, store, and analyze Web data within a single tool. We present ORCAT I and ORCAT II, two implementations of the system.

  17. The open-source neutral-mass spectrometer on Atmosphere Explorer-C, -D, and -E.

    NASA Technical Reports Server (NTRS)

    Nier, A. O.; Potter, W. E.; Hickman, D. R.; Mauersberger, K.

    1973-01-01

    The open-source mass spectrometer will be used to obtain the number densities of the neutral atmospheric gases in the mass range 1 to 48 amu at the satellite location. The ion source has been designed to allow gas particles to enter the ionizing region with the minimum practicable number of prior collisions with surfaces. This design minimizes the loss of atomic oxygen and other reactive species due to reactions with the walls of the ion source. The principal features of the open-source spectrometer and the laboratory calibration system are discussed.

  18. A Clinician-Centered Evaluation of the Usability of AHLTA and Automated Clinical Practice Guidelines at TAMC

    DTIC Science & Technology

    2011-03-31

    evidence based medicine into clinical practice. It will decrease costs and enable multiple stakeholders to work in an open content/source environment to exchange clinical content, develop and test technology and explore processes in applied CDS. Design: Comparative study between the KMR infrastructure and capabilities developed as an open source, vendor agnostic solution for aCPG execution within AHLTA and the current DoD/MHS standard evaluating: H1: An open source, open standard KMR and Clinical Decision Support Engine can enable organizations to share domain

  19. Preparing a scientific manuscript in Linux: Today's possibilities and limitations

    PubMed Central

    2011-01-01

    Background An increasing number of scientists are enthusiastic about using free, open source software for their research purposes. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow them to prepare a submission-ready scientific manuscript without the need to use proprietary software. Findings Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes key steps for the preparation of a publication-ready scientific manuscript in a Linux-based operating system, and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux. PMID:22018246

  20. Open source bioimage informatics for cell biology.

    PubMed

    Swedlow, Jason R; Eliceiri, Kevin W

    2009-11-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, describe some of the key general attributes that make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery.

  1. Implementation, reliability, and feasibility test of an Open-Source PACS.

    PubMed

    Valeri, Gianluca; Zuccaccia, Matteo; Badaloni, Andrea; Ciriaci, Damiano; La Riccia, Luigi; Mazzoni, Giovanni; Maggi, Stefania; Giovagnoni, Andrea

    2015-12-01

    To implement a hardware and software system able to perform the major functions of an Open-Source PACS, and to analyze it in a simulated real-world environment. A small home network was implemented, and the Open-Source operating system Ubuntu 11.10 was installed on a laptop containing the Dcm4chee suite with the required software devices. The implemented Open-Source PACS is compatible with Linux, Microsoft Windows, and Mac OS X; furthermore, it was used with Android and iOS, the operating systems that support operation on portable devices (smartphones, tablets). An OSS PACS is useful for running tutorials and workshops on post-processing techniques for educational and training purposes.

  2. Adopting Open Source Software to Address Software Risks during the Scientific Data Life Cycle

    NASA Astrophysics Data System (ADS)

    Vinay, S.; Downs, R. R.

    2012-12-01

    Software enables the creation, management, storage, distribution, discovery, and use of scientific data throughout the data lifecycle. However, the capabilities offered by software also present risks for the stewardship of scientific data, since future access to digital data is dependent on the use of software. From operating systems to applications for analyzing data, the dependence of data on software presents challenges for the stewardship of scientific data. Adopting open source software provides opportunities to address some of the proprietary risks of data dependence on software. For example, in some cases, open source software can be deployed to avoid licensing restrictions for using, modifying, and transferring proprietary software. The availability of the source code of open source software also enables the inclusion of modifications, which may be contributed by various community members who are addressing similar issues. Likewise, an active community that is maintaining open source software can be a valuable source of help, providing an opportunity to collaborate to address common issues facing adopters. As part of the effort to meet the challenges of software dependence for scientific data stewardship, risks from software dependence have been identified that exist at various stages of the data lifecycle. The identification of these risks should enable the development of plans for mitigating software dependencies, where applicable, using open source software, and improve understanding of software dependency risks for scientific data and of how they can be reduced during the data lifecycle.

  3. Open source data assimilation framework for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions and model results. The basic principle is to incorporate measurement information into a model with the aim of improving model results through error minimization. Great strides have been made to assimilate traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed data retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated in hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (time & space) and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break down DA into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface for hydrological models capable of all these tasks already exists: OpenMI. OpenMI is an open source standard interface already adopted by key hydrological model providers. It defines a universal approach to interact with hydrological models during simulation to exchange data during runtime, thus facilitating the interactions between models and data sources. The interface is flexible enough so that models can interact even if they are coded in different languages, represent processes from different domains or have different spatial and temporal resolutions. An open source framework that bridges OpenMI and OpenDA is presented. The framework provides a generic and easy means for any OpenMI-compliant model to assimilate observation measurements. An example test case will be presented using MikeSHE, an OpenMI-compliant, fully coupled, integrated hydrological model that can accurately simulate the feedback dynamics of overland flow, unsaturated zone and saturated zone.
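    To make the interaction pattern described above concrete, the following minimal Python sketch mimics the building-block interface the abstract attributes to OpenDA/OpenMI (create model instances, propagate them, get/set state variables) together with a toy ensemble update. The class and function names are hypothetical illustrations, not the actual OpenDA or OpenMI APIs.

```python
import numpy as np

class ToyHydroModel:
    """Hypothetical stand-in for an OpenMI-style model wrapper: a DA framework
    only needs to create instances, propagate them, and get/set state variables."""
    def __init__(self, storage=10.0, recession=0.05):
        self.storage = storage        # catchment storage (the model state)
        self.recession = recession    # recession coefficient (a model parameter)

    def propagate(self, rainfall):
        """Advance the model by one time step."""
        self.storage += rainfall - self.recession * self.storage

    def get_state(self):
        return np.array([self.storage])

    def set_state(self, state):
        self.storage = float(state[0])

def ensemble_update(members, observation, obs_std, rng):
    """Minimal ensemble Kalman-style update of a scalar state (illustration only)."""
    states = np.array([m.get_state()[0] for m in members])
    prior_var = states.var(ddof=1)
    gain = prior_var / (prior_var + obs_std ** 2)
    for member, x in zip(members, states):
        perturbed_obs = observation + rng.normal(0.0, obs_std)
        member.set_state([x + gain * (perturbed_obs - x)])

# Usage: propagate an ensemble of model instances, then assimilate one observation.
rng = np.random.default_rng(42)
ensemble = [ToyHydroModel(storage=rng.normal(10.0, 2.0)) for _ in range(20)]
for member in ensemble:
    member.propagate(rainfall=3.0)
ensemble_update(ensemble, observation=12.0, obs_std=0.5, rng=rng)
print("posterior ensemble mean:", np.mean([m.get_state()[0] for m in ensemble]))
```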

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top-performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
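    The solvers compared above (CLP, GLPK, lp_solve, MINOS, CPLEX) each expose their own modeling APIs, which are not reproduced here. Purely as an illustration of the kind of small test problem such a benchmark exercises, the sketch below solves a two-variable LP with SciPy's open-source linprog routine; SciPy was not one of the solvers evaluated in the study.

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4 and x + 3y <= 6, with x, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
print("optimal (x, y):", result.x)        # expected: [4, 0]
print("optimal objective:", -result.fun)  # expected: 12
```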

  5. Open source OCR framework using mobile devices

    NASA Astrophysics Data System (ADS)

    Zhou, Steven Zhiying; Gilani, Syed Omer; Winkler, Stefan

    2008-02-01

    Mobile phones have evolved from passive one-to-one communication devices into powerful handheld computing devices. Today most new mobile phones are capable of capturing images, recording video, browsing the internet, and much more. Exciting new social applications are emerging on the mobile landscape, such as business card readers, sign detectors and translators. These applications help people quickly gather information in digital format and interpret it without the need to carry laptops or tablet PCs. However, despite all these advancements, very little open source software is available for mobile phones. For instance, there are currently many open source OCR engines for the desktop platform but, to our knowledge, none are available on the mobile platform. Keeping this in perspective, we propose a complete text detection and recognition system with speech synthesis ability, using existing desktop technology. In this work we developed a complete OCR framework with subsystems from the open source desktop community. This includes the popular open source OCR engine Tesseract for text detection and recognition, and the Flite speech synthesis module for adding text-to-speech ability.
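    As a rough desktop illustration of the pipeline described (Tesseract for recognition, Flite for speech), the sketch below chains the pytesseract wrapper with the flite command-line synthesizer. It assumes both tools are installed and that flite accepts the -t/-o options; the input file name is hypothetical, and this is not the authors' mobile implementation.

```python
import subprocess
from PIL import Image
import pytesseract  # Python wrapper around the Tesseract OCR engine

def image_to_speech(image_path, wav_path="out.wav"):
    """Recognize text in an image with Tesseract, then synthesize it with Flite."""
    text = pytesseract.image_to_string(Image.open(image_path)).strip()
    if text:
        # 'flite -t "<text>" -o <file>' writes synthesized speech to a WAV file.
        subprocess.run(["flite", "-t", text, "-o", wav_path], check=True)
    return text

if __name__ == "__main__":
    print(image_to_speech("business_card.png"))  # hypothetical input image
```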

  6. Open-source colorimeter.

    PubMed

    Anzalone, Gerald C; Glover, Alexandra G; Pearce, Joshua M

    2013-04-19

    The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories.
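    The measurement underlying such a colorimeter reduces to a Beer-Lambert absorbance reading plus a linear calibration against standards of known concentration. A minimal sketch of that computation follows; the intensity readings and calibration values are made up for illustration and are not taken from the paper.

```python
import numpy as np

def absorbance(sample_intensity, blank_intensity):
    """Beer-Lambert absorbance: A = -log10(I_sample / I_blank)."""
    return -np.log10(sample_intensity / blank_intensity)

# Hypothetical calibration standards: COD concentration (mg/L) vs. absorbance.
cod_standards = np.array([0.0, 100.0, 250.0, 500.0, 1000.0])
abs_standards = np.array([0.002, 0.036, 0.089, 0.175, 0.352])

# Fit a straight line A = m * COD + b and invert it for an unknown sample.
m, b = np.polyfit(cod_standards, abs_standards, 1)
sample_abs = absorbance(sample_intensity=712.0, blank_intensity=905.0)
print("estimated COD (mg/L):", (sample_abs - b) / m)
```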

  7. Open-Source Colorimeter

    PubMed Central

    Anzalone, Gerald C.; Glover, Alexandra G.; Pearce, Joshua M.

    2013-01-01

    The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories. PMID:23604032

  8. OpenMebius: an open source software for isotopically nonstationary 13C-based metabolic flux analysis.

    PubMed

    Kajihata, Shuichi; Furusawa, Chikara; Matsuda, Fumio; Shimizu, Hiroshi

    2014-01-01

    The in vivo measurement of metabolic flux by (13)C-based metabolic flux analysis ((13)C-MFA) provides valuable information regarding cell physiology. Bioinformatics tools have been developed to estimate metabolic flux distributions from the results of tracer isotopic labeling experiments using a (13)C-labeled carbon source. Metabolic flux is determined by nonlinear fitting of a metabolic model to the isotopic labeling enrichment of intracellular metabolites measured by mass spectrometry. Whereas (13)C-MFA is conventionally performed under isotopically constant conditions, isotopically nonstationary (13)C metabolic flux analysis (INST-(13)C-MFA) has recently been developed for flux analysis of cells with photosynthetic activity and cells at a quasi-steady metabolic state (e.g., primary cells or microorganisms under stationary phase). Here, the development of a novel open source software for INST-(13)C-MFA on the Windows platform is reported. OpenMebius (Open source software for Metabolic flux analysis) provides the function of autogenerating metabolic models for simulating isotopic labeling enrichment from a user-defined configuration worksheet. Analysis using simulated data demonstrated the applicability of OpenMebius for INST-(13)C-MFA. Confidence intervals determined by INST-(13)C-MFA were less than those determined by conventional methods, indicating the potential of INST-(13)C-MFA for precise metabolic flux analysis. OpenMebius is the open source software for the general application of INST-(13)C-MFA.
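    The core fitting step in any 13C-MFA tool is a nonlinear least-squares adjustment of flux parameters so that a labeling model reproduces the measured enrichments. The toy sketch below illustrates only that general idea with a single made-up flux parameter and a fabricated enrichment curve; it is not OpenMebius's model, data, or API.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_enrichment(flux_split, times):
    """Toy labeling model: the plateau and turnover rate of a product pool's
    isotopic enrichment both depend on a single flux-split parameter."""
    plateau = flux_split / (flux_split + 0.5)
    rate = 0.1 + flux_split
    return plateau * (1.0 - np.exp(-rate * times))

times = np.linspace(0.0, 20.0, 15)
rng = np.random.default_rng(0)
measured = simulate_enrichment(0.7, times) + rng.normal(0.0, 0.01, times.size)

def residuals(params):
    return simulate_enrichment(params[0], times) - measured

fit = least_squares(residuals, x0=[0.3], bounds=(0.0, 5.0))
print("estimated flux split:", fit.x[0])  # should recover a value near 0.7
```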

  9. Simulation for Dynamic Situation Awareness and Prediction III

    DTIC Science & Technology

    2010-03-01

    source Java™ library for capturing and sending network packets; 4) Groovy – an open source, Java-based scripting language (version 1.6 or newer). Open...DMOTH Analyzer application. Groovy is an open source dynamic scripting language for the Java Virtual Machine. It is consistent with Java syntax...between temperature, pressure, wind and relative humidity, and 3) a precipitation editing algorithm. The Editor can be used to prepare scripted changes

  10. Transforming High School Classrooms with Free/Open Source Software: "It's Time for an Open Source Software Revolution"

    ERIC Educational Resources Information Center

    Pfaffman, Jay

    2008-01-01

    Free/Open Source Software (FOSS) applications meet many of the software needs of high school science classrooms. In spite of the availability and quality of FOSS tools, they remain unknown to many teachers and utilized by fewer still. In a world where most software has restrictions on copying and use, FOSS is an anomaly, free to use and to…

  11. Managing Digital Archives Using Open Source Software Tools

    NASA Astrophysics Data System (ADS)

    Barve, S.; Dongare, S.

    2007-10-01

    This paper describes the use of open source software tools such as MySQL and PHP for creating database-backed websites. Such websites offer many advantages over ones built from static HTML pages. This paper discusses how OSS tools are used and their benefits, and how, after the successful implementation of these tools, the library took the initiative to implement an institutional repository using the DSpace open source software.

  12. Open source tools for fluorescent imaging.

    PubMed

    Hamilton, Nicholas A

    2012-01-01

    As microscopy becomes increasingly automated and imaging expands in the spatial and time dimensions, quantitative analysis tools for fluorescent imaging are becoming critical to remove both bottlenecks in throughput as well as fully extract and exploit the information contained in the imaging. In recent years there has been a flurry of activity in the development of bio-image analysis tools and methods with the result that there are now many high-quality, well-documented, and well-supported open source bio-image analysis projects with large user bases that cover essentially every aspect from image capture to publication. These open source solutions are now providing a viable alternative to commercial solutions. More importantly, they are forming an interoperable and interconnected network of tools that allow data and analysis methods to be shared between many of the major projects. Just as researchers build on, transmit, and verify knowledge through publication, open source analysis methods and software are creating a foundation that can be built upon, transmitted, and verified. Here we describe many of the major projects, their capabilities, and features. We also give an overview of the current state of open source software for fluorescent microscopy analysis and the many reasons to use and develop open source methods. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. moocRP: Enabling Open Learning Analytics with an Open Source Platform for Data Distribution, Analysis, and Visualization

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin

    2016-01-01

    In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…

  14. Open Drug Discovery Toolkit (ODDT): a new open-source player in the drug discovery field.

    PubMed

    Wójcikowski, Maciej; Zielenkiewicz, Piotr; Siedlecki, Pawel

    2015-01-01

    There has been huge progress in the open cheminformatics field in both methods and software development. Unfortunately, there has been little effort to unite those methods and software into one package. We here describe the Open Drug Discovery Toolkit (ODDT), which aims to fulfill the need for comprehensive and open source drug discovery software. The Open Drug Discovery Toolkit was developed as a free and open source tool for both computer aided drug discovery (CADD) developers and researchers. ODDT reimplements many state-of-the-art methods, such as machine learning scoring functions (RF-Score and NNScore) and wraps other external software to ease the process of developing CADD pipelines. ODDT is an out-of-the-box solution designed to be easily customizable and extensible. Therefore, users are strongly encouraged to extend it and develop new methods. We here present three use cases for ODDT in common tasks in computer-aided drug discovery. Open Drug Discovery Toolkit is released on a permissive 3-clause BSD license for both academic and industrial use. ODDT's source code, additional examples and documentation are available on GitHub (https://github.com/oddt/oddt).

  15. The use of open source electronic health records within the federal safety net

    PubMed Central

    Goldwater, Jason C; Kwon, Nancy J; Nathanson, Ashley; Muckle, Alison E; Brown, Alexa; Cornejo, Kerri

    2014-01-01

    Objective To conduct a federally funded study that examines the acquisition, implementation and operation of open source electronic health records (EHR) within safety net medical settings, such as federally qualified health centers (FQHC). Methods and materials The study was conducted by the National Opinion Research Center (NORC) at the University of Chicago from April to September 2010. The NORC team undertook a comprehensive environmental scan, including a literature review, a dozen key informant interviews using a semistructured protocol, and a series of site visits to West Virginia, California and Arizona FQHC that were currently using an open source EHR. Results Five of the six sites that were chosen as part of the study found a number of advantages in the use of their open source EHR system, such as utilizing a large community of users and developers to modify their EHR to fit the needs of their provider and patient communities, and lower acquisition and implementation costs as compared to a commercial system. Discussion Despite these advantages, many of the informants and site visit participants felt that widespread dissemination and use of open source was restrained due to a negative connotation regarding this type of software. In addition, a number of participants stated that there is a necessary level of technical acumen needed within the FQHC to make an open source EHR effective. Conclusions An open source EHR provides advantages for FQHC that have limited resources to acquire and implement an EHR, but additional study is needed to evaluate its overall effectiveness. PMID:23744787

  16. The Atacama Large Millimeter Array (ALMA)

    NASA Astrophysics Data System (ADS)

    1999-06-01

    The Atacama Large Millimeter Array (ALMA) is the new name [2] for a giant millimeter-wavelength telescope project. As described in the accompanying joint press release by ESO and the U.S. National Science Foundation, the present design and development phase is now a Europe-U.S. collaboration, and may soon include Japan. ALMA may become the largest ground-based astronomy project of the next decade after VLT/VLTI, and one of the major new facilities for world astronomy. ALMA will make it possible to study the origins of galaxies, stars and planets. As presently envisaged, ALMA will be comprised of up to 64 12-meter diameter antennas distributed over an area 10 km across. ESO PR Photo 24a/99 shows an artist's concept of a portion of the array in a compact configuration. ESO PR Video Clip 03/99 illustrates how all the antennas will move in unison to point to a single astronomical object and follow it as it traverses the sky. In this way the combined telescope will produce astronomical images of great sharpness and sensitivity [3]. An exceptional site For such observations to be possible the atmosphere above the telescope must be transparent at millimeter and submillimeter wavelengths. This requires a site that is high and dry, and a high plateau in the Atacama desert of Chile, probably the world's driest, is ideal - the next best thing to outer space for these observations. ESO PR Photo 24b/99 shows the location of the chosen site at Chajnantor, at 5000 meters altitude and 60 kilometers east of the village of San Pedro de Atacama, as seen from the Space Shuttle during a servicing mission of the Hubble Space Telescope. ESO PR Photo 24c/99 and ESO PR Photo 24d/99 show a satellite image of the immediate vicinity and the site marked on a map of northern Chile. ALMA will be the highest continuously operated observatory in the world. The stark nature of this extreme site is well illustrated by the panoramic view in ESO PR Photo 24e/99. High sensitivity and sharp images ALMA will be extremely sensitive to radiation at millimeter and submillimeter wavelengths. The large number of antennas gives a total collecting area of over 7000 square meters, larger than a football field. At the same time, the shape of the surface of each antenna must be extremely precise under all conditions; the overall accuracy over the entire 12-m diameter must be better than 0.025 millimeters (25µm), or one-third of the diameter of a human hair. The combination of large collecting area and high precision results in extremely high sensitivity to faint cosmic signals. The telescope must also be able to resolve the fine details of the objects it detects. In order to do this at millimeter wavelengths the effective diameter of the overall telescope must be very large - about 10 km. As it is impossible to build a single antenna with this diameter, an array of antennas is used instead, with the outermost antennas being 10 km apart. By combining the signals from all antennas together in a large central computer, it is possible to synthesize the effect of a single dish 10 km across. The resulting angular resolution is about 10 milli-arcseconds, less than one-thousandth the angular size of Saturn. Exciting research perspectives The scientific case for this revolutionary telescope is overwhelming. ALMA will make it possible to witness the formation of the earliest and most distant galaxies. It will also look deep into the dust-obscured regions where stars are born, to examine the details of star and planet formation.
But ALMA will go far beyond these main science drivers, and will have a major impact on virtually all areas of astronomy. It will be a millimeter-wave counterpart to the most powerful optical/infrared telescopes such as ESO's Very Large Telescope (VLT) and the Hubble Space Telescope, with the additional advantage of being unhindered by cosmic dust opacity. The first galaxies in the Universe are expected to become rapidly enshrouded in the dust produced by the first stars. The dust can dim the galaxies at optical wavelengths, but the same dust radiates brightly at longer wavelengths. In addition, the expansion of the Universe causes the radiation from distant galaxies to be shifted to longer wavelengths. For both reasons, the earliest galaxies at the epoch of first light can be found with ALMA, and the subsequent evolution of galaxies can be mapped over cosmic time. ALMA will be of great importance for our understanding of the origins of stars and planetary systems. Stellar nurseries are completely obscured at optical wavelengths by dense "cocoons" of dust and gas, but ALMA can probe deep into these regions and study the fundamental processes by which stars are assembled. Moreover, it can observe the major reservoirs of biogenic elements (carbon, oxygen, nitrogen) and follow their incorporation into new planetary systems. A particularly exciting prospect for ALMA is to use its exceptionally sharp images to obtain evidence for planet formation by the presence of gaps in dusty disks around young stars, cleared by large bodies coalescing around the stars. Equally fundamental are observations of the dying gasps of stars at the other end of the stellar lifecycle, when they are often surrounded by shells of molecules and dust enriched in heavy elements produced by the nuclear fires now slowly dying. ALMA will offer exciting new views of our solar system. Studies of the molecular content of planetary atmospheres with ALMA's high resolving power will provide detailed weather maps of Mars, Jupiter, and the other planets and even their satellites. Studies of comets with ALMA will be particularly interesting. The molecular ices of these visitors from the outer reaches of the solar system have a composition that is preserved from ages when the solar system was forming. They evaporate when the comet comes close to the sun, and studies of the resulting gases with ALMA will allow accurate analysis of the chemistry of the presolar nebula. The road ahead The three-year design and development phase of the project is now underway as a collaboration between Europe and the U.S., and Japan may also join in this effort. Assuming the construction phase begins about two years from now, limited operations of the array may begin in 2005 and the full array may become operational by 2009. Notes [1] Press Releases about this event have also been issued by some of the other organisations participating in this project: * CNRS (in French) * MPG (in German) * NOVA (in Dutch) * NRAO * NSF (ASCII and HTML versions) * PPARC [2] "ALMA" means "soul" in Spanish. [3] Additional information about ALMA is available on the web: * Articles in the ESO Messenger - "The Large Southern Array" (March 1998), "European Site Testing at Chajnantor" (December 1998) and "The ALMA Project" (June 1999), cf. http://www.eso.org/gen-fac/pubs/messenger/ * ALMA website at ESO at http://www.eso.org/projects/alma/ * ALMA website at the U.S. 
National Radio Astronomy Observatory (NRAO) at http://www.mma.nrao.edu/ * ALMA website in The Netherlands about the detectors at http://www.sron.rug.nl/alma/ ALMA/Chajnantor Video Clip and Photos ESO PR Video Clip 03/99 [MPEG-version] ESO PR Video Clip 03/99 (2450 frames/1:38 min) [MPEG Video; 160x120 pix; 2.1Mb] [MPEG Video; 320x240 pix; 10.0Mb] [RealMedia; streaming; 700k] [RealMedia; streaming; 2.3M] About ESO Video Clip 03/99 : This video clip about the ALMA project contains two sequences. The first shows a panoramic scan of the Chajnantor plain from approx. north-east to north-west. The Chajnantor mountain passes through the field-of-view and the perfect cone of the Licancabur volcano (5900 m) on the Bolivian border is seen at the end (compare also with ESO PR 24e/99 below. The second is a 52-sec animation with a change of viewing perspective of the array and during which the antennas move in unison. For convenience, the clip is available in four versions: two MPEG files of different sizes and two streamer-versions of different quality that require RealPlayer software. There is no audio. Note that ESO Video News Reel No. 5 with more related scenes and in professional format with complete shot list is also available. ESO PR Photo 24b/99 ESO PR Photo 24b/99 [Preview - JPEG: 400 x 446 pix - 184k] [Normal - JPEG: 800 x 892 pix - 588k] [High-Res - JPEG: 3000 x 3345 pix - 5.4M] Caption to ESO PR Photo 24b/99 : View of Northern Chile, as seen from the NASA Space Shuttle during a servicing mission to the Hubble Space Telescope (partly visible to the left). The Atacama Desert, site of the ESO VLT at Paranal Observatory and the proposed location for ALMA at Chajnantor, is seen from North (foreground) to South. The two sites are only a few hundred km distant from each other. Few clouds are seen in this extremely dry area, due to the influence of the cold Humboldt Stream along the Chilean Pacific coast (right) and the high Andes mountains (left) that act as a barrier. Photo courtesy ESA astronaut Claude Nicollier. ESO PR Photo 24c/99 ESO PR Photo 24c/99 [Preview - JPEG: 400 x 318 pix - 212k] [Normal - JPEG: 800 x 635 pix - 700k] [High-Res - JPEG: 3000 x 2382 pix - 5.9M] Caption to ESO PR Photo 24c/99 : This satellite image of the Chajnantor area was produced in 1998 at Cornell University (USA), by Jennifer Yu, Jeremy Darling and Riccardo Giovanelli, using the Thematic Mapper data base maintained at the Geology Department laboratory directed by Bryan Isacks. It is a composite of three exposures in spectral bands at 1.6 µm (rendered as red), 1.0 µm (green) and 0.5 µm (blue). The horizontal resolution of the false-colour image is about 30 meters. North is at the top of the photo. ESO PR Photo 24d/99 ESO PR Photo 24d/99 [Preview - JPEG: 400 x 381 pix - 108k] [Normal - JPEG: 800 x 762 pix - 240k] [High-Res - JPEG: 2300 x 2191 pix - 984k] Caption to ESO PR Photo 24d/99 : Geographical map with the sites of the VLT and ALMA indicated. ESO PR Photo 24e/99 ESO PR Photo 24e/99 [Preview - JPEG: 400 x 238 pix - 93k] [Normal - JPEG: 800 x 475 pix - 279k] [High-Res - JPEG: 2862 x 1701 pix - 4.2M] Caption to ESO PR Photo 24e/99 : Panoramic view of the proposed site for ALMA at Chajnantor. This high-altitude plain (elevation 5000 m) in the Chilean Andes mountains is an ideal site for ALMA. In this view towards the north, the Chajnantor mountain (5600 m) is in the foreground, left of the centre. The perfect cone of the Licancabur volcano (5900 m) on the Bolivian border is in the background further to the left. 
This image is a wide-angle composite (140° x 70°) of three photos (Hasselblad 6x6 with SWC 1:4.5/38 mm Biogon), obtained in December 1998. How to obtain ESO Press Information ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org../ ). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.
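    The release quotes an angular resolution of about 10 milli-arcseconds for antennas spread over 10 km. A back-of-envelope check with the diffraction relation θ ≈ λ/D reproduces that figure if one assumes a representative submillimeter observing wavelength of roughly 0.5 mm (the wavelength is an assumption, not a value stated in the release).

```python
import math

wavelength_m = 0.5e-3      # assumed observing wavelength: 0.5 mm (submillimeter)
max_baseline_m = 10e3      # outermost antennas roughly 10 km apart
rad_to_arcsec = 180.0 / math.pi * 3600.0

theta_arcsec = wavelength_m / max_baseline_m * rad_to_arcsec
print(f"angular resolution ~ {theta_arcsec * 1000:.0f} milli-arcseconds")  # ~10 mas
```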

  17. APPLYING OPEN-PATH OPTICAL SPECTROSCOPY TO HEAVY-DUTY DIESEL EMISSIONS

    EPA Science Inventory

    Non-dispersive infrared absorption has been used to measure gaseous emissions for both stationary and mobile sources. Fourier transform infrared spectroscopy has been used for stationary sources as both extractive and open-path methods. We have applied the open-path method for bo...

  18. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy.

    PubMed

    Barabas, Federico M; Masullo, Luciano A; Stefani, Fernando D

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  19. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy

    NASA Astrophysics Data System (ADS)

    Barabas, Federico M.; Masullo, Luciano A.; Stefani, Fernando D.

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  20. OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and Other Circular Objects

    PubMed Central

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net. PMID:23457446
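    OpenCFU's own detection pipeline is only summarized above, so the sketch below is not its algorithm; it merely illustrates the same task (counting roughly circular objects in a digital picture) with OpenCV's Hough circle transform. The image path and tuning parameters are hypothetical.

```python
import cv2

def count_circular_objects(image_path):
    """Count roughly circular objects (e.g., colonies) in a grayscale image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
        param1=80, param2=30, minRadius=5, maxRadius=40)
    return 0 if circles is None else circles.shape[1]

if __name__ == "__main__":
    print("detected objects:", count_circular_objects("plate.png"))  # hypothetical image
```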

  1. Utilization of open source electronic health record around the world: A systematic review.

    PubMed

    Aminpour, Farzaneh; Sadoughi, Farahnaz; Ahamdi, Maryam

    2014-01-01

    Many projects on developing Electronic Health Record (EHR) systems have been carried out in many countries. The current study was conducted to review the published data on the utilization of open source EHR systems in different countries all over the world. Using free-text and keyword search techniques, six bibliographic databases were searched for related articles. The identified papers were screened and reviewed in a series of stages for relevance and validity. The findings showed that open source EHRs have been widely used in resource-limited regions on all continents, especially in Sub-Saharan Africa and South America. This could create opportunities to improve the national level of healthcare, especially in developing countries with minimal financial resources. Open source technology is a solution to overcome the problems of high cost and inflexibility associated with proprietary health information systems.

  2. A New Architecture for Visualization: Open Mission Control Technologies

    NASA Technical Reports Server (NTRS)

    Trimble, Jay

    2017-01-01

    Open Mission Control Technologies (MCT) is a new architecture for visualisation of mission data. Driven by requirements for new mission capabilities, including distributed mission operations, access to data anywhere, customization by users, synthesis of multiple data sources, and flexibility for multi-mission adaptation, Open MCT provides users with an integrated customizable environment. Developed at NASA's Ames Research Center (ARC), in collaboration with NASA's Advanced Multimission Operations System (AMMOS) and NASA's Jet Propulsion Laboratory (JPL), Open MCT is getting its first mission use on the Jason 3 Mission, and is also available in the testbed for the Mars 2020 Rover and for development use for NASA's Resource Prospector Lunar Rover. The open source nature of the project provides for use outside of space missions, including open source contributions from a community of users. The defining features of Open MCT for mission users are data integration, end user composition and multiple views. Data integration provides access to mission data across domains in one place, making data such as activities, timelines, telemetry, imagery, event timers and procedures available in one place, without application switching. End user composition provides users with layouts, which act as a canvas to assemble visualisations. Multiple views provide the capability to view the same data in different ways, with live switching of data views in place. Open MCT is browser based, and works on the desktop as well as tablets and phones, providing access to data anywhere. An early use case for mobile data access took place on the Resource Prospector (RP) Mission Distributed Operations Test, in which rover engineers in the field were able to view telemetry on their phones. We envision this capability providing decision support to on-console operators from off-duty personnel. The plug-in architecture also allows for adaptation for different mission capabilities. Different data types and capabilities may be added or removed using plugins. An API provides a means to write new capabilities and to create data adaptors. Data plugins exist for mission data sources for NASA missions. Adaptors have been written by international and commercial users. Open MCT is open source. Open source enables collaborative development across organizations and also makes the product available outside of the space community, providing a potential source of usage and ideas to drive product design and development. The combination of open source with an Apache 2 license, and distribution on GitHub, has enabled an active community of users and contributors. The spectrum of users for Open MCT is, to our knowledge, unprecedented for mission software. In addition to our NASA users, we have, through open source, had users and inquiries on projects ranging from the Internet of Things to radio hobbyists to farming projects. We have an active community of contributors, enabling a flow of ideas inside and outside of the space community.

  3. Open source tools for ATR development and performance evaluation

    NASA Astrophysics Data System (ADS)

    Baumann, James M.; Dilsavor, Ronald L.; Stubbles, James; Mossing, John C.

    2002-07-01

    Early in almost every engineering project, a decision must be made about tools: should I buy off-the-shelf tools, or should I develop my own? Either choice can involve significant cost and risk. Off-the-shelf tools may be readily available, but they can be expensive to purchase and to maintain licenses for, and may not be flexible enough to satisfy all project requirements. On the other hand, developing new tools permits great flexibility, but it can be time- (and budget-) consuming, and the end product still may not work as intended. Open source software has the advantages of both approaches without many of the pitfalls. This paper examines the concept of open source software, including its history, unique culture, and informal yet closely followed conventions. These characteristics influence the quality and quantity of software available, and ultimately its suitability for serious ATR development work. We give an example where Python, an open source scripting language, and OpenEV, a viewing and analysis tool for geospatial data, have been incorporated into ATR performance evaluation projects. While this case highlights the successful use of open source tools, we also offer important insight into risks associated with this approach.

  4. Open Source Hbim for Cultural Heritage: a Project Proposal

    NASA Astrophysics Data System (ADS)

    Diara, F.; Rinaudo, F.

    2018-05-01

    Current technologies are changing the way Cultural Heritage research, analysis, conservation and development are carried out, allowing new and innovative approaches. The possibility of integrating Cultural Heritage data, such as archaeological information, within a three-dimensional environment system (such as a Building Information Model) brings huge benefits for its management, monitoring and valorisation. Nowadays there are many commercial BIM solutions. However, these tools are conceived and developed mostly for architectural design or technical installations. A better solution could be a dynamic and open platform that treats Cultural Heritage needs as a priority. Better and more complete data usability and accessibility could be guaranteed by open source protocols. This choice would allow the software to be adapted to Cultural Heritage needs, and not the opposite, thus avoiding methodological compromises. This work focuses on the analysis and testing of specific characteristics of these kinds of open source software (DBMS, CAD, servers) applied to a Cultural Heritage example, in order to verify their flexibility and reliability and then create a dynamic open source HBIM prototype. Indeed, it might be a starting point for the future creation of a complete open source HBIM solution that could be adapted to other Cultural Heritage research and analyses.

  5. Broadview Radar Altimetry Toolbox

    NASA Astrophysics Data System (ADS)

    Garcia-Mondejar, Albert; Escolà, Roger; Moyano, Gorka; Roca, Mònica; Terra-Homem, Miguel; Friaças, Ana; Martinho, Fernando; Schrama, Ernst; Naeije, Marc; Ambrózio, Américo; Restano, Marco; Benveniste, Jérôme

    2017-04-01

    The universal altimetry toolbox, BRAT (Broadview Radar Altimetry Toolbox), which can read data from all previous and current altimetry missions, now incorporates the capability to read the upcoming Sentinel3 L1 and L2 products. ESA endeavoured to develop and supply this capability to support the users of the future Sentinel3 SAR Altimetry Mission. BRAT is a collection of tools and tutorial documents designed to facilitate the processing of radar altimetry data. The project started in 2005 from the joint efforts of ESA (European Space Agency) and CNES (Centre National d'Etudes Spatiales), and it is freely available at http://earth.esa.int/brat. The tools enable users to interact with the most common altimetry data formats. The BratGUI is the front-end for the powerful command-line tools that are part of the BRAT suite. BRAT can also be used in conjunction with MATLAB/IDL (via reading routines) or in C/C++/Fortran via a programming API, allowing the user to obtain the desired data while bypassing the data-formatting hassle. BRAT can be used simply to visualise data quickly, or to translate the data into other formats such as NetCDF, ASCII text files, KML (Google Earth) and raster images (JPEG, PNG, etc.). Several kinds of computations can be done within BRAT involving combinations of data fields, which the user can save for later reuse, or using the already embedded formulas that include the standard oceanographic altimetry formulas. The Radar Altimeter Tutorial, which contains a thorough introduction to altimetry, shows its applications in different fields such as oceanography, the cryosphere, geodesy and hydrology, among others. Also included are "use cases", with step-by-step examples, on how to use the toolbox in the different contexts. The Sentinel3 SAR Altimetry Toolbox shall benefit from the current BRAT version; the Broadview Radar Altimetry Toolbox is a continuation of the Basic Radar Altimetry Toolbox. While developing the new toolbox we will revamp the Graphical User Interface and provide, among other enhancements, support for reading the upcoming S3 datasets and specific "use cases" for SAR altimetry in order to train the users and make them aware of the great potential of SAR altimetry for coastal and inland applications. As for any open source framework, contributions from users who have developed their own functions are welcome. The first release of the new Radar Altimetry Toolbox was published in September 2015. It incorporates the capability to read S3 products as well as the new CryoSat2 Baseline C. The second release of the Toolbox, published in October 2016, has a new graphical user interface and other visualisation improvements. The third release (January 2017) includes more features and solves issues from the previous versions.
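    BRAT's own command-line tools and API are not shown here; purely as an illustration of one of the export targets mentioned above (NetCDF), the sketch below writes a toy along-track altimetry record with the netCDF4 Python library. The variable names and values are invented.

```python
import numpy as np
from netCDF4 import Dataset

# Write a toy along-track sea-surface-height record to a NetCDF file.
with Dataset("altimetry_track.nc", "w") as nc:
    nc.createDimension("record", 100)
    lat = nc.createVariable("latitude", "f8", ("record",))
    lon = nc.createVariable("longitude", "f8", ("record",))
    ssh = nc.createVariable("sea_surface_height", "f4", ("record",))
    ssh.units = "m"
    lat[:] = np.linspace(-60.0, 60.0, 100)
    lon[:] = np.linspace(0.0, 12.0, 100)
    ssh[:] = 0.05 * np.sin(np.linspace(0.0, 6.0, 100))
```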

  6. Open Source Clinical NLP - More than Any Single System.

    PubMed

    Masanz, James; Pakhomov, Serguei V; Xu, Hua; Wu, Stephen T; Chute, Christopher G; Liu, Hongfang

    2014-01-01

    The number of Natural Language Processing (NLP) tools and systems for processing clinical free-text has grown as interest and processing capability have surged. Unfortunately any two systems typically cannot simply interoperate, even when both are built upon a framework designed to facilitate the creation of pluggable components. We present two ongoing activities promoting open source clinical NLP. The Open Health Natural Language Processing (OHNLP) Consortium was originally founded to foster a collaborative community around clinical NLP, releasing UIMA-based open source software. OHNLP's mission currently includes maintaining a catalog of clinical NLP software and providing interfaces to simplify the interaction of NLP systems. Meanwhile, Apache cTAKES aims to integrate best-of-breed annotators, providing a world-class NLP system for accessing clinical information within free-text. These two activities are complementary. OHNLP promotes open source clinical NLP activities in the research community and Apache cTAKES bridges research to the health information technology (HIT) practice.

  7. The Use of Open Source Software in the Global Land Ice Measurements From Space (GLIMS) Project, and the Relevance to Institutional Cooperation

    Treesearch

    Christopher W. Helm

    2006-01-01

    GLIMS is a NASA funded project that utilizes Open-Source Software to achieve its goal of creating a globally complete inventory of glaciers. The participation of many international institutions and the development of on-line mapping applications to provide access to glacial data have both been enhanced by Open-Source GIS capabilities and play a crucial role in the...

  8. Meteorological Error Budget Using Open Source Data

    DTIC Science & Technology

    2016-09-01

    ARL-TR-7831, US Army Research Laboratory, September 2016. Meteorological Error Budget Using Open-Source Data; authors: J Cogan, J Smith, P Haines.

  9. Open source bioimage informatics for cell biology

    PubMed Central

    Swedlow, Jason R.; Eliceiri, Kevin W.

    2009-01-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery. PMID:19833518

  10. Numerical Simulation of Dispersion from Urban Greenhouse Gas Sources

    NASA Astrophysics Data System (ADS)

    Nottrott, Anders; Tan, Sze; He, Yonggang; Winkler, Renato

    2017-04-01

    Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban-scale emissions estimates known as the Grey-Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale separation gap between source-level dynamics, local measurements, and urban-scale emissions inventories. CFD has the capability to represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is open-source software for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low-cost computer-aided drawing and GIS, OpenFOAM generates a detailed, 3D representation of urban wind fields. OpenFOAM was applied to model scalar emissions from various components of the natural gas distribution system, to study the impact of urban meteorology on mobile greenhouse gas measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in vertical dispersion of plumes, due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments. The Boussinesq approximation was applied to investigate the effects of canopy-layer temperature gradients and convection on sensor footprints.
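    OpenFOAM cases themselves are configured through dictionary files and are not reproduced here. As a heavily simplified stand-in for the physics being discussed, the sketch below evaluates a steady-state Gaussian plume from a near-ground point source; it ignores the building-wake effects the abstract emphasizes and is only meant to illustrate how a downwind sensor's concentration depends on source strength, wind, and plume spread. All numbers are invented.

```python
import numpy as np

def gaussian_plume(y, z, q, u, source_height, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (kg/m^3) for a continuous point
    source of strength q (kg/s) in a uniform wind u (m/s). sigma_y and sigma_z are
    the plume spreads (m) at the downwind distance of interest. Toy model only."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - source_height)**2 / (2.0 * sigma_z**2)) +
                np.exp(-(z + source_height)**2 / (2.0 * sigma_z**2)))  # ground reflection
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Concentration at a sensor 2 m above ground, on the plume centerline,
# for a small near-ground CH4 leak 50 m upwind (spreads chosen for that distance).
c = gaussian_plume(y=0.0, z=2.0, q=1e-4, u=3.0,
                   source_height=1.0, sigma_y=4.0, sigma_z=2.5)
print(f"concentration: {c:.3e} kg/m^3")
```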

  11. Titanic Weather Forecasting

    NASA Astrophysics Data System (ADS)

    2004-04-01

    New Detailed VLT Images of Saturn's Largest Moon Optimizing space missions Titan, the largest moon of Saturn was discovered by Dutch astronomer Christian Huygens in 1655 and certainly deserves its name. With a diameter of no less than 5,150 km, it is larger than Mercury and twice as large as Pluto. It is unique in having a hazy atmosphere of nitrogen, methane and oily hydrocarbons. Although it was explored in some detail by the NASA Voyager missions, many aspects of the atmosphere and surface still remain unknown. Thus, the existence of seasonal or diurnal phenomena, the presence of clouds, the surface composition and topography are still under debate. There have even been speculations that some kind of primitive life (now possibly extinct) may be found on Titan. Titan is the main target of the NASA/ESA Cassini/Huygens mission, launched in 1997 and scheduled to arrive at Saturn on July 1, 2004. The ESA Huygens probe is designed to enter the atmosphere of Titan, and to descend by parachute to the surface. Ground-based observations are essential to optimize the return of this space mission, because they will complement the information gained from space and add confidence to the interpretation of the data. Hence, the advent of the adaptive optics system NAOS-CONICA (NACO) [1] in combination with ESO's Very Large Telescope (VLT) at the Paranal Observatory in Chile now offers a unique opportunity to study the resolved disc of Titan with high sensitivity and increased spatial resolution. Adaptive Optics (AO) systems work by means of a computer-controlled deformable mirror that counteracts the image distortion induced by atmospheric turbulence. It is based on real-time optical corrections computed from image data obtained by a special camera at very high speed, many hundreds of times each second (see e.g. ESO Press Release 25/01 , ESO PR Photos 04a-c/02, ESO PR Photos 19a-c/02, ESO PR Photos 21a-c/02, ESO Press Release 17/02, and ESO Press Release 26/03 for earlier NACO images, and ESO Press Release 11/03 for MACAO-VLTI results.) The southern smile ESO PR Photo 08a/04 ESO PR Photo 08a/04 Images of Titan on November 20, 25 and 26, 2002 Through Five Filters (VLT YEPUN + NACO) [Preview - JPEG: 522 x 400 pix - 40k] [Normal - JPEG: 1043 x 800 pix - 340k] [Hires - JPEG: 2875 x 2205 pix - 1.2M] Caption: ESO PR Photo 08a/04 shows Titan (apparent visual magnitude 8.05, apparent diameter 0.87 arcsec) as observed with the NAOS/CONICA instrument at VLT Yepun (Paranal Observatory, Chile) on November 20, 25 and 26, 2003, between 6.00 UT and 9.00 UT. The median seeing values were 1.1 arcsec and 1.5 arcsec respectively for the 20th and 25th. Deconvoluted ("sharpened") images of Titan are shown through 5 different narrow-band filters - they allow to probe in some detail structures at different altitudes and on the surface. Depending on the filter, the integration time varies from 10 to 100 seconds. While Titan shows its leading hemisphere (i.e. the one observed when Titan moves towards us) on Nov. 20, the trailing side (i.e the one we see when Titan moves away from us in its course around Saturn) - which displays less bright surface features - is observed on the last two dates. 
ESO PR Photo 08b/04 ESO PR Photo 08b/04 Titan Observed Through Nine Different Filters on November 26, 2002 [Preview - JPEG: 480 x 400 pix - 36k] [Normal - JPEG: 960 x 800 pix - 284k] Caption: ESO PR Photo 08b/04: Images of Titan taken on November 26, 2002 through nine different filters to probe different altitudes, ranging from the stratosphere to the surface. On this night, a stable "seeing" (image quality before adaptive optics correction) of 0.9 arcsec allowed the astronomers to attain the diffraction limit of the telescope (0.032 arcsec resolution). Due to these good observing conditions, Titan's trailing hemisphere was observed with contrasts of about 40%, allowing the detection of several bright features on this surface region, once thought to be quite dark and featureless. ESO PR Photo 08c/04 ESO PR Photo 08c/04 Titan Surface Projections [Preview - JPEG: 601 x 400 pix - 64k] [Normal - JPEG: 1201 x 800 pix - 544k] Caption: ESO PR Photo 08c/04 : Titan images obtained with NACO on November 26th, 2002. Left: Titan's surface projection on the trailing hemisphere as observed at 1.3 μm, revealing a complex brightness structure thanks to the high image contrast of about 40%. Right: a new, possibly meteorological, phenomenon observed at 2.12 μm in Titan's atmosphere, in the form of a bright feature revolving around the South Pole. A team of French astronomers [2] have recently used the NACO state-of-the-art adaptive optics system on the fourth 8.2-m VLT unit telescope, Yepun, to map the surface of Titan by means of near-infrared images and to search for changes in the dense atmosphere. These extraordinary images have a nominal resolution of 1/30th arcsec and show details of the order of 200 km on the surface of Titan. To provide the best possible views, the raw data from the instrument were subjected to deconvolution (image sharpening). Images of Titan were obtained through 9 narrow-band filters, sampling near-infrared wavelengths with large variations in methane opacity. This permits sounding of different altitudes ranging from the stratosphere to the surface. Titan harbours at 1.24 and 2.12 μm a "southern smile", that is a north-south asymmetry, while the opposite situation is observed with filters probing higher altitudes, such as 1.64, 1.75 and 2.17 μm. A high-contrast bright feature is observed at the South Pole and is apparently caused by a phenomenon in the atmosphere, at an altitude below 140 km or so. This feature was found to change its location on the images from one side of the south polar axis to the other during the week of observations. Outlook An additional series of NACO observations of Titan is foreseen later this month (April 2004). These will be a great asset in helping optimize the return of the Cassini/Huygens mission. Several of the instruments aboard the spacecraft depend on such ground-based data to better infer the properties of Titan's surface and lower atmosphere. Although the astronomers have yet to model and interpret the physical and geophysical phenomena now observed and to produce a full cartography of the surface, this first analysis provides a clear demonstration of the marvellous capabilities of the NACO imaging system. More examples of the exciting science possible with this facility will be found in a series of five papers published today in the European research journal Astronomy & Astrophysics (Vol. 47, L1 to L24).

  12. Application of Open Source Software by the Lunar Mapping and Modeling Project

    NASA Astrophysics Data System (ADS)

    Ramirez, P.; Goodale, C. E.; Bui, B.; Chang, G.; Kim, R. M.; Law, E.; Malhotra, S.; Rodriguez, L.; Sadaqathullah, S.; Mattmann, C. A.; Crichton, D. J.

    2011-12-01

    The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is responsible for the development of an information system to support lunar exploration, decision analysis, and release of lunar data to the public. The data available through the lunar portal is predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). This project has created a gold source of data, models, and tools for lunar explorers to exercise and incorporate into their activities. At Jet Propulsion Laboratory (JPL), we focused on engineering and building the infrastructure to support cataloging, archiving, accessing, and delivery of lunar data. We decided to use a RESTful service-oriented architecture to enable us to abstract from the underlying technology choices and focus on interfaces to be used internally and externally. This decision allowed us to leverage several open source software components and integrate them by either writing a thin REST service layer or relying on the API they provided; the approach chosen was dependent on the targeted consumer of a given interface. We will discuss our varying experience using open source products; namely Apache OODT, Oracle Berkeley DB XML, Apache Solr, and Oracle OpenSSO (now named OpenAM). Apache OODT, developed at NASA's Jet Propulsion Laboratory and recently migrated over to Apache, provided the means for ingestion and cataloguing of products within the infrastructure. Its use was based on team experience with the project and benefits gained on other projects internal and external to JPL. Berkeley DB XML, distributed by Oracle for both commercial and open source use, was the storage technology chosen for our metadata. This decision was in part based on our use of Federal Geographic Data Committee (FGDC) metadata, which is expressed in XML, and the desire to keep it in its native form and exploit other technologies built on top of XML. Apache Solr, an open source search engine, was used to drive our search interface and as a way to store references to metadata and data exposed via REST endpoints. As was the case with Apache OODT, there was team experience with this component that helped drive this choice. Lastly, OpenSSO, an open source single sign-on service, was used to secure and provide access constraints to our REST-based services. For this product there was little past experience, but given our service-based approach it seemed to be a natural fit. Given our exposure to open source, we will discuss the tradeoffs and benefits of the choices made. Moreover, we will dive into the context of how the software packages were used and the impact their design and extensibility had on the construction of the infrastructure. Finally, we will compare our experiences across the open source solutions and the attributes that can shape the impression one gets. This comprehensive account of our endeavor should aid others in their assessment and use of open source.
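    As an illustration of the kind of thin REST layer described above, the sketch below queries an Apache Solr core over HTTP. The host, core name, and field names are hypothetical and are not the LMMP endpoints; only the /select handler and the q, rows, and wt parameters are standard Solr.

```python
import requests

def search_products(keyword, rows=10):
    """Query a (hypothetical) Solr core that indexes product metadata."""
    params = {"q": f"title:{keyword}", "rows": rows, "wt": "json"}
    response = requests.get("http://localhost:8983/solr/lunar/select", params=params)
    response.raise_for_status()
    return [doc.get("title") for doc in response.json()["response"]["docs"]]

if __name__ == "__main__":
    print(search_products("crater"))
```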

  13. Building CHAOS: An Operating System for Livermore Linux Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garlick, J E; Dunlap, C M

    2003-02-21

    The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.

  14. 10 CFR 39.43 - Inspection, maintenance, and opening of a source or source holder.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Inspection, maintenance, and opening of a source or source holder. 39.43 Section 39.43 Energy NUCLEAR REGULATORY COMMISSION LICENSES AND RADIATION SAFETY..., for defects before each use to ensure that the equipment is in good working condition and that...

  15. 10 CFR 39.43 - Inspection, maintenance, and opening of a source or source holder.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Inspection, maintenance, and opening of a source or source holder. 39.43 Section 39.43 Energy NUCLEAR REGULATORY COMMISSION LICENSES AND RADIATION SAFETY..., for defects before each use to ensure that the equipment is in good working condition and that...

  16. 10 CFR 39.43 - Inspection, maintenance, and opening of a source or source holder.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Inspection, maintenance, and opening of a source or source holder. 39.43 Section 39.43 Energy NUCLEAR REGULATORY COMMISSION LICENSES AND RADIATION SAFETY..., for defects before each use to ensure that the equipment is in good working condition and that...

  17. 10 CFR 39.43 - Inspection, maintenance, and opening of a source or source holder.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Inspection, maintenance, and opening of a source or source holder. 39.43 Section 39.43 Energy NUCLEAR REGULATORY COMMISSION LICENSES AND RADIATION SAFETY..., for defects before each use to ensure that the equipment is in good working condition and that...

  18. 10 CFR 39.43 - Inspection, maintenance, and opening of a source or source holder.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Inspection, maintenance, and opening of a source or source holder. 39.43 Section 39.43 Energy NUCLEAR REGULATORY COMMISSION LICENSES AND RADIATION SAFETY..., for defects before each use to ensure that the equipment is in good working condition and that...

  19. Sharing Lessons-Learned on Effective Open Data, Open-Source Practices from OpenAQ, a Global Open Air Quality Community.

    NASA Astrophysics Data System (ADS)

    Hasenkopf, C. A.

    2017-12-01

    Increasingly, open data, open-source projects are unearthing rich datasets and tools, previously impossible for more traditional avenues to generate. These projects are possible, in part, because of the emergence of online collaborative and code-sharing tools, decreasing costs of cloud-based services to fetch, store, and serve data, and increasing interest of individuals to contribute their time and skills to 'open projects.' While such projects have generated palpable enthusiasm from many sectors, many of these projects face uncharted paths for sustainability, visibility, and acceptance. Our project, OpenAQ, is an example of an open-source, open data community that is currently forging its own uncharted path. OpenAQ is an open air quality data platform that aggregates and universally formats government and research-grade air quality data from 50 countries across the world. To date, we make available more than 76 million air quality (PM2.5, PM10, SO2, NO2, O3, CO and black carbon) data points through an open Application Programming Interface (API) and a user-customizable download interface at https://openaq.org. The goal of the platform is to enable an ecosystem of users to advance air pollution efforts from science to policy to the private sector. The platform is also an open-source project (https://github.com/openaq) and has only been made possible through the coding and data contributions of individuals around the world. In our first two years of existence, we have seen requests for data to our API skyrocket to more than 6 million datapoints per month, and use-cases as varied as ingesting data aggregated from our system into real-time models of wildfires to building open-source statistical packages (e.g. ropenaq and py-openaq) on top of the platform to creating public-friendly apps and chatbots. We will share a whirl-wind trip through our evolution and the many lessons learned so far related to platform structure, community engagement, organizational model type and sustainability.
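    The platform's open API and download interface are described above. As a rough sketch only, the snippet below pulls recent measurements with Python's standard library; the endpoint path and parameter names are assumptions based on the abstract and may differ from the live API, which community packages such as py-openaq wrap more completely.

    # Rough sketch of requesting measurements from the OpenAQ API described
    # above. The endpoint path and query parameters are assumptions, not an
    # authoritative reference for the current API.
    import json
    import urllib.parse
    import urllib.request

    BASE_URL = "https://api.openaq.org/v1/measurements"  # assumed endpoint

    def fetch_measurements(country: str, parameter: str = "pm25", limit: int = 100):
        """Return a list of measurement records for one country and pollutant."""
        params = urllib.parse.urlencode(
            {"country": country, "parameter": parameter, "limit": limit}
        )
        with urllib.request.urlopen(f"{BASE_URL}?{params}") as resp:
            return json.load(resp).get("results", [])

    if __name__ == "__main__":
        for m in fetch_measurements("IN", "pm25", limit=5):
            print(m.get("location"), m.get("value"), m.get("unit"))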

  20. Comprehensive Routing Security Development and Deployment for the Internet

    DTIC Science & Technology

    2015-02-01

    feature enhancement and bug fixes. • MySQL : MySQL is a widely used and popular open source database package. It was chosen for database support in the...RPSTIR depends on several other open source packages. • MySQL : MySQL is used for the local RPKI database cache. • OpenSSL: OpenSSL is used for...cryptographic libraries for X.509 certificates. • ODBC mySql Connector: ODBC (Open Database Connectivity) is a standard programming interface (API) for

  1. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed uptime, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.

  2. GIS-Based Noise Simulation Open Source Software: N-GNOIS

    NASA Astrophysics Data System (ADS)

    Vijay, Ritesh; Sharma, A.; Kumar, M.; Shende, V.; Chakrabarti, T.; Gupta, Rajesh

    2015-12-01

    Geographical information system (GIS)-based noise simulation software (N-GNOIS) has been developed to simulate the noise scenario due to point and mobile sources, considering the impact of geographical features and meteorological parameters. These have been addressed in the software through attenuation modules for atmosphere, vegetation and barriers. N-GNOIS is a user-friendly, platform-independent and Open Geospatial Consortium (OGC)-compliant software package. It has been developed using open source technology (QGIS) and an open source language (Python). N-GNOIS has unique features such as the cumulative impact of point and mobile sources, building structure, and honking due to traffic. Honking is a common phenomenon in developing countries and is frequently observed on all types of roads. N-GNOIS also helps in designing physical barriers and vegetation cover to check the propagation of noise, and acts as a decision-making tool for planning and management of the noise component in environmental impact assessment (EIA) studies.
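    The abstract does not give the equations behind the attenuation modules, so the sketch below is only a generic illustration of the kind of calculation such a tool performs: spherical spreading from a point source plus a caller-supplied excess attenuation term standing in for atmosphere, vegetation, or barrier effects.

    # Generic illustration of point-source noise attenuation (not the actual
    # N-GNOIS equations): free-field spherical spreading from a source of known
    # sound power level, minus an extra attenuation term (in dB) that a model
    # like N-GNOIS would derive from atmosphere, vegetation, or barriers.
    import math

    def receiver_level(sound_power_db: float, distance_m: float,
                       excess_attenuation_db: float = 0.0) -> float:
        """Sound pressure level at a receiver distance_m from a point source."""
        if distance_m <= 0:
            raise ValueError("distance must be positive")
        spreading_loss = 20.0 * math.log10(distance_m) + 11.0  # spherical spreading
        return sound_power_db - spreading_loss - excess_attenuation_db

    if __name__ == "__main__":
        # A 100 dB source heard 50 m away with 3 dB of assumed vegetation attenuation.
        print(round(receiver_level(100.0, 50.0, excess_attenuation_db=3.0), 1), "dB")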

  3. Utilization of open source electronic health record around the world: A systematic review

    PubMed Central

    Aminpour, Farzaneh; Sadoughi, Farahnaz; Ahamdi, Maryam

    2014-01-01

    Many projects on developing Electronic Health Record (EHR) systems have been carried out in many countries. The current study was conducted to review the published data on the utilization of open source EHR systems in different countries all over the world. Using free-text and keyword search techniques, six bibliographic databases were searched for related articles. The identified papers were screened and reviewed in several stages for relevance and validity. The findings showed that open source EHRs have been widely used in resource-limited regions on all continents, especially in Sub-Saharan Africa and South America. This would create opportunities to improve the national level of healthcare, especially in developing countries with minimal financial resources. Open source technology is a solution to overcome the problems of high cost and inflexibility associated with proprietary health information systems. PMID:24672566

  4. Using applet-servlet communication for optimizing window, level and crop for DICOM to JPEG conversion.

    PubMed

    Kamauu, Aaron W C; DuVall, Scott L; Wiggins, Richard H; Avrin, David E

    2008-09-01

    In the creation of interesting radiological cases in a digital teaching file, it is necessary to adjust the window and level settings of an image to effectively display the educational focus. The web-based applet described in this paper presents an effective solution for real-time window and level adjustments without leaving the picture archiving and communications system workstation. Optimized images are created, as user-defined parameters are passed between the applet and a servlet on the Health Insurance Portability and Accountability Act-compliant teaching file server.
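    The applet passes user-chosen window and level values to a servlet that renders the optimized JPEG. The underlying mapping is the standard linear window/level transform, sketched below; the function and parameter names are illustrative and not taken from the paper.

    # Illustrative sketch of the standard linear window/level mapping used when
    # rendering DICOM pixel data to an 8-bit image for JPEG export. Names are
    # illustrative; this is not the paper's applet/servlet code.
    import numpy as np

    def apply_window_level(pixels: np.ndarray, window: float, level: float) -> np.ndarray:
        """Map raw pixel values to 0-255 with a linear window/level transform."""
        lo = level - window / 2.0
        hi = level + window / 2.0
        scaled = (pixels.astype(np.float64) - lo) / (hi - lo)  # scale to 0..1
        return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

    if __name__ == "__main__":
        raw = np.array([[-200, 40, 80], [400, 1000, 2000]], dtype=np.int16)
        print(apply_window_level(raw, window=400, level=40))  # a typical soft-tissue window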

  5. Sedimentological and radiochemical characteristics of marsh deposits from Assateague Island and the adjacent vicinity, Maryland and Virginia, following Hurricane Sandy

    USGS Publications Warehouse

    Smith, Christopher G.; Marot, Marci E.; Ellis, Alisha M.; Wheaton, Cathryn J.; Bernier, Julie C.; Adams, C. Scott

    2015-09-15

    This report serves as an archive for sedimentological and radiochemical data derived from the surface sediments and marsh cores collected March 26–April 4, 2014. Select surficial data are also available for the additional sampling period October 21–30, 2014. Downloadable data are available as Excel spreadsheets and as JPEG files. Additional files include field documentation, x-radiographs, photographs, detailed results of sediment grain-size analyses, and formal Federal Geographic Data Committee metadata (data downloads).

  6. Evaluating the potential effects of hurricanes on long-term sediment accumulation in two micro-tidal sub-estuaries: Barnegat Bay and Little Egg Harbor, New Jersey, U.S.A.

    USGS Publications Warehouse

    Marot, Marci E.; Smith, Christopher G.; Ellis, Alisha M.; Wheaton, Cathryn J.

    2016-06-23

    This report serves as an archive for sedimentological and radiochemical data derived from the surface sediments and box cores. Downloadable data are available as Excel spreadsheets, PDF files, and JPEG files, and include sediment core data plots and x-radiographs, as well as physical-properties, grain-size, alpha-spectroscopy, and gamma-spectroscopy data. Federal Geographic Data Committee metadata are available for analytical datasets in the data downloads page of this report.

  7. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    Ripple-Carry RCA Ripple-Carry Adder RF Radio Frequency RMS Root-Mean-Square SEU Single Event Upset SIPI Signal and Image Processing Institute SNR...correctness, where 0.5 < p < 1, and a probability (1−p) of error. Errors could be caused by noise, radio frequency (RF) interference, crosstalk...utilized in the Apollo Guidance Computer is the three input NOR Gate... At the time that the decision was made to use integrated circuits, the
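    The excerpt above concerns gates that yield the correct output with probability p (0.5 < p < 1). A toy simulation of that idea, applied here to a ripple-carry adder, might look like the following; it illustrates the concept only and is not the report's implementation.

    # Toy simulation of probabilistic Boolean logic: each gate output is flipped
    # with probability 1 - p. Applied to a ripple-carry adder purely to
    # illustrate the concept discussed in the report.
    import random

    def noisy(bit: int, p: float) -> int:
        """Return the bit unchanged with probability p, flipped otherwise."""
        return bit if random.random() < p else bit ^ 1

    def noisy_ripple_carry_add(a: int, b: int, width: int, p: float) -> int:
        carry, result = 0, 0
        for i in range(width):
            x, y = (a >> i) & 1, (b >> i) & 1
            s = noisy(x ^ y ^ carry, p)                      # noisy sum bit
            carry = noisy((x & y) | (carry & (x ^ y)), p)    # noisy carry-out bit
            result |= s << i
        return result

    if __name__ == "__main__":
        random.seed(0)
        for p in (1.0, 0.99, 0.9):
            print(f"p={p}: {noisy_ripple_carry_add(183, 99, 9, p)} (exact {183 + 99})")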

  8. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm Using Probabilistic Boolean Logic Applied to CMOS Components

    DTIC Science & Technology

    2015-12-24

    Ripple-Carry RCA Ripple-Carry Adder RF Radio Frequency RMS Root-Mean-Square SEU Single Event Upset SIPI Signal and Image Processing Institute SNR...correctness, where 0.5 < p < 1, and a probability (1−p) of error. Errors could be caused by noise, radio frequency (RF) interference, crosstalk...utilized in the Apollo Guidance Computer is the three input NOR Gate... At the time that the decision was made to use integrated circuits, the

  9. Using Photo Story Lectures in an Online Astronomy Class

    NASA Astrophysics Data System (ADS)

    Caffey, James F.

    2008-05-01

    Photo Story is a free program from Microsoft that was designed to allow people to make videos from photos and add voice narration to them. I use Photo Story to create video lectures in my online Astronomy class at Drury University in Springfield, Missouri. I take PowerPoint slides from my publisher, turn them into JPEG files, and add my voice over them to create the video lecture. Students at a distance say the lectures make them feel like they are back in the classroom. I will present several lectures.

  10. Digitizing the KSO white light images

    NASA Astrophysics Data System (ADS)

    Pötzi, W.

    From 1989 up to 2007 the Sun was observed at the Kanzelhöhe Observatory in white light on photographic film material. The images are on transparent sheet films and are currently not available to the scientific community. With a photo scanner for transparent film material, the films are now being scanned and then prepared for scientific use. The programs for post-processing are already finished and produce FITS and JPEG files as output. The scanning should be finished by the end of 2011, and the data should then be available via our homepage.
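    The post-processing step described above writes both FITS and JPEG files. A minimal sketch of such a conversion, assuming astropy and Pillow are available, is shown below; the file names and the simple percentile scaling are illustrative choices, not the Kanzelhöhe pipeline.

    # Minimal sketch of converting a scanned solar image stored as FITS into an
    # 8-bit JPEG, assuming astropy and Pillow are installed. File names and the
    # percentile scaling are illustrative, not the actual KSO post-processing.
    import numpy as np
    from astropy.io import fits
    from PIL import Image

    def fits_to_jpeg(fits_path: str, jpeg_path: str) -> None:
        data = fits.getdata(fits_path).astype(np.float64)
        lo, hi = np.percentile(data, (1, 99))            # clip extreme values
        scaled = np.clip((data - lo) / (hi - lo), 0.0, 1.0)
        Image.fromarray((scaled * 255).astype(np.uint8)).save(jpeg_path, quality=95)

    if __name__ == "__main__":
        fits_to_jpeg("kanz_whitelight_scan.fits", "kanz_whitelight_scan.jpg")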

  11. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
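    For readers unfamiliar with the decomposition the encoder operates on, a bare-bones matching pursuit over a random unit-norm dictionary is sketched below; the joint atom-position and coefficient coding proposed in the paper is not shown.

    # Bare-bones matching pursuit on a 1-D signal with a random unit-norm
    # dictionary, included only to illustrate the atom positions and
    # coefficients that the paper's encoder would subsequently code.
    import numpy as np

    def matching_pursuit(signal: np.ndarray, dictionary: np.ndarray, n_atoms: int):
        """Greedily pick the atom most correlated with the current residual."""
        residual = signal.astype(np.float64).copy()
        atoms = []  # (position, coefficient) pairs to be encoded
        for _ in range(n_atoms):
            correlations = dictionary.T @ residual
            idx = int(np.argmax(np.abs(correlations)))
            coef = float(correlations[idx])
            residual -= coef * dictionary[:, idx]
            atoms.append((idx, coef))
        return atoms, residual

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
        x = 2.0 * D[:, 10] - 0.5 * D[:, 200] + 0.01 * rng.standard_normal(64)
        picked, res = matching_pursuit(x, D, n_atoms=4)
        print(picked, float(np.linalg.norm(res)))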

  12. Bioclipse: an open source workbench for chemo- and bioinformatics.

    PubMed

    Spjuth, Ola; Helmus, Tobias; Willighagen, Egon L; Kuhn, Stefan; Eklund, Martin; Wagener, Johannes; Murray-Rust, Peter; Steinbeck, Christoph; Wikberg, Jarl E S

    2007-02-22

    There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net.

  13. Web accessibility and open source software.

    PubMed

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse projects called Accessibility Tools Framework (ACTF), the aim of which is development of extensible infrastructure, upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  14. A Platform for Innovation and Standards Evaluation: a Case Study from the OpenMRS Open-Source Radiology Information System.

    PubMed

    Gichoya, Judy W; Kohli, Marc; Ivange, Larry; Schmidt, Teri S; Purkayastha, Saptarshi

    2018-05-10

    Open-source development can provide a platform for innovation by seeking feedback from community members as well as providing tools and infrastructure to test new standards. Vendors of proprietary systems may delay adoption of new standards until there are sufficient incentives such as legal mandates or financial incentives to encourage/mandate adoption. Moreover, open-source systems in healthcare have been widely adopted in low- and middle-income countries and can be used to bridge gaps that exist in global health radiology. Since 2011, the authors, along with a community of open-source contributors, have worked on developing an open-source radiology information system (RIS) across two communities-OpenMRS and LibreHealth. The main purpose of the RIS is to implement core radiology workflows, on which others can build and test new radiology standards. This work has resulted in three major releases of the system, with current architectural changes driven by changing technology, development of new standards in health and imaging informatics, and changing user needs. At their core, both these communities are focused on building general-purpose EHR systems, but based on user contributions from the fringes, we have been able to create an innovative system that has been used by hospitals and clinics in four different countries. We provide an overview of the history of the LibreHealth RIS, the architecture of the system, overview of standards integration, describe challenges of developing an open-source product, and future directions. Our goal is to attract more participation and involvement to further develop the LibreHealth RIS into an Enterprise Imaging System that can be used in other clinical imaging including pathology and dermatology.

  15. Defending the Amazon: Conservation, Development and Security in Brazil

    DTIC Science & Technology

    2009-03-01

    against drugs is not 191 Nelson Jobim, interview by Empresa Brasil de Comunicação Radio, trans. Open Source Center, February 6, 2009, available from... Empresa Brasil de Comunicação Radio, trans. Open Source Center, February 6, 2009, available from http://www.ebc.com.br (accessed February 23, 2009...Institute of Peace, 1996. Jobim, Nelson. Interview by Empresa Brasil de Comunicação Radio. Translated by Open Source Center. February 6, 2009

  16. Open-Source web-based geographical information system for health exposure assessment

    PubMed Central

    2012-01-01

    This paper presents the design and development of an open source web-based Geographical Information System allowing users to visualise, customise and interact with spatial data within their web browser. The developed application shows that by using solely Open Source software it was possible to develop a customisable web based GIS application that provides functions necessary to convey health and environmental data to experts and non-experts alike without the requirement of proprietary software. PMID:22233606

  17. Open source 3D printers: an appropriate technology for building low cost optics labs for the developing communities

    NASA Astrophysics Data System (ADS)

    Gwamuri, J.; Pearce, Joshua M.

    2017-08-01

    The recent introduction of RepRap (self-replicating rapid prototyper) 3-D printers and the resultant open source technological improvements have resulted in affordable 3-D printing, enabling low-cost distributed manufacturing for individuals. This development and others, such as the rise of open source-appropriate technology (OSAT) and solar-powered 3-D printing, are moving 3-D printing from an industry-based technology to one that could be used in the developing world for sustainable development. In this paper, we explore some specific technological improvements and how distributed manufacturing with open-source 3-D printing can be used to provide open-source 3-D printable optics components for developing world communities through the ability to print less expensive and customized products. This paper presents an open-source, low-cost optical equipment library which enables relatively easily adapted, customizable designs with the potential of changing the way optics is taught in resource-constrained communities. The study shows that this method of scientific hardware development has the potential to enable a much broader audience to participate in optical experimentation, both as research and teaching platforms. Conclusions on the technical viability of 3-D printing to assist in development, and recommendations on how developing communities can fully exploit this technology to improve the learning of optics through hands-on methods, are outlined.

  18. A Forceful Demonstration by FORS

    NASA Astrophysics Data System (ADS)

    1998-09-01

New VLT Instrument Provides Impressive Images Following a tight schedule, the ESO Very Large Telescope (VLT) project forges ahead - full operative readiness of the first of the four 8.2-m Unit Telescopes will be reached early next year. On September 15, 1998, another crucial milestone was successfully passed on time and within budget. Just a few days after having been mounted for the first time at the first 8.2-m VLT Unit Telescope (UT1), the first of a powerful complement of complex scientific instruments, FORS1 (FOcal Reducer and Spectrograph), saw First Light. Right from the beginning, it obtained some excellent astronomical images. This major event now opens a wealth of new opportunities for European Astronomy. FORS - a technological marvel. FORS1, with its future twin (FORS2), is the product of one of the most thorough and advanced technological studies ever made of a ground-based astronomical instrument. This unique facility is now mounted at the Cassegrain focus of the VLT UT1. Despite its significant dimensions, 3 x 1.5 metres and 2.3 tonnes, it appears rather small below the giant 53 m² Zerodur main mirror. Profiting from the large mirror area and the excellent optical properties of the UT1, FORS has been specifically designed to investigate the faintest and most remote objects in the universe. This complex VLT instrument will soon allow European astronomers to look beyond current observational horizons. The FORS instruments are "multi-mode instruments" that may be used in several different observation modes. It is, e.g., possible to take images with two different image scales (magnifications), and spectra at different resolutions may be obtained of individual or multiple objects. Thus, FORS may first detect the images of distant galaxies and immediately thereafter obtain recordings of their spectra. This allows, for instance, the determination of their stellar content and distances. As one of the most powerful astronomical instruments of its kind, FORS1 is a real workhorse for the study of the distant universe. How FORS was built. The FORS project is being carried out under ESO contract by a consortium of three German astronomical institutes, namely the Heidelberg State Observatory and the University Observatories of Göttingen and Munich. When this project is concluded, the participating institutes will have invested about 180 man-years of work. The Heidelberg State Observatory was responsible for directing the project, for designing the entire optical system, for developing the components of the imaging, spectroscopic, and polarimetric optics, and for producing the special computer software needed for handling and analysing the measurements obtained with FORS. Moreover, a telescope simulator was built in the shop of the Heidelberg observatory that made it possible to test all major functions of FORS in Europe, before the instrument was shipped to Paranal. The University Observatory of Göttingen performed the design, the construction and the installation of the entire mechanics of FORS. Most of the high-precision parts, in particular the multislit unit, were manufactured in the observatory's fine-mechanical workshops. The procurement of the huge instrument housings and flanges, the computer analysis for mechanical and thermal stability of the sensitive spectrograph and the construction of the handling, maintenance and aligning equipment, as well as testing the numerous opto- and electro-mechanical functions, were also under the responsibility of this Observatory.
The University of Munich had the responsibility for the management of the project, the integration and test in the laboratory of the complete instrument, for design and installation of all electronics and electro-mechanics, and for developing and testing the comprehensive software to control FORS in all its parts completely by computers (filter and grism wheels, shutters, multi-object slit units, masks, all optical components, electro motors, encoders etc.). In addition, detailed computer software was provided to prepare the complex astronomical observations with FORS in advance and to monitor the instrument performance by quality checks of the scientific data accumulated. In return for building FORS for the community of European astrophysicists, the scientists in the three institutions of the FORS Consortium have received a certain amount of Guaranteed Observing Time at the VLT. This time will be used for various research projects concerned, among others, with minor bodies in the outer solar system, stars at late stages of their evolution and the clouds of gas they eject, as well as galaxies and quasars at very large distances, thereby permitting a look back towards the early epoch of the universe. First tests of FORS1 at the VLT UT1: a great success. After careful preparation, the FORS consortium has now started the so-called commissioning of the instrument. This comprises the thorough verification of the specified instrument properties at the telescope, checking the correct functioning under software control from the Paranal control room and, at the end of this process, a demonstration that the instrument fulfills its scientific purpose as planned. While performing these tests, the commissioning team at Paranal were able to obtain images of various astronomical objects, some of which are shown here. Two of these were obtained on the night of "FORS First Light". The photos demonstrate some of the impressive possibilities with this new instrument. They are based on observations with the FORS standard resolution collimator (field size 6.8 x 6.8 arcmin = 2048 x 2048 pixels; 1 pixel = 0.20 arcsec). Spiral galaxy NGC 1288: ESO PR Photo 37a/98, a colour image of spiral galaxy NGC 1288, obtained on the night of "FORS First Light". The first photo shows a reproduction of a colour composite image of the beautiful spiral galaxy NGC 1288 in the southern constellation Fornax. PR Photo 37a/98 covers the entire field that was imaged on the 2048 x 2048 pixel CCD camera. It is based on CCD frames in different colours that were taken under good seeing conditions during the night of First Light (15 September 1998). The distance to this galaxy is about 300 million light-years; it recedes with a velocity of 4500 km/sec. Its diameter is about 200,000 light-years. Technical information: Photo 37a/98 is based on a composite of three images taken behind three different filters: B (420 nm; 6 min), V (530 nm; 3 min) and I (800 nm; 3 min) during a period of 0.7 arcsec seeing. The field shown measures 6.8 x 6.8 arcmin. North is left; East is down. Distant cluster of galaxies: ESO PR Photo 37b/98, a peculiar cluster of galaxies in a sky field near the quasar PB5763.
ESO PR Photo 37c/98: enlargement from PR Photo 37b/98, showing the peculiar cluster of galaxies in more detail. The next photos are reproduced from a 5-min near-infrared exposure, also obtained during the night of First Light of the FORS1 instrument (September 15, 1998). PR Photo 37b/98 shows a sky field near the quasar PB5763 in which is also seen a peculiar, quite distant cluster of galaxies. It consists of a large number of faint and distant galaxies that have not yet been thoroughly investigated. Many other fainter galaxies are seen in other areas, for instance in the right part of the field. This cluster is a good example of a type of object to which much observing time with FORS will be dedicated, once it enters into regular operation. An enlargement of the same field is reproduced in PR Photo 37c/98. It shows the individual members of this cluster of galaxies in more detail. Note in particular the interesting spindle-shaped galaxy that apparently possesses an equatorial ring. There is also a fine spiral galaxy and many fainter galaxies. They may be dwarf members of the cluster or be located in the background at even larger distances. Technical information: PR Photos 37b/98 (negative) and 37c/98 (positive) are based on a monochrome image taken in 0.8 arcsec seeing through a near-infrared (I; 800 nm) filter. The exposure time was 5 minutes and the image was flat-fielded. The fields shown measure 6.8 x 6.8 arcmin and 2.5 x 2.3 arcmin, respectively. North is to the upper left; East is to the lower left. Spiral galaxy NGC 1232: ESO PR Photo 37d/98, a colour image of spiral galaxy NGC 1232, obtained on September 21, 1998, and ESO PR Photo 37e/98, an enlargement of the central area of PR Photo 37d/98. This spectacular image (Photo 37d/98) of the large spiral galaxy NGC 1232 was obtained on September 21, 1998, during a period of good observing conditions. It is based on three exposures in ultra-violet, blue and red light, respectively. The colours of the different regions are well visible: the central areas (Photo 37e/98) contain older stars of reddish colour, while the spiral arms are populated by young, blue stars and many star-forming regions. Note the distorted companion galaxy on the left side of Photo 37d/98, shaped like the Greek letter "theta". NGC 1232 is located 20° south of the celestial equator, in the constellation Eridanus (The River). The distance is about 100 million light-years, but the excellent optical quality of the VLT and FORS allows us to see an incredible wealth of details. At the indicated distance, the edge of the field shown in PR Photo 37d/98 corresponds to about 200,000 light-years, or about twice the size of the Milky Way galaxy. Technical information: PR Photos 37d/98 and 37e/98 are based on a composite of three images taken behind three different filters: U (360 nm; 10 min), B (420 nm; 6 min) and R (600 nm; 2:30 min) during a period of 0.7 arcsec seeing. The fields shown measure 6.8 x 6.8 arcmin and 1.6 x 1.8 arcmin, respectively. North is up; East is to the left. Note: [1] This Press Release is published jointly (in English and German) by the European Southern Observatory, the Heidelberg State Observatory and the University Observatories of Goettingen and Munich.
A German version of this press release is also available. How to obtain ESO Press Information: ESO Press Information is made available on the World-Wide Web (URL: http://www.eso.org). ESO Press Photos may be reproduced, if credit is given to the European Southern Observatory.

  19. Experimental assessment of theory for refraction of sound by a shear layer

    NASA Technical Reports Server (NTRS)

    Schlinker, R. H.; Amiet, R. K.

    1978-01-01

    The refraction angle and amplitude changes associated with sound transmission through a circular, open-jet shear layer were studied in a 0.91 m diameter open jet acoustic research tunnel. Free stream Mach number was varied from 0.1 to 0.4. Good agreement between refraction angle correction theory and experiment was obtained over the test Mach number, frequency and angle measurement range for all on-axis acoustic source locations. For off-axis source positions, good agreement was obtained at a source-to-shear layer separation distance greater than the jet radius. Measurable differences between theory and experiment occurred at a source-to-shear layer separation distance less than one jet radius. A shear layer turbulence scattering experiment was conducted at 90 deg to the open jet axis for the same free stream Mach numbers and axial source locations used in the refraction study. Significant discrete tone spectrum broadening and tone amplitude changes were observed at open jet Mach numbers above 0.2 and at acoustic source frequencies greater than 5 kHz. More severe turbulence scattering was observed for downstream source locations.

  20. An Open Source Model for Open Access Journal Publication

    PubMed Central

    Blesius, Carl R.; Williams, Michael A.; Holzbach, Ana; Huntley, Arthur C.; Chueh, Henry

    2005-01-01

    We describe an electronic journal publication infrastructure that allows a flexible publication workflow, academic exchange around different forms of user submissions, and the exchange of articles between publishers and archives using a common XML based standard. This web-based application is implemented on a freely available open source software stack. This publication demonstrates the Dermatology Online Journal's use of the platform for non-biased independent open access publication. PMID:16779183
