Sample records for picture elements pixels

  1. Fast Pixel Buffer For Processing With Lookup Tables

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.

    1992-01-01

    Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table used to address those pixels in main image memory required for processing.
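
    The address-lookup idea can be illustrated off-line. The sketch below (an assumption-laden Python/NumPy analogy, not the NTRS circuit design) precomputes a table of memory addresses for each pixel's processing neighborhood and then gathers whole neighborhoods in one pass instead of issuing one read per pixel; all function names are hypothetical.

    ```python
    # Hedged sketch (not the NTRS design): gather pixels into a fast buffer
    # using a precomputed address-lookup table, instead of one read per pixel.
    import numpy as np

    def build_lut(height, width, offsets):
        """Precompute flat-memory addresses of each pixel's neighborhood.
        `offsets` is a list of (dy, dx) pairs defining the processing window."""
        ys, xs = np.mgrid[0:height, 0:width]
        luts = []
        for dy, dx in offsets:
            yy = np.clip(ys + dy, 0, height - 1)   # clamp at the borders
            xx = np.clip(xs + dx, 0, width - 1)
            luts.append((yy * width + xx).ravel())
        return np.stack(luts, axis=1)              # shape: (H*W, n_offsets)

    def gather(image, lut):
        """One vectorized gather replaces per-pixel random reads from 'main memory'."""
        flat = image.ravel()
        return flat[lut]                           # buffered neighborhoods

    image = np.arange(16, dtype=np.uint8).reshape(4, 4)
    lut = build_lut(4, 4, offsets=[(0, 0), (0, 1), (1, 0)])
    print(gather(image, lut)[:3])
    ```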

  2. Method of fabrication of display pixels driven by silicon thin film transistors

    DOEpatents

    Carey, Paul G.; Smith, Patrick M.

    1999-01-01

    Display pixels driven by silicon thin film transistors are fabricated on plastic substrates for use in active matrix displays, such as flat panel displays. The process for forming the pixels involves a prior method for forming individual silicon thin film transistors on low-temperature plastic substrates. Low-temperature substrates are generally considered as being incapable of withstanding sustained processing temperatures greater than about 200 °C. The pixel formation process results in a complete pixel and active matrix pixel array. A pixel (or picture element) in an active matrix display consists of a silicon thin film transistor (TFT) and a large electrode, which may control a liquid crystal light valve, an emissive material (such as a light emitting diode or LED), or some other light emitting or attenuating material. The pixels can be connected in arrays wherein rows of pixels contain common gate electrodes and columns of pixels contain common drain electrodes. The source electrode of each pixel TFT is connected to its pixel electrode, and is electrically isolated from every other circuit element in the pixel array.

  3. Viking Lander Mosaics of Mars

    NASA Technical Reports Server (NTRS)

    Morris, E. C.

    1985-01-01

    The Viking Lander 1 and 2 cameras acquired many high-resolution pictures of the Chryse Planitia and Utopia Planitia landing sites. Based on computer-processed data of a selected number of these pictures, eight high-resolution mosaics were published by the U.S. Geological Survey as part of the Atlas of Mars, Miscellaneous Investigation Series. The mosaics are composites of the best picture elements (pixels) of all the Lander pictures used. Each complete mosaic extends 342.5 deg in azimuth, from approximately 5 deg above the horizon to 60 deg below, and incorporates approximately 15 million pixels. Each mosaic is shown in a set of five sheets. One sheet contains the full panorama from one camera taken in either morning or evening. The other four sheets show sectors of the panorama at an enlarged scale; when joined together they make a panorama approximately 2' X 9'.

  4. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  5. Europa Imaging Highlights during GEM

    NASA Technical Reports Server (NTRS)

    1998-01-01

    During the two year Galileo Europa Mission (GEM), NASA's Galileo spacecraft will focus intensively on Jupiter's intriguing moon, Europa. This montage shows samples of some of the features that will be imaged during eight successive orbits. The images in this montage are in order of increasing orbit from the upper left (orbit 11) to the lower right (orbit 19).

    DESCRIPTIONS AND APPROXIMATE RESOLUTIONS

    Triple bands and dark spots - 1.6 kilometers/pixel
    Conamara Chaos - 1.6 kilometers/pixel
    Mannann'an Crater - 1.6 kilometers/pixel
    Cilix - 1.6 kilometers/pixel
    Agenor Linea and Thrace Macula - 2 kilometers/pixel
    South polar terrain - 2 kilometers/pixel
    Rhadamanthys Linea - 1.6 kilometers/pixel
    Europa plume search - 7 kilometers/pixel

    1. Triple bands and dark spots were the focus of some images from Galileo's eleventh orbit of Jupiter. Triple bands are multiple ridges with dark deposits along the outer margins. Some extend for thousands of kilometers across Europa's icy surface. They are cracks in the ice sheet and indicate the great stresses imposed on Europa by tides raised by Jupiter, as well as Europa's neighboring moons, Ganymede and Io. The dark spots or 'lenticulae' are spots of localized disruption.

    2. The Conamara Chaos region reveals icy plates which have broken up, moved, and rafted into new positions. This terrain suggests that liquid water or ductile ice was present near the surface. On Galileo's twelfth orbit of Jupiter, sections of this region with resolutions as high as 10 meters per picture element will be obtained.

    3. Mannann'an Crater is a feature newly discovered by Galileo in June 1996. Color and high resolution images (to 40 meters per picture element) from Galileo's fourteenth orbit of Jupiter will offer a close look at the crater and help characterize how impacts affect the icy surface of this moon.

    4. Cilix, a large mound about 1.5 kilometers high, is the center of Europa's coordinate system. Its concave top and what may be flow like features to the southwest of the mound are especially intriguing. The origin of this feature is unknown at present. Color, stereo, and high resolution images (to 65 meters per picture element) from Galileo's fifteenth orbit of Jupiter will offer new insights and resolve questions about its origin.

    5. Images of Agenor Linea (white arrow) and Thrace Macula (black arrow) with resolutions as high as 30 meters per picture element will be obtained during Galileo's sixteenth orbit of Jupiter. Agenor is an unusually bright lineament on Europa. Is the brightness due to new ice, and if so, does it represent recent activity? Could the dark region of Thrace Macula be a flow from ice volcanism?

    6. Images of Europa's south polar terrain obtained during Galileo's seventeenth orbit of Jupiter will offer insights into the processes which are active in this region. Is the ice crust thicker near Europa's poles than near the equator? The prominent dark line running from upper left to lower right through the center of this image is Astypalaea Linea. It is a fault about the length of the San Andreas fault in California and is the largest such fault known on Europa. Images with resolutions of 48 meters per picture element will be obtained to examine its geologic structure.

    7. This long lineament, Rhadamanthys Linea, is spotted with dark 'freckles'. Are these freckle features formed by icy volcanism? Is this an early form of a triple band? Stereo and high-resolution images (to 46 meters per picture element) obtained during Galileo's eighteenth orbit of Jupiter may indicate whether the lineament is the result of volcanic processes or is formed by other surface processes.

    8. During Galileo's nineteenth orbit of Jupiter, images of Europa will be taken with very low sun illumination, similar to taking a picture at sunset or sunrise. The objective will be to search for backlit plumes issuing from icy volcanic vents. Such plumes would be direct evidence of a liquid ocean beneath the ice. Resolutions will be as high as 40 meters per picture element. This picture is a simulated image created from Galileo data obtained during the spacecraft's second orbit of Jupiter in September 1996.

    North is to the top of the pictures. During orbit 13, the Galileo spacecraft was behind the sun from our vantage point on Earth so it did not obtain or transmit data from that orbit. The left two images in the bottom row were obtained by NASA's Voyager 2 spacecraft in 1979; the remaining images were obtained by the Solid State Imaging (SSI) system on NASA's Galileo spacecraft in 1996.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  6. Communication system analysis for manned space flight

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1977-01-01

    One- and two-dimensional adaptive delta modulator (ADM) algorithms are discussed and compared. Results are shown for bit rates of two bits/pixel, one bit/pixel and 0.5 bits/pixel. Pictures showing the difference between the encoded-decoded pictures and the original pictures are presented. The effect of channel errors on the reconstructed picture is illustrated. A two-dimensional ADM using interframe encoding is also presented. This system operates at the rate of two bits/pixel and produces excellent quality pictures when there is little motion. The effect of large amounts of motion on the reconstructed picture is described.
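
    A minimal Python sketch of the general one-dimensional ADM idea referenced above: one bit per pixel, with the step size growing when successive bits agree and shrinking when they alternate. This is the generic textbook form, not Schilling's specific algorithm, and the constants are arbitrary.

    ```python
    # Minimal 1-bit/pixel adaptive delta modulation (ADM) sketch -- generic
    # textbook form, not the specific algorithm from the report.
    def adm_encode(samples, step=4.0, grow=1.5, shrink=0.66):
        bits, recon, estimate = [], [], 0.0
        prev_bit = 1
        for x in samples:
            bit = 1 if x >= estimate else 0          # 1 bit per pixel
            step = step * grow if bit == prev_bit else step * shrink
            estimate += step if bit else -step       # decoder tracks the same estimate
            bits.append(bit)
            recon.append(estimate)
            prev_bit = bit
        return bits, recon

    line = [10, 12, 15, 40, 90, 120, 118, 117, 60, 20]   # one scan line of pixel values
    bits, recon = adm_encode(line)
    print(bits)
    print([round(r, 1) for r in recon])
    ```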

  7. High-speed massively parallel scanning

    DOEpatents

    Decker, Derek E. [Byron, CA]

    2010-07-06

    A new technique for recording a series of images of a high-speed event (such as, but not limited to, ballistics, explosives, or laser-induced changes in materials) is presented. The technique makes use of a lenslet array to take image picture elements (pixels) and concentrate light from each pixel into a spot that is much smaller than the pixel. This array of spots illuminates a detector region (e.g., film, in one embodiment) which is scanned transverse to the light, creating tracks of exposed regions. Each track is a time history of the light intensity for a single pixel. By appropriately configuring the array of concentrated spots with respect to the scanning direction of the detection material, different tracks fit between pixels and sufficient track lengths are possible, which is of interest in several high-speed imaging applications.

  8. Maskless lithography

    DOEpatents

    Sweatt, William C.; Stulen, Richard H.

    1999-01-01

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides.

  9. Maskless lithography

    DOEpatents

    Sweatt, W.C.; Stulen, R.H.

    1999-02-09

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides. 12 figs.

  10. Method for maskless lithography

    DOEpatents

    Sweatt, William C.; Stulen, Richard H.

    2000-01-01

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides.

  11. Remote sensing: Physical principles, sensors and products, and the LANDSAT

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Steffen, C. A.; Lorenzzetti, J. A.; Stech, J. L.; Desouza, R. C. M.

    1981-01-01

    Techniques of data acquisition by remote sensing are introduced in this teaching aid. The properties of the elements involved (radiant energy, topograph, atmospheric attenuation, surfaces, and sensors) are covered. Radiometers, photography, scanners, and radar are described as well as their products. Aspects of the LANDSAT system examined include the characteristics of the satellite and its orbit, the multispectral band scanner, and the return beam vidicon. Pixels (picture elements), pattern registration, and the characteristics, reception, and processing of LANDSAT imagery are also considered.

  12. Monitoring Colima Volcano, Mexico, using satellite data

    NASA Technical Reports Server (NTRS)

    Abrams, Michael; Glaze, Lori; Sheridan, Michael

    1991-01-01

    The Colima Volcanic Complex at the western end of the Mexican Volcanic Belt is the most active andesitic volcano in Mexico. Short-wavelength infrared data from the Landsat Thematic Mapper satellite were used to determine the temperature and fractional area of radiant picture elements for two January data acquisitions in 1985 and 1986. The 1986 data showed four 28.5 m by 28.5 m pixels (picture elements) whose hot subpixel components had temperatures ranging from 511-774 C and areas of 1.8-13 sq m. The 1985 data had no radiating areas above background temperatures. Ground observations and measurements in November 1985 and February 1986 reported the presence of hot fumaroles at the summit with temperatures of 135-895 C. This study demonstrates the utility of satellite data for monitoring volcanic activity.

  13. Method for maskless lithography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides.

  14. Maskless lithography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweatt, W.C.; Stulen, R.H.

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides. 12 figs.

  15. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
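
    The adapt-per-block idea can be sketched with stand-in codes: predict each pixel from its left neighbor, then choose, for every block of 21 residuals, whichever of three candidate codes is cheapest. The Golomb-Rice lengths and the 2-bit code selector below are illustrative assumptions, not the concatenated codes of the actual compressor.

    ```python
    # Hedged sketch of the adapt-per-block idea: predict each pixel from its
    # neighbor, then pick, for every block of 21 residuals, whichever of three
    # candidate codes is cheapest. The codes are simple stand-ins.
    def zigzag(d):                      # map signed residual to non-negative integer
        return 2 * d if d >= 0 else -2 * d - 1

    def golomb_rice_length(n, k):       # length of a Golomb-Rice codeword with parameter k
        return (n >> k) + 1 + k

    def compress_cost(pixels, block=21, ks=(0, 2, 4)):
        residuals = [pixels[0]] + [pixels[i] - pixels[i - 1] for i in range(1, len(pixels))]
        total_bits = 0
        for start in range(0, len(residuals), block):
            chunk = [zigzag(d) for d in residuals[start:start + block]]
            # choose the cheapest of the three candidate codes for this block
            total_bits += min(sum(golomb_rice_length(n, k) for n in chunk) for k in ks)
            total_bits += 2             # 2 bits to signal which code was chosen
        return total_bits

    line = [100, 101, 101, 103, 108, 110, 109, 109, 112, 130, 131, 131] * 4
    print(compress_cost(line), "bits vs", 8 * len(line), "bits uncoded")
    ```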

  16. A Compact Polarization Imager

    NASA Technical Reports Server (NTRS)

    Thompson, Karl E.; Rust, David M.; Chen, Hua

    1995-01-01

    A new type of image detector has been designed to analyze the polarization of light simultaneously at all picture elements (pixels) in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a custom-designed charge-coupled device with signal-analysis circuitry, all integrated on a silicon chip. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Other applications include environmental monitoring and robot vision. Innovations in the IDID include two interleaved 512 x 1024 pixel imaging arrays (one for each polarization plane), large dynamic range (well depth of 10(exp 6) electrons per pixel), simultaneous readout and display of both images at 10(exp 6) pixels per second, and on-chip analog signal processing to produce polarization maps in real time. When used with a lithium niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can reveal tiny differences between simultaneous images at two wavelengths.
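
    A hedged sketch of the arithmetic behind such a polarization map: given the two interleaved images (one per polarization plane), form the normalized difference per pixel. The IDID performs this with on-chip analog circuitry; the Python below only reproduces the arithmetic on synthetic data.

    ```python
    # Off-chip equivalent of a per-pixel polarization map from two
    # co-registered polarization-plane images (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)
    i_par = rng.integers(100, 1000, size=(512, 1024)).astype(float)   # plane 1
    i_perp = rng.integers(100, 1000, size=(512, 1024)).astype(float)  # plane 2

    stokes_i = i_par + i_perp                                 # total intensity
    pol_map = (i_par - i_perp) / np.maximum(stokes_i, 1e-9)   # fractional polarization

    print(pol_map.shape, float(pol_map.min()), float(pol_map.max()))
    ```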

  17. High-resolution CCD imaging alternatives

    NASA Astrophysics Data System (ADS)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners: the CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an inter-lined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.

  18. The CZCS geolocation algorithms

    NASA Technical Reports Server (NTRS)

    Wilson, W. H.; Smith, R. C.; Nolten, J. W.

    1981-01-01

    The Coastal Zone Color Scanner (CZCS) on board the Nimbus 7 satellite was designed to measure surface radiance upwelled from the ocean in 6 spectral bands. The CZCS spectrometer obtains its information from a rotating mirror and is timed to collect data when the mirror views the Earth surface between approximately 40 degrees to the left and right of the subsatellite track. Each scan is divided into 1968 picture elements (pixels) of 0.04 degrees each. In order to avoid directly reflected Sun glint, the rotating mirror shaft can be tilted so that the scan across the subsatellite track is directed up to 20 degrees forward or aft of the point directly beneath the satellite. The CZCS is the first satellite-borne instrument to have this tilted-scan capability and therefore poses some new problems in locating the Earth-surface position of viewed pixels.
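
    The scan geometry quoted above is easy to check numerically: 1968 pixels at 0.04 degrees each span about 78.7 degrees, i.e. roughly ±40 degrees about the subsatellite track. The sketch below converts a pixel index to its cross-track scan angle; it is a back-of-the-envelope aid based only on the figures in the abstract, not the CZCS geolocation algorithm itself.

    ```python
    # Cross-track scan angle per pixel, from the numbers quoted in the abstract:
    # 1968 pixels per scan, 0.04 degrees per pixel, mirror tilt handled separately.
    PIXELS_PER_SCAN = 1968
    DEG_PER_PIXEL = 0.04

    def scan_angle(pixel_index):
        """Cross-track angle of a pixel, negative left of track, positive right."""
        center = (PIXELS_PER_SCAN - 1) / 2.0
        return (pixel_index - center) * DEG_PER_PIXEL

    print(scan_angle(0), scan_angle(983), scan_angle(1967))   # ~ -39.3, ~0, ~ +39.3
    print(PIXELS_PER_SCAN * DEG_PER_PIXEL, "degrees total swath")
    ```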

  19. Tradeoff between picture element dimensions and noncoherent averaging in side-looking airborne radar

    NASA Technical Reports Server (NTRS)

    Moore, R. K.

    1979-01-01

    An experiment was performed in which three synthetic-aperture images and one real-aperture image were successively degraded in spatial resolution, both retaining the same number of independent samples per pixel and using the spatial degradation to allow averaging of different numbers of independent samples within each pixel. The original and degraded images were provided to three interpreters familiar with both aerial photographs and radar images. The interpreters were asked to grade each image in terms of their ability to interpret various specified features on the image. The numerical interpretability grades were then used as a quantitative measure of the utility of the different kinds of image processing and different resolutions. The experiment demonstrated empirically that the interpretability is related exponentially to the SGL volume which is the product of azimuth, range, and gray-level resolution.

  20. Limited Area Coverage/High Resolution Picture Transmission (LAC/HRPT) tape IJ grid pixel extraction processor user's manual

    NASA Technical Reports Server (NTRS)

    Obrien, S. O. (Principal Investigator)

    1980-01-01

    The program, LACREG, extracts all pixels contained in a specific IJ grid section. The pixels, along with a header record, are stored in a disk file defined by the user. The program will extract up to 99 IJ grid sections.
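
    A hedged sketch of the extraction step only (the LACREG tape and header formats are not described here): keep every pixel whose IJ grid coordinates fall inside the requested section and attach a small header record. Function names and the header layout are assumptions.

    ```python
    # Illustrative IJ-grid section extraction with a minimal header record.
    import numpy as np

    def extract_ij_section(pixels, i_range, j_range):
        """pixels: array of shape (H, W); i_range/j_range: inclusive (lo, hi)."""
        i_lo, i_hi = i_range
        j_lo, j_hi = j_range
        section = pixels[i_lo:i_hi + 1, j_lo:j_hi + 1]
        header = {"i_range": i_range, "j_range": j_range, "shape": section.shape}
        return header, section

    image = np.arange(100, dtype=np.uint8).reshape(10, 10)
    header, section = extract_ij_section(image, (2, 4), (5, 7))
    print(header)
    print(section)
    ```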

  1. 3D motion picture of transparent gas flow by parallel phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro; Fukuda, Takahito; Wang, Yexin; Xia, Peng; Kakue, Takashi; Nishio, Kenzo; Matoba, Osamu

    2018-03-01

    Parallel phase-shifting digital holography is a technique capable of quantitatively recording a three-dimensional (3D) motion picture of a dynamic object. This technique records a single hologram of an object with an image sensor having a phase-shift array device and reconstructs the instantaneous 3D image of the object with a computer. In this technique, the multiple holograms required for phase-shifting digital holography are multiplexed pixel by pixel into a single hologram using a space-division multiplexing technique. We demonstrate a 3D motion picture of dynamic and transparent gas flow recorded and reconstructed by the technique. A compressed-air duster was used to generate the gas flow. A motion picture of the hologram of the gas flow was recorded at 180,000 frames/s by parallel phase-shifting digital holography. The phase motion picture of the gas flow was reconstructed from the motion picture of the hologram. The Abel inversion was applied to the phase motion picture, and then the 3D motion picture of the gas flow was obtained.
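
    The pixel-by-pixel multiplexing can be illustrated with standard four-step phase-shifting arithmetic. The sketch below assumes a 2 x 2 interleaving of the four phase shifts (0, π/2, π, 3π/2), which is a common layout but not necessarily the authors' exact one, and recovers the object phase with the textbook four-step formula.

    ```python
    # Hedged sketch: demultiplex a space-division-multiplexed hologram (assumed
    # 2x2 interleave of phase shifts 0, pi/2, pi, 3pi/2) and apply the standard
    # 4-step phase-shifting formula. Not the authors' exact pipeline.
    import numpy as np

    def demultiplex(hologram):
        """Split one multiplexed frame into four phase-shifted sub-holograms."""
        i0 = hologram[0::2, 0::2]   # phase shift 0
        i1 = hologram[0::2, 1::2]   # phase shift pi/2
        i2 = hologram[1::2, 0::2]   # phase shift pi
        i3 = hologram[1::2, 1::2]   # phase shift 3*pi/2
        return i0, i1, i2, i3

    def four_step_phase(i0, i1, i2, i3):
        """Object phase from the four phase-shifted intensities."""
        return np.arctan2(i1 - i3, i0 - i2)

    frame = np.random.default_rng(1).random((512, 512))   # stand-in for a recorded hologram
    phase = four_step_phase(*demultiplex(frame))
    print(phase.shape, float(phase.min()), float(phase.max()))
    ```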

  2. Ground-based Nighttime Cloud Detection Using a Commercial Digital Camera: Observations at Manila Observatory (14.64N, 121.07E)

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Tan, F.; Antioquia, C. T.; Lagrosas, N.

    2014-12-01

    Cloud detection during nighttime poses a real problem to researchers because of a lack of optimum sensors that can specifically detect clouds during this time of day. Hence, lidars and satellites are currently some of the instruments being utilized to determine cloud presence in the atmosphere. These clouds play a significant role in the night weather system because they serve as barriers to thermal radiation from the Earth, reflecting this radiation back to the surface. This effectively lowers the rate of temperature decrease in the atmosphere at night. The objective of this study is to detect cloud occurrences at nighttime for the purpose of studying patterns of cloud occurrence and the effects of clouds on local weather. In this study, a commercial camera (Canon Powershot A2300) is operated continuously to capture nighttime clouds. The camera is situated inside a weather-proof box with a glass cover and is placed on the rooftop of the Manila Observatory building to gather pictures of the sky every 5 min and observe cloud dynamics and evolution in the atmosphere. To detect pixels with clouds, the pictures are converted from their native JPEG format to grayscale. The pixels are then screened for clouds by comparing the values of pixels with and without clouds. In grayscale format, pixels with clouds have greater values than pixels without clouds. Based on the observations, a threshold of 0.34 of the maximum pixel value is enough to discern pixels with clouds from pixels without clouds. Figs. 1a and 1b are sample unprocessed pictures of cloudless (May 22-23, 2014) and cloudy skies (May 23-24, 2014), respectively. Figs. 1c and 1d show the percentage of occurrence of nighttime clouds on May 22-23 and May 23-24, 2014, respectively. The cloud occurrence in a pixel is defined as the ratio of the number of times the pixel has clouds to the total number of observations. Fig. 1c shows less than 50% cloud occurrence, while Fig. 1d shows greater cloud occurrence than Fig. 1c. These graphs show the capability of the camera to detect and measure cloud occurrence at nighttime. Continuous collection of nighttime pictures is currently being implemented. In regions where there is a dearth of scientific data, the measured nighttime cloud occurrence will serve as a baseline for future cloud studies in this part of the world.
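
    The detection rule described above reduces to a per-pixel threshold and an occurrence ratio, sketched below in Python with synthetic frames standing in for the grayscale-converted photographs; the 0.34 factor is the value quoted in the abstract.

    ```python
    # Per-pixel cloud mask at 0.34 of the frame maximum, plus the per-pixel
    # occurrence ratio over many frames. Synthetic frames replace real photos.
    import numpy as np

    THRESHOLD_FRACTION = 0.34

    def cloud_mask(gray_frame):
        """gray_frame: 2-D array of grayscale values from one night-sky picture."""
        return gray_frame > THRESHOLD_FRACTION * gray_frame.max()

    def occurrence_map(frames):
        """Fraction of frames in which each pixel was flagged as cloudy."""
        masks = np.stack([cloud_mask(f) for f in frames])
        return masks.mean(axis=0)

    rng = np.random.default_rng(2)
    frames = [rng.integers(0, 255, size=(120, 160)).astype(float) for _ in range(20)]
    occ = occurrence_map(frames)
    print(occ.shape, float(occ.min()), float(occ.max()))
    ```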

  3. Automated thematic mapping and change detection of ERTS-A images

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. In the first part of the investigation, spatial and spectral features were developed and employed to automatically recognize terrain features through a clustering algorithm. In this part of the investigation, the size of the cell, which is the number of digital picture elements used for computing the spatial and spectral features, was varied. It was determined that the accuracy of terrain recognition decreases slowly as the cell size is reduced, coinciding with increased cluster diffuseness. It was also proven that a cell size of 17 x 17 pixels, when used with the clustering algorithm, results in high recognition rates for major terrain classes. ERTS-1 data from five diverse geographic regions of the United States were processed through the clustering algorithm with 17 x 17 pixel cells. Simple land use maps were produced, and the average terrain recognition accuracy was 82 percent.

  4. Precise color images: a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in a large field of science and engineering. Although high-speed camera systems have been improved to high performance, most of their applications are only to get high-speed motion pictures. However, in some fields of science and technology, it is useful to get other information as well, such as the temperature of combustion flames, thermal plasma, and molten materials. Recent digital high-speed video imaging technology should be able to get such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels with 256 (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to get precise color images from this camera system, we need to develop a digital technique, which consists of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels by this method.

  5. Viele Tests - viele Fehler?

    NASA Astrophysics Data System (ADS)

    Tabelow, Karsten

    Imaging methods have secured a firm place in medicine in recent years and have revolutionized medical research and diagnostics. They give physicians and researchers a view into living tissue. With advancing technical development, these methods deliver ever higher resolutions, sharper images, and more detail. Imaging methods are inconceivable without mathematics, from image reconstruction out of the measured signals to the evaluation of the image information. For the analysis of the large number of image data points (voxels, volume elements, in contrast to the two-dimensional pixels, picture elements), methods of mathematical statistics in particular are frequently required. Random errors in the measurement manifest themselves as image noise; the images appear blurred and degraded. This makes diagnostic decisions more difficult.

  6. Cameras for digital microscopy.

    PubMed

    Spring, Kenneth R

    2013-01-01

    This chapter reviews the fundamental characteristics of charge-coupled devices (CCDs) and related detectors, outlines the relevant parameters for their use in microscopy, and considers promising recent developments in the technology of detectors. Electronic imaging with a CCD involves three stages--interaction of a photon with the photosensitive surface, storage of the liberated charge, and readout or measurement of the stored charge. The most demanding applications in fluorescence microscopy may require as much as four orders of magnitude greater sensitivity. The image in the present-day light microscope is usually acquired with a CCD camera. The CCD is composed of a large matrix of photosensitive elements (often referred to as "pixels," shorthand for picture elements), which simultaneously capture an image over the entire detector surface. The light-intensity information for each pixel is stored as electronic charge and is converted to an analog voltage by a readout amplifier. This analog voltage is subsequently converted to a numerical value by a digitizer situated on the CCD chip, or very close to it. Several (three to six) amplifiers are required for each pixel, and to date, uniform images with a homogeneous background have been a problem because of the inherent difficulties of balancing the gain in all of the amplifiers. Complementary metal oxide semiconductor sensors also exhibit relatively high noise associated with the requisite high-speed switching. Both of these deficiencies are being addressed, and sensor performance is nearing that required for scientific imaging. Copyright © 1998 Elsevier Inc. All rights reserved.

  7. An analysis of haze effects on LANDSAT multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Johnson, W. R.; Sestak, M. L. (Principal Investigator)

    1981-01-01

    Early season changes in optical depth change brightness, primarily along the soil line; and during crop development, changes in optical depth change both greenness and brightness. Thus, the existence of haze in the imagery could cause an unsuspecting analyst to interpret the spectral appearance as indicating an episodal event when, in fact, haze was present. The techniques for converting LANDSAT-3 data to simulate LANDSAT-2 data are in error; the yellowness and none-such computations are affected primarily. Yellowness appears well correlated to optical depth. Experimental evidence with variable background and variable optical depth is needed, however. The variance of picture elements within a spring wheat field is related to its equivalent in optical depth changes caused by haze. This establishes the sensitivity of channel 1 (greenness) pixels to changes in haze levels. The between-field picture element means and variances were determined for the spring wheat fields. This shows the variability of channel data on two specific dates, emphasizing that crop development can be influenced by many factors. The atmospheric correction program ATCOR reduces segment data from LANDSAT acquisitions to a common haze level and improves the results of analysis.

  8. View of Callisto at Increasing Resolutions

    NASA Technical Reports Server (NTRS)

    1998-01-01

    These four views of Jupiter's second largest moon, Callisto, highlight how increasing resolutions enable interpretation of the surface. In the global view (top left) the surface is seen to have many small bright spots, while the regional view (top right) reveals the spots to be the larger craters. The local view (bottom right) not only brings out smaller craters and detailed structure of larger craters, but also shows a smooth dark layer of material that appears to cover much of the surface. The close-up frame (bottom left) presents a surprising smoothness in this highest resolution (30 meters per picture element) view of Callisto's surface.

    North is to the top of these frames which were taken by the Solid State Imaging (SSI) system on NASA's Galileo spacecraft between November 1996 and November 1997. Even higher resolution images (better than 20 meters per picture element) of Callisto will be taken on June 30, 1999 during the 21st orbit of the spacecraft around Jupiter.

    The top left frame is scaled to 10 kilometers (km) per picture element (pixel) and covers an area about 4400 by 2500 km. The moon Callisto, which has a diameter of 4806 km, appears to be peppered with many bright spots. Images at this resolution of other cratered moons in the Solar System indicate that the bright spots could be impact craters. The ring structure of Valhalla, the largest impact structure on Callisto, is visible in the center of the frame. This color view combines images obtained in November 1997 taken through the green, violet, and 1 micrometer filters of the SSI system.

    The top right frame is ten times higher resolution (about 1 km per pixel) and covers an area approximately 440 by 250 km. Craters, which are clearly recognizable, appear to be the dominant landform on Callisto. The crater rims appear bright, while the adjacent area and the crater interiors are dark. This resolution is comparable to the best data available from the 1979 flybys of NASA's two Voyager spacecraft; it reflects the understanding of Callisto prior to new data from Galileo. This Galileo image was taken in November 1996.

    The resolution of the bottom right image is again ten times better (100 meters per pixel), covering an area of about 44 by 25 km. This resolution reveals that some crater rims are not complete rings, but are composed of bright isolated segments. Steep slopes near crater rims show dark material that appears to have slid down, exposing bright material. The thickness of the dark layer could be tens of meters. The image was taken in June 1997.

    The bottom left image at about 29 meters per pixel is the highest resolution available for Callisto. It covers an area about 4.4 by 2.5 km and is somewhat oblique. Craters are visible but no longer dominate the surface. The image was taken in November 1996.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  9. A neural net based architecture for the segmentation of mixed gray-level and binary pictures

    NASA Technical Reports Server (NTRS)

    Tabatabai, Ali; Troudet, Terry P.

    1991-01-01

    A neural-net-based architecture is proposed to perform segmentation in real time for mixed gray-level and binary pictures. In this approach, the composite picture is divided into 16 x 16 pixel blocks, which are identified as character blocks or image blocks on the basis of a dichotomy measure computed by an adaptive 16 x 16 neural net. For compression purposes, each image block is further divided into 4 x 4 subblocks; a one-bit nonparametric quantizer is used to encode 16 x 16 character and 4 x 4 image blocks; and the binary map and quantizer levels are obtained through a neural net segmentor over each block. The efficiency of the neural segmentation in terms of computational speed, data compression, and quality of the compressed picture is demonstrated. The effect of weight quantization is also discussed. VLSI implementations of such adaptive neural nets in CMOS technology are described and simulated in real time for a maximum block size of 256 pixels.
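
    The one-bit quantization step can be sketched in a block-truncation style: threshold each block at its mean and keep a binary map plus two reconstruction levels. In the paper the map and levels come from a neural network; the plain arithmetic below is only a stand-in for that step.

    ```python
    # Hedged sketch of a one-bit block quantizer (block-truncation style),
    # standing in for the neural-net segmentor described in the abstract.
    import numpy as np

    def one_bit_quantize(block):
        mask = block >= block.mean()                 # binary map for the block
        lo = block[~mask].mean() if (~mask).any() else block.mean()
        hi = block[mask].mean() if mask.any() else block.mean()
        return mask, lo, hi

    def encode(image, block=4):
        h, w = image.shape
        out = np.empty_like(image, dtype=float)
        for y in range(0, h, block):
            for x in range(0, w, block):
                mask, lo, hi = one_bit_quantize(image[y:y + block, x:x + block])
                out[y:y + block, x:x + block] = np.where(mask, hi, lo)
        return out

    img = np.random.default_rng(3).integers(0, 255, size=(16, 16)).astype(float)
    print(np.abs(encode(img) - img).mean(), "mean absolute error at 1 bit/pixel + levels")
    ```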

  10. Low complexity pixel-based halftone detection

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Han, Seong Wook; Jarno, Mielikainen; Lee, Chulhee

    2011-10-01

    With the rapid advances of the internet and other multimedia technologies, the digital document market has been growing steadily. Since most digital images use halftone technologies, quality degradation occurs when one tries to scan and reprint them. Therefore, it is necessary to extract the halftone areas to produce high-quality printing. In this paper, we propose a low-complexity pixel-based halftone detection algorithm. For each pixel, we considered a surrounding block. If the block contained any flat background regions, text, thin lines, or continuous or non-homogeneous regions, the pixel was classified as a non-halftone pixel. After excluding those non-halftone pixels, the remaining pixels were considered to be halftone pixels. Finally, documents were classified as picture or photo documents by calculating the halftone pixel ratio. The proposed algorithm proved to be memory-efficient and required low computational cost, and it was easily implemented on a GPU.
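
    A hedged sketch of the decision flow: examine a block around each pixel, rule the pixel out if the block looks flat or edge-like, count the rest as halftone, and classify the page by the halftone pixel ratio. The concrete tests below (variance for flatness, gradient magnitude for text and thin lines) are stand-ins for the paper's criteria.

    ```python
    # Illustrative per-pixel halftone screening and page-level halftone ratio.
    # The flatness and edge tests are assumptions, not the paper's exact rules.
    import numpy as np

    def halftone_ratio(gray, block=8, flat_var=20.0, edge_grad=60.0):
        h, w = gray.shape
        halftone = np.zeros((h, w), dtype=bool)
        pad = block // 2
        gy, gx = np.gradient(gray)
        grad = np.hypot(gy, gx)
        for y in range(pad, h - pad):
            for x in range(pad, w - pad):
                nb = gray[y - pad:y + pad, x - pad:x + pad]
                if nb.var() < flat_var:        # flat background -> not halftone
                    continue
                if grad[y, x] > edge_grad:     # strong edge (text, thin line) -> not halftone
                    continue
                halftone[y, x] = True
        return halftone.mean()                 # fraction of pixels judged to be halftone

    page = np.random.default_rng(4).integers(0, 255, size=(64, 64)).astype(float)
    print("halftone pixel ratio:", round(halftone_ratio(page), 3))
    ```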

  11. Using pixel intensity as a self-regulating threshold for deterministic image sampling in Milano Retinex: the T-Rex algorithm

    NASA Astrophysics Data System (ADS)

    Lecca, Michela; Modena, Carla Maria; Rizzi, Alessandro

    2018-01-01

    Milano Retinexes are spatial color algorithms, part of the Retinex family, usually employed for image enhancement. They modify the color of each pixel taking into account the surrounding colors and their positions, catching in this way the local spatial color distribution relevant to image enhancement. We present T-Rex (from the words threshold and Retinex), an implementation of Milano Retinex whose main novelty is the use of the pixel intensity as a self-regulating threshold to deterministically sample local color information. The experiments, carried out on real-world pictures, show that T-Rex image enhancement performance is in line with that of the Milano Retinex family: T-Rex increases the brightness, the contrast, and the flatness of the channel distributions of the input image, making the content of pictures acquired under difficult light conditions more intelligible.

  12. Wavefront sensing in space: flight demonstration II of the PICTURE sounding rocket payload

    NASA Astrophysics Data System (ADS)

    Douglas, Ewan S.; Mendillo, Christopher B.; Cook, Timothy A.; Cahoy, Kerri L.; Chakrabarti, Supriya

    2018-01-01

    A NASA sounding rocket for high-contrast imaging with a visible nulling coronagraph, the Planet Imaging Concept Testbed Using a Rocket Experiment (PICTURE) payload, has made two suborbital attempts to observe the warm dust disk inferred around Epsilon Eridani. The first flight in 2011 demonstrated a 5 mas fine pointing system in space. The reduced flight data from the second launch, on November 25, 2015, presented herein, demonstrate active sensing of wavefront phase in space. Despite several anomalies in flight, post-facto reduction of the phase-stepping interferometer data provides insight into the wavefront sensing precision and the system stability for a portion of the pupil. These measurements show the actuation of a 32 × 32-actuator microelectromechanical system deformable mirror. The wavefront sensor reached a median precision of 1.4 nm per pixel, with 95% of samples between 0.8 and 12.0 nm per pixel. The median system stability, including telescope and coronagraph wavefront errors other than tip, tilt, and piston, was 3.6 nm per pixel, with 95% of samples between 1.2 and 23.7 nm per pixel.

  13. Digital Simulation Of Precise Sensor Degradations Including Non-Linearities And Shift Variance

    NASA Astrophysics Data System (ADS)

    Kornfeld, Gertrude H.

    1987-09-01

    Realistic atmospheric and Forward Looking Infrared Radiometer (FLIR) degradations were digitally simulated. Inputs to the routine are environmental observables and the FLIR specifications. It was possible to achieve realism in the thermal domain within acceptable computer time and random access memory (RAM) requirements because a shift variant recursive convolution algorithm that well describes thermal properties was invented and because each picture element (pixel) has radiative temperature, a materials parameter and range and altitude information. The computer generation steps start with the image synthesis of an undegraded scene. Atmospheric and sensor degradation follow. The final result is a realistic representation of an image seen on the display of a specific FLIR.

  14. Aerial projection of three-dimensional motion pictures by electro-holography and parabolic mirrors.

    PubMed

    Kakue, Takashi; Nishitsuji, Takashi; Kawashima, Tetsuya; Suzuki, Keisuke; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2015-07-08

    We demonstrate an aerial projection system for reconstructing 3D motion pictures based on holography. The system consists of an optical source, a spatial light modulator corresponding to a display and two parabolic mirrors. The spatial light modulator displays holograms calculated by computer and can reconstruct holographic motion pictures near the surface of the modulator. The two parabolic mirrors can project floating 3D images of the motion pictures formed by the spatial light modulator without mechanical scanning or rotating. In this demonstration, we used a phase-modulation-type spatial light modulator. The number of pixels and the pixel pitch of the modulator were 1,080 × 1,920 and 8.0 μm × 8.0 μm, respectively. The diameter, the height and the focal length of each parabolic mirror were 288 mm, 55 mm and 100 mm, respectively. We succeeded in aerially projecting 3D motion pictures of size ~2.5 mm(3) by this system constructed by the modulator and mirrors. In addition, by applying a fast computational algorithm for holograms, we achieved hologram calculations at ~12 ms per hologram with 4 CPU cores.

  15. Aerial projection of three-dimensional motion pictures by electro-holography and parabolic mirrors

    PubMed Central

    Kakue, Takashi; Nishitsuji, Takashi; Kawashima, Tetsuya; Suzuki, Keisuke; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2015-01-01

    We demonstrate an aerial projection system for reconstructing 3D motion pictures based on holography. The system consists of an optical source, a spatial light modulator corresponding to a display and two parabolic mirrors. The spatial light modulator displays holograms calculated by computer and can reconstruct holographic motion pictures near the surface of the modulator. The two parabolic mirrors can project floating 3D images of the motion pictures formed by the spatial light modulator without mechanical scanning or rotating. In this demonstration, we used a phase-modulation-type spatial light modulator. The number of pixels and the pixel pitch of the modulator were 1,080 × 1,920 and 8.0 μm × 8.0 μm, respectively. The diameter, the height and the focal length of each parabolic mirror were 288 mm, 55 mm and 100 mm, respectively. We succeeded in aerially projecting 3D motion pictures of size ~2.5 mm3 by this system constructed by the modulator and mirrors. In addition, by applying a fast computational algorithm for holograms, we achieved hologram calculations at ~12 ms per hologram with 4 CPU cores. PMID:26152453

  16. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected, and classification parameters are optimized for minimum false-positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
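
    Pixel duplication itself is a one-line operation; the sketch below enlarges a 2 x 2 patch by a factor of two and, unlike interpolation, introduces no new gray values, which is the property the comparison above relies on.

    ```python
    # Pixel-duplication enlargement: every pixel is repeated along both axes,
    # so no new intensity values are invented (contrast with bilinear resampling).
    import numpy as np

    def duplicate(image, factor=2):
        return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

    img = np.array([[10, 200],
                    [40, 90]], dtype=np.uint8)
    print(duplicate(img, 2))
    ```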

  17. Color visualization for fluid flow prediction

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Speray, D. E.

    1982-01-01

    High-resolution raster scan color graphics allow variables to be presented as a continuum, in a color-coded picture that is referenced to a geometry such as a flow field grid or a boundary surface. Software is used to map a scalar variable, such as pressure or temperature, defined on a two-dimensional slice of a flow field. The geometric shape is preserved in the resulting picture, and the relative magnitude of the variable is color-coded onto the geometric shape. The primary numerical process for color coding is an efficient search along a raster scan line to locate the quadrilateral block in the grid that bounds each pixel on the line. Tension spline interpolation is performed relative to the grid for specific values of the scalar variable, which is then color-coded. When all pixels for the field of view are color-defined, a picture is played back from a memory device onto a television screen.

  18. Effect of ambiguities on SAR picture quality

    NASA Technical Reports Server (NTRS)

    Korwar, V. N.; Lipes, R. G.

    1978-01-01

    The degradation of picture quality in a high-resolution, large-swath SAR mapping system caused by speckle, additive white Gaussian noise and range and azimuthal ambiguities occurring because of the nonfinite antenna pattern produced by a square aperture antenna was studied and simulated. The effect of the azimuth antenna pattern was accounted for by calculating the azimuth ambiguity function. Range ambiguities were accounted for by adding, to each pixel of interest, appropriate pixels at a range separation corresponding to one pulse repetition period, but attenuated by the antenna pattern. It is concluded that azimuth ambiguities do not cause any noticeable degradation (for large time bandwidth product systems, at least) but range ambiguities might.

  19. Stochastic Resonance In Visual Perception

    NASA Astrophysics Data System (ADS)

    Simonotto, Enrico

    1996-03-01

    Stochastic resonance (SR) is a well established physical phenomenon wherein some measure of the coherence of a weak signal can be optimized by random fluctuations, or "noise" (K. Wiesenfeld and F. Moss, Nature, 373, 33 (1995)). In all experiments to date the coherence has been measured using numerical analysis of the data, for example, signal-to-noise ratios obtained from power spectra. But can this analysis be replaced by a perceptive task? Previously we had demonstrated this possibility with a numerical model of perceptual bistability applied to the interpretation of ambiguous figures (M. Riani and E. Simonotto, Phys. Rev. Lett., 72, 3120 (1994)). Here I describe an experiment wherein SR is detected in visual perception. A recognizable grayscale photograph was digitized and presented. The picture was then placed beneath a threshold. Every pixel for which the grayscale exceeded the threshold was painted white, and all others black. For a large enough threshold, the picture is unrecognizable, but the addition of a random number to every pixel renders it interpretable (C. Seife and M. Roberts, The Economist, 336, 59, July 29 (1995)). However, the addition of dynamical noise to the pixels much enhances an observer's ability to interpret the picture. Here I report the results of psychophysics experiments wherein the effects of both the intensity of the noise and its correlation time were studied.
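
    The stimulus construction described above can be reproduced in a few lines: threshold a grayscale image that lies mostly below the threshold, then add independent noise to every pixel before thresholding. The synthetic pattern below stands in for the photograph; with noise, the white pixels start to trace the underlying structure.

    ```python
    # Threshold-only versus threshold-plus-noise rendering of a faint pattern,
    # with a synthetic image standing in for the digitized photograph.
    import numpy as np

    rng = np.random.default_rng(5)
    x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    photo = 255 * (0.4 + 0.1 * np.sin(8 * np.pi * x) * np.sin(8 * np.pi * y))  # faint pattern

    threshold = 200.0                                   # set above most of the image
    plain = photo > threshold                           # nearly all black: unrecognizable
    noisy = (photo + rng.normal(0, 40, photo.shape)) > threshold   # pattern leaks through

    print("white pixels without noise:", int(plain.sum()))
    print("white pixels with noise:   ", int(noisy.sum()))
    ```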

  20. An Integrated Imaging Detector of Polarization and Spectral Content

    NASA Technical Reports Server (NTRS)

    Rust, D. M.; Thompson, K. E.

    1993-01-01

    A new type of image detector has been designed to simultaneously analyze the polarization of light at all picture elements in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a charge-coupled device (CCD), with signal-analysis circuitry and analog-to-digital converters, all integrated on a silicon chip. It should be capable of 1:10(exp 4) polarization discrimination. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Innovations in the IDID include (1) two interleaved 512 x 1024-pixel imaging arrays (one for each polarization plane); (2) large dynamic range (well depth of 10(exp 6) electrons per pixel); (3) simultaneous readout of both images at 10 million pixels per second each; (4) on-chip analog signal processing to produce polarization maps in real time; (5) on-chip 10-bit A/D conversion. When used with a lithium-niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can collect and analyze simultaneous images at two wavelengths. Precise photometric analysis of molecular or atomic concentrations in the atmosphere is one suggested application. When used in a solar telescope, the IDID will chart the polarization, which can then be converted to maps of the vector magnetic fields on the solar surface.

  1. Full complex spatial filtering with a phase mostly DMD. [Deformable Mirror Device

    NASA Technical Reports Server (NTRS)

    Florence, James M.; Juday, Richard D.

    1991-01-01

    A new technique for implementing fully complex spatial filters with a phase mostly deformable mirror device (DMD) light modulator is described. The technique combines two or more phase-modulating flexure-beam mirror elements into a single macro-pixel. By manipulating the relative phases of the individual sub-pixels within the macro-pixel, the amplitude and the phase can be independently set for this filtering element. The combination of DMD sub-pixels into a macro-pixel is accomplished by adjusting the optical system resolution, thereby trading off system space bandwidth product for increased filtering flexibility. Volume in the larger dimensioned space, space bandwidth-complex axes count, is conserved. Experimental results are presented mapping out the coupled amplitude and phase characteristics of the individual flexure-beam DMD elements and demonstrating the independent control of amplitude and phase in a combined macro-pixel. This technique is generally applicable for implementation with any type of phase modulating light modulator.
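
    The macro-pixel arithmetic follows from summing two unit phasors: the phase difference of the sub-pixels sets the amplitude of the sum, and their mean phase sets its phase. The sketch below assumes ideal, equal-amplitude, phase-only sub-pixels, which ignores the coupled amplitude-phase behavior measured in the paper.

    ```python
    # Idealized two-sub-pixel macro-pixel: e^{i*phi1} + e^{i*phi2} has amplitude
    # 2*cos((phi1 - phi2)/2) and phase (phi1 + phi2)/2.
    import cmath
    import math

    def macro_pixel(phi1, phi2):
        """Coherent sum of two equal-amplitude, phase-only sub-pixels."""
        total = cmath.exp(1j * phi1) + cmath.exp(1j * phi2)
        return abs(total), cmath.phase(total)

    def solve(amplitude, phase):
        """Sub-pixel phases that realize a target amplitude (0..2) and phase."""
        delta = 2.0 * math.acos(amplitude / 2.0)     # relative phase sets amplitude
        return phase + delta / 2.0, phase - delta / 2.0

    phi1, phi2 = solve(1.2, 0.7)
    print(macro_pixel(phi1, phi2))                   # approximately (1.2, 0.7)
    ```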

  2. Spatial light modulator array with heat minimization and image enhancement features

    DOEpatents

    Jain, Kanti [Briarcliff Manor, NY]; Sweatt, William C. [Albuquerque, NM]; Zemel, Marc [New Rochelle, NY]

    2007-01-30

    An enhanced spatial light modulator (ESLM) array, a microelectronics patterning system and a projection display system using such an ESLM for heat-minimization and resolution enhancement during imaging, and the method for fabricating such an ESLM array. The ESLM array includes, in each individual pixel element, a small pixel mirror (reflective region) and a much larger pixel surround. Each pixel surround includes diffraction-grating regions and resolution-enhancement regions. During imaging, a selected pixel mirror reflects a selected-pixel beamlet into the capture angle of a projection lens, while the diffraction grating of the pixel surround redirects heat-producing unused radiation away from the projection lens. The resolution-enhancement regions of selected pixels provide phase shifts that increase effective modulation-transfer function in imaging. All of the non-selected pixel surrounds redirect all radiation energy away from the projection lens. All elements of the ESLM are fabricated by deposition, patterning, etching and other microelectronic process technologies.

  3. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2003-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
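
    A digital counterpart of that signal chain, as a sketch: the x and y centroids are inner products of the pixel signals with the column and row index vectors, divided by the total intensity (the role of the divider circuit). The windowing and analog switching are omitted.

    ```python
    # Intensity-weighted centroid of a pixel window: inner products with the
    # column/row index vectors, then a divide by the summed intensity.
    import numpy as np

    def centroid(window):
        rows, cols = np.indices(window.shape)
        total = window.sum()
        x = (window * cols).sum() / total      # inner product, then divide
        y = (window * rows).sum() / total
        return x, y

    spot = np.zeros((9, 9))
    spot[3:6, 5:8] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # a small bright spot
    print(centroid(spot))                                # approximately (6.0, 4.0)
    ```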

  4. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2004-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the image array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.

  5. Effects of picture amount on preference, balance, and dynamic feel of Web pages.

    PubMed

    Chiang, Shu-Ying; Chen, Chien-Hsiung

    2012-04-01

    This study investigates the effects of picture amount on subjective evaluation. The experiment herein adopted two variables to define picture amount: column ratio and picture size. Six column ratios were employed: 7:93, 15:85, 24:76, 33:67, 41:59, and 50:50. Five picture sizes were examined: 140 x 81, 220 x 127, 300 x 173, 380 x 219, and 460 x 266 pixels. The experiment implemented a within-subject design; 104 participants were asked to evaluate 30 web page layouts. Repeated measurements revealed that the column ratio and picture size have significant effects on preference, balance, and dynamic feel. The results indicated the most appropriate picture amount for display: column ratios of 15:85 and 24:76, and picture sizes of 220 x 127, 300 x 173, and 380 x 219. The research findings can serve as the basis for the application of design guidelines for future web page interface design.

  6. Processor Would Find Best Paths On Map

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1990-01-01

    Proposed very-large-scale integrated (VLSI) circuit image-data processor finds path of least cost from specified origin to any destination on map. Cost of traversal assigned to each picture element of map. Path of least cost from originating picture element to every other picture element computed as path that preserves as much as possible of signal transmitted by originating picture element. Dedicated microprocessor at each picture element stores cost of traversal and performs its share of computations of paths of least cost. Least-cost-path problem occurs in research, military maneuvers, and in planning routes of vehicles.
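
    The proposed chip computes the least-cost paths in parallel, one dedicated processor per picture element; a serial software equivalent is Dijkstra's algorithm run on the pixel grid, sketched below with an illustrative cost map and 4-connected moves.

```python
import heapq

def least_cost_map(cost, origin):
    """Serial equivalent of the least-cost-path computation described above:
    total cost of the cheapest path from `origin` to every pixel, where each
    pixel's traversal cost is paid on entering it (4-connected grid)."""
    rows, cols = len(cost), len(cost[0])
    best = [[float("inf")] * cols for _ in range(rows)]
    r0, c0 = origin
    best[r0][c0] = cost[r0][c0]
    heap = [(best[r0][c0], r0, c0)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > best[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < best[nr][nc]:
                    best[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return best

terrain = [[1, 1, 5],
           [9, 1, 5],
           [1, 1, 1]]
print(least_cost_map(terrain, (0, 0)))  # cheapest cumulative cost to each pixel
```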

  7. A Compact Imaging Detector of Polarization and Spectral Content

    NASA Technical Reports Server (NTRS)

    Rust, D. M.; Kumar, A.; Thompson, K. E.

    1993-01-01

    A new type of image detector will simultaneously analyze the polarization of light at all picture elements in a scene. The integrated Dual Imaging Detector (IDID) consists of a polarizing beam splitter bonded to a charge-coupled device (CCD), with signal-analysis circuitry and analog-to-digital converters, all integrated on a silicon chip. The polarizing beam splitter can be either a Ronchi ruling, or an array of cylindrical lenslets, bonded to a birefringent wafer. The wafer, in turn, is bonded to the CCD so that light in the two orthogonal planes of polarization falls on adjacent pairs of pixels. The use of a high-index birefringent material, e.g., rutile, allows the IDID to operate at f-numbers as high as f/3.5. Other aspects of the detector are discussed.

  8. Imaging spectrometer concepts for next-generation planetary missions

    NASA Technical Reports Server (NTRS)

    Herring, M.; Juergens, D. W.; Kupferman, P. N.; Vane, G.

    1984-01-01

    In recent years there has been an increasing interest in the imaging spectrometer concept, in which imaging is accomplished in multiple, contiguous spectral bands at typical intervals of 5 to 20 nm. There are two implementations of this concept under consideration for upcoming planetary missions. One is the scanning, or 'whisk-broom' approach, in which each picture element (pixel) of the scene is spectrally dispersed onto a linear array of detectors; the spatial information is provided by a scan mirror in combination with the vehicle motion. The second approach is the 'push-broom' imager, in which a line of pixels from the scene is spectrally dispersed onto a two-dimensional (area-array) detector. In this approach, the scan mirror is eliminated, but the optics and focal plane are more complex. This paper discusses the application of these emerging instrument concepts to the planetary program. Key issues are the trade-off between the two types of imaging spectrometer, the available data rate from a typical planetary mission, and the focal-plane cooling requirements. Specific straw-man conceptual designs for the Mars Geoscience/Climatology Orbiter (MGCO) and the Mariner Mark II Comet Rendezvous/Asteroid Flyby (CRAF) missions are discussed.

  9. Principles of computer processing of Landsat data for geologic applications

    USGS Publications Warehouse

    Taranik, James V.

    1978-01-01

    The main objectives of computer processing of Landsat data for geologic applications are to improve display of image data to the analyst or to facilitate evaluation of the multispectral characteristics of the data. Interpretations of the data are made from enhanced and classified data by an analyst trained in geology. Image enhancements involve adjustments of brightness values for individual picture elements. Image classification involves determination of the brightness values of picture elements for a particular cover type. Histograms are used to display the range and frequency of occurrence of brightness values. Landsat-1 and -2 data are preprocessed at Goddard Space Flight Center (GSFC) to adjust for the detector response of the multispectral scanner (MSS). Adjustments are applied to minimize the effects of striping and to compensate for bad-data lines, line segments, and lost individual pixel data. Because illumination conditions and landscape characteristics vary considerably and detector response changes with time, the radiometric adjustments applied at GSFC are seldom perfect and some detector striping remains in Landsat data. Rotation of the Earth under the satellite and movements of the satellite platform introduce geometric distortions in the data that must also be compensated for if image data are to be correctly displayed to the data analyst. Adjustments to Landsat data are made to compensate for variable solar illumination and for atmospheric effects. Geometric registration of Landsat data involves determination of the spatial location of a pixel in the output image and the determination of a new value for the pixel. The general objective of image enhancement is to optimize display of the data to the analyst. Contrast enhancements are employed to expand the range of brightness values in Landsat data so that the data can be efficiently recorded in a manner desired by the analyst. Spatial frequency enhancements are designed to enhance boundaries between features which have subtle differences in brightness values. Ratioing tends to reduce the effects of topography and to emphasize differences in brightness values between two Landsat bands. Simulated natural color is produced for geologists so that the colors of materials on images appear similar to colors of actual materials in the field. Image classification of Landsat data involves both machine-assisted delineation of multispectral patterns in four-dimensional spectral space and identification of machine-delineated multispectral patterns that represent particular cover conditions. The geological information derived from an analysis of a multispectral classification is usually related to lithology.
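
    Two of the enhancements mentioned above, the linear contrast stretch and band ratioing, reduce to simple per-pixel arithmetic. The sketch below uses synthetic DN values and illustrative band numbers; it is not USGS code.

```python
import numpy as np

def linear_stretch(band, low, high):
    """Linear contrast stretch: DN <= low maps to 0, DN >= high maps to 255,
    values in between are scaled linearly."""
    stretched = (band.astype(float) - low) / (high - low)
    return np.clip(stretched * 255.0, 0, 255).astype(np.uint8)

def band_ratio(band_a, band_b):
    """Ratio of two bands; suppresses topographic shading that multiplies
    both bands by roughly the same illumination factor."""
    return band_a.astype(float) / np.maximum(band_b.astype(float), 1.0)

rng = np.random.default_rng(0)
band5 = rng.integers(20, 90, size=(4, 4))   # synthetic DN values, illustrative bands
band7 = rng.integers(10, 60, size=(4, 4))
print(linear_stretch(band5, low=20, high=90))
print(band_ratio(band5, band7))
```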

  10. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high quality images (with significant image processing). In 2005 significant improvement in image quality was apparent and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which pixel count was the driving feature; even moderate-cost (~120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse their course and producing DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.

  11. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

    A motion video data system includes a compression system, including an image compressor, an image decompressor correlative to the image compressor having an input connected to an output of the image compressor, a feedback summing node having one input connected to an output of the image decompressor, a picture memory having an input connected to an output of the feedback summing node, apparatus for comparing an image stored in the picture memory with a received input image and deducing therefrom pixels having differences between the stored image and the received image and for retrieving from the picture memory a partial image including the pixels only and applying the partial image to another input of the feedback summing node, whereby to produce at the output of the feedback summing node an updated decompressed image, a subtraction node having one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image, the image compressor having an input connected to receive the difference image whereby to produce a compressed difference image at the output of the image compressor.
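
    A highly simplified software caricature of the feedback loop described above (the patent's compressor is a neural-network system; plain quantization stands in for it here, and the change threshold is an illustrative parameter): only pixels that differ from picture memory are coded as a difference image, and the decompressed difference updates the memory.

```python
import numpy as np

def compress(img, step=8):
    """Stand-in for the image compressor: coarse quantization."""
    return np.round(img / step).astype(np.int16)

def decompress(code, step=8):
    """Stand-in for the correlative image decompressor."""
    return (code * step).astype(np.float64)

def encode_frame(frame, memory, threshold=4.0):
    """One pass of the feedback loop, simplified: only pixels that changed
    versus picture memory are coded, and the decoded difference is summed
    back into memory to keep encoder and decoder in step."""
    changed = np.abs(frame - memory) > threshold         # pixels with differences
    difference = np.where(changed, frame - memory, 0.0)  # difference image
    code = compress(difference)                          # compressed difference image
    memory += decompress(code)                           # updated decompressed image
    return code, memory

prev = np.zeros((4, 4))
new = np.zeros((4, 4)); new[1, 1] = 100.0                # one changed pixel
code, prev = encode_frame(new, prev)
print(code)   # non-zero only where the frame changed
print(prev)   # picture memory now approximates the new frame
```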

  12. Advanced Three-Dimensional Display System

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2005-01-01

    A desktop-scale, computer-controlled display system, initially developed for NASA and now known as the VolumeViewer(TradeMark), generates three-dimensional (3D) images of 3D objects in a display volume. This system differs fundamentally from stereoscopic and holographic display systems: The images generated by this system are truly 3D in that they can be viewed from almost any angle, without the aid of special eyeglasses. It is possible to walk around the system while gazing at its display volume to see a displayed object from a changing perspective, and multiple observers standing at different positions around the display can view the object simultaneously from their individual perspectives, as though the displayed object were a real 3D object. At the time of writing this article, only partial information on the design and principle of operation of the system was available. It is known that the system includes a high-speed, silicon-backplane, ferroelectric-liquid-crystal spatial light modulator (SLM), multiple high-power lasers for projecting images in multiple colors, a rotating helix that serves as a moving screen for displaying voxels [volume cells or volume elements, in analogy to pixels (picture cells or picture elements) in two-dimensional (2D) images], and a host computer. The rotating helix and its motor drive are the only moving parts. Under control by the host computer, a stream of 2D image patterns is generated on the SLM and projected through optics onto the surface of the rotating helix. The system utilizes a parallel pixel/voxel-addressing scheme: All the pixels of the 2D pattern on the SLM are addressed simultaneously by laser beams. This parallel addressing scheme overcomes the difficulty of achieving both high resolution and a high frame rate in a raster scanning or serial addressing scheme. It has been reported that the structure of the system is simple and easy to build, that the optical design and alignment are not difficult, and that the system can be built by use of commercial off-the-shelf products. A prototype of the system displays an image of 1,024 by 768 by 170 (=133,693,440) voxels. In future designs, the resolution could be increased. The maximum number of voxels that can be generated depends upon the spatial resolution of SLM and the speed of rotation of the helix. For example, one could use an available SLM that has 1,024 by 1,024 pixels. Incidentally, this SLM is capable of operation at a switching speed of 300,000 frames per second. Implementation of full-color displays in future versions of the system would be straightforward: One could use three SLMs for red, green, and blue, respectively, and the colors of the voxels could be automatically controlled. An optically simpler alternative would be to use a single red/green/ blue light projector and synchronize the projection of each color with the generation of patterns for that color on a single SLM.

  13. Active pixel sensors with substantially planarized color filtering elements

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor)

    1999-01-01

    A semiconductor imaging system preferably having an active pixel sensor array compatible with a CMOS fabrication process. Color-filtering elements such as polymer filters and wavelength-converting phosphors can be integrated with the image sensor.

  14. High-speed, large-area, p-i-n InGaAs photodiode linear array at 2-micron wavelength

    NASA Astrophysics Data System (ADS)

    Joshi, Abhay; Datta, Shubhashish

    2012-06-01

    We present 16-element and 32-element lattice-mismatched InGaAs photodiode arrays having a cut-off wavelength of ~2.2 μm. Each 100 μm × 200 μm large pixel of the 32-element array has a capacitance of 2.5 pF at 5 V reverse bias, thereby allowing an RC-limited bandwidth of ~1.3 GHz. At room temperature, each pixel demonstrates a dark current of 25 μA at 5 V reverse bias. Corresponding results for the 16-element array having 200 μm × 200 μm pixels are also reported. Cooling the photodiode array to 150 K is expected to reduce its dark current to < 50 nA per pixel at 5 V reverse bias. Additionally, measurement results of 2-micron single photodiodes having 16 GHz bandwidth and corresponding PIN-TIA photoreceiver having 6 GHz bandwidth are also reported.

  15. Diffraction-based optical sensor detection system for capture-restricted environments

    NASA Astrophysics Data System (ADS)

    Khandekar, Rahul M.; Nikulin, Vladimir V.

    2008-04-01

    The use of digital cameras and camcorders in prohibited areas presents a growing problem. Piracy in the movie theaters results in huge revenue loss to the motion picture industry every year, but still image and video capture may present even a bigger threat if performed in high-security locations. While several attempts are being made to address this issue, an effective solution is yet to be found. We propose to approach this problem using a very commonly observed optical phenomenon. Cameras and camcorders use CCD and CMOS sensors, which include a number of photosensitive elements/pixels arranged in a certain fashion. Those are photosites in CCD sensors and semiconductor elements in CMOS sensors. They are known to reflect a small fraction of incident light, but could also act as a diffraction grating, resulting in the optical response that could be utilized to identify the presence of such a sensor. A laser-based detection system is proposed that accounts for the elements in the optical train of the camera, as well as the eye-safety of the people who could be exposed to optical beam radiation. This paper presents preliminary experimental data, as well as the proof-of-concept simulation results.

  16. Digital Earth Watch: Investigating the World with Digital Cameras

    NASA Astrophysics Data System (ADS)

    Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.

    2015-12-01

    Every digital camera, including the smart phone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change-over-time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images such as unhealthy leaves on plants, or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website, which lets students explore light, color and pixels, manipulate color in images and make measurements, will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org and Picture Post: http://picturepost.unh.edu

  17. Integrated Dual Imaging Detector

    NASA Technical Reports Server (NTRS)

    Rust, David M.

    1999-01-01

    A new type of image detector was designed to simultaneously analyze the polarization of light at all picture elements in a scene. The Integrated Dual Imaging Detector (IDID) consists of a lenslet array and a polarizing beamsplitter bonded to a commercial charge coupled device (CCD). The IDID simplifies the design and operation of solar vector magnetographs and the imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. When used in a solar telescope, the IDID can map the vector magnetic fields on the solar surface. Other applications include environmental monitoring, robot vision, and medical diagnoses (through the eye). Innovations in the IDID include (1) two interleaved imaging arrays (one for each polarization plane); (2) large dynamic range (well depth of 10(exp 5) electrons per pixel); (3) simultaneous readout and display of both images; and (4) laptop computer signal processing to produce polarization maps in field situations.

  18. Prehospital digital photography and automated image transmission in an emergency medical service - an ancillary retrospective analysis of a prospective controlled trial.

    PubMed

    Bergrath, Sebastian; Rossaint, Rolf; Lenssen, Niklas; Fitzner, Christina; Skorning, Max

    2013-01-16

    Still picture transmission was performed using a telemedicine system in an Emergency Medical Service (EMS) during a prospective, controlled trial. In this ancillary, retrospective study the quality and content of the transmitted pictures and the possible influences of this application on prehospital time requirements were investigated. A digital camera was used with a telemedicine system enabling encrypted audio and data transmission between an ambulance and a remotely located physician. By default, images were compressed (jpeg, 640 x 480 pixels). On occasion, this compression was deactivated (3648 x 2736 pixels). Two independent investigators assessed all transmitted pictures according to predefined criteria. In cases of different ratings, a third investigator had final decision competence. Patient characteristics and time intervals were extracted from the EMS protocol sheets and dispatch centre reports. Overall 314 pictures (mean 2.77 ± 2.42 pictures/mission) were transmitted during 113 missions (group 1). Pictures were not taken for 151 missions (group 2). Regarding picture quality, the content of 240 (76.4%) pictures was clearly identifiable; 45 (14.3%) pictures were considered "limited quality" and 29 (9.2%) pictures were deemed "not useful" due to not/hardly identifiable content. For pictures with file compression (n = 84 missions) and without (n = 17 missions), the content was clearly identifiable in 74% and 97% of the pictures, respectively (p = 0.003). Medical reports (n = 98, 32.8%), medication lists (n = 49, 16.4%) and 12-lead ECGs (n = 28, 9.4%) were most frequently photographed. The patient characteristics of group 1 vs. 2 were as follows: median age - 72.5 vs. 56.5 years, p = 0.001; frequency of acute coronary syndrome - 24/113 vs. 15/151, p = 0.014. The NACA scores and gender distribution were comparable. Median on-scene times were longer with picture transmission (26 vs. 22 min, p = 0.011), but ambulance arrival to hospital arrival intervals did not differ significantly (35 vs. 33 min, p = 0.054). Picture transmission was used frequently and resulted in an acceptable picture quality, even with compressed files. In most cases, previously existing "paper data" was transmitted electronically. This application may offer an alternative to other modes of ECG transmission. Due to different patient characteristics no conclusions for a prolonged on-scene time can be drawn. Mobile picture transmission holds important opportunities for clinical handover procedures and teleconsultation.

  19. Mars Polar Lander: The Search Begins

    NASA Technical Reports Server (NTRS)

    1999-01-01

    [figure removed for brevity, see original site]

    Beginning Thursday, December 16, 1999, the Mars Global Surveyor (MGS) spacecraft initiated a search for visible evidence of the fate of the missing Mars Polar Lander using the high resolution Mars Orbiter Camera (MOC) operated by Malin Space Science Systems of San Diego, California. Mars Polar Lander was lost during its landing attempt near 76.3°S, 195.0°W on the martian south polar layered terrain on December 3, 1999. Although the likelihood of seeing the lander is quite small, the MOC effort might provide some clues that shed light on what happened to the lander. The problem, however, is one of 'pixels'--those little square boxes of different shades of gray that comprise a digital image.

    The two pictures above illustrate the difficulty of finding the lander in MOC images. The picture at the top of the page is the first of the images that were acquired to look for the lander--this one was snapped by MOC around 3:36 p.m. Greenwich time on December 16th. Local time on Mars was about 2 p.m. Portions of this image are shown at 1/4th scale (left), full-scale (1.5 meters, or 5 feet, per pixel--middle), and 10 times enlarged (right). Because the landing site is very far south (at this latitude on Earth, you would be in Antarctica), the Sun illumination is not ideal for taking high resolution pictures with MOC. Thus, the full-resolution MOC data for this region show a large amount of 'salt and pepper' noise, which arises from statistical fluctuations in how light falling on the MOC charge-coupled-device (CCD) detector is converted to electricity. Other aspects of the MOC electronics also introduce noise. These effects are greatly reduced when taking pictures of portions of Mars that have better, more direct sunlight, or when the images are taken at reduced resolution to, in effect, 'average-out' the noise.

    The lower picture shows a model of the Mars Polar Lander sitting on a carpet in a conference room at Malin Space Science Systems. This model is illuminated in the same way that sunlight would illuminate the real lander at 2 p.m. local time in December 1999--in other words, the model is illuminated exactly the way it would be if it occurred in the MOC image shown above (left). This figure shows what the Mars Polar Lander would look like if viewed from above by cameras of different resolutions from 1 centimeter (0.4 inch) per pixel in the upper left to 1.5 meters (5 feet) per pixel in the lower right. The 1.5 meters per pixel view is the best resolution that can be achieved by MOC. Note that at MOC resolution, the lander is just a few pixels across.

    The problem of recognizing the lander in MOC images is obvious--all that might be seen is a pattern of a few bright and dark gray pixels. This means that it will be extremely difficult to identify the lander by looking at the relatively noisy MOC images that can be acquired at the landing site--like those shown in the top picture.

    How, then, is the MGS MOC team looking for the lander? Primarily, they are looking for associations of features that, together, would suggest whether or not the Mars landing was successful. For example, the parachute that was used to slow the lander from supersonic speeds to just under 300 km/hr (187 mph) was to have been jettisoned, along with part of the aeroshell that protected the lander from the extreme heat of entry, about 40 seconds before landing. The parachute and aeroshell are likely to be within a kilometer (6 tenths of a mile) of the lander. The parachute and aeroshell are nearly white, so they should stand out well against the red martian soil. The parachute, if lying on the ground in a fully open, flat position, would measure about 6 meters (20 feet)--thus it would cover three or four pixels (at most) in a MOC image. If the parachute can be found, the search for the lander can be narrowed to a small, nearby zone. If, as another example, the landing rockets kicked up a lot of dust and roughened the surface around the lander, evidence for this might show up as a dark circle surrounding a bright pixel (part of the lander) in the middle. The MOC operations team is using a set of these and similar scenarios to guide the examination of these images. The search continues...

  20. New Driving Scheme to Improve Hysteresis Characteristics of Organic Thin Film Transistor-Driven Active-Matrix Organic Light Emitting Diode Display

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshihiro; Nakajima, Yoshiki; Takei, Tatsuya; Fujisaki, Yoshihide; Fukagawa, Hirohiko; Suzuki, Mitsunori; Motomura, Genichi; Sato, Hiroto; Tokito, Shizuo; Fujikake, Hideo

    2011-02-01

    A new driving scheme for an active-matrix organic light emitting diode (AMOLED) display was developed to prevent the picture quality degradation caused by the hysteresis characteristics of organic thin film transistors (OTFTs). In this driving scheme, the gate electrode voltage of a driving-OTFT is directly controlled through the storage capacitor so that the operating point for the driving-OTFT is on the same hysteresis curve for every pixel after signal data are stored in the storage capacitor. Although the number of OTFTs in each pixel of the AMOLED display is restricted, because the OTFTs must be large enough to drive the organic light emitting diodes (OLEDs) owing to their small carrier mobility, the proposed scheme can improve the picture quality of an OTFT-driven flexible OLED display with the basic two-transistor, one-capacitor circuitry.

  1. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    NASA Technical Reports Server (NTRS)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA. Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
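
    The subsampling and histogram operations described above map directly onto array indexing; a brief sketch follows (in Python rather than IMDISP's C and assembler), with a synthetic image standing in for a planetary frame.

```python
import numpy as np

def subsample(image, factor):
    """Display every `factor`-th pixel of every `factor`-th line,
    starting from the upper left corner, as described above."""
    return image[::factor, ::factor]

def dn_histogram(image, bins=256):
    """Number of pixels per DN value (0-255), as plotted by IMDISP."""
    return np.bincount(image.ravel(), minlength=bins)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # synthetic 8x8 image
print(subsample(img, 2).shape)    # (4, 4): every other pixel of every other line
print(dn_histogram(img)[:5])      # counts for DN values 0..4
```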

  2. Potential Utility of a 4K Consumer Camera for Surgical Education in Ophthalmology.

    PubMed

    Ichihashi, Tsunetomo; Hirabayashi, Yutaka; Nagahara, Miyuki

    2017-01-01

    Purpose. We evaluated the potential utility of a cost-effective 4K consumer video system for surgical education in ophthalmology. Setting. Tokai University Hachioji Hospital, Tokyo, Japan. Design. Experimental study. Methods. The eyes that underwent cataract surgery, glaucoma surgery, vitreoretinal surgery, or oculoplastic surgery between February 2016 and April 2016 were recorded with 17.2 million pixels using a high-definition digital video camera (LUMIX DMC-GH4, Panasonic, Japan) and with 0.41 million pixels using a conventional analog video camera (MKC-501, Ikegami, Japan). Motion pictures of two cases for each surgery type were evaluated and classified as having poor, normal, or excellent visibility. Results. The 4K video system was easily installed by reading the instructions without technical expertise. The details of the surgical picture in the 4K system were highly improved over those of the conventional pictures, and the visual effects for surgical education were significantly improved. Motion pictures were stored for approximately 11 h with 512 GB SD memory. The total price of this system was USD 8000, which is a very low price compared with a commercial system. Conclusion. This 4K consumer camera was able to record and play back with high-definition surgical field visibility on the 4K monitor and is a low-cost, high-performing alternative for surgical facilities.

  3. First THEMIS Infrared and Visible Images of Mars

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This picture shows both a visible and a thermal infrared image taken by the thermal emission imaging system on NASA's 2001 Mars Odyssey spacecraft on November 2, 2001. The images were taken as part of the ongoing calibration and testing of the camera system as the spacecraft orbited Mars on its 13th revolution of the planet.

    The visible wavelength image, shown on the right in black and white, was obtained using one of the instrument's five visible filters. The spacecraft was approximately 22,000 kilometers (about 13,600 miles) above Mars looking down toward the south pole when this image was acquired. It is late spring in the martian southern hemisphere.

    The thermal infrared image, center, shows the temperature of the surface in color. The circular feature seen in blue is the extremely cold martian south polar carbon dioxide ice cap. The instrument has measured a temperature of minus 120 degrees Celsius (minus 184 degrees Fahrenheit) on the south polar ice cap. The polar cap is more than 900 kilometers (540 miles) in diameter at this time.

    The visible image shows additional details along the edge of the ice cap, as well as atmospheric hazes near the cap. The view of the surface appears hazy due to dust that still remains in the martian atmosphere from the massive martian dust storms that have occurred over the past several months.

    The infrared image covers a length of over 6,500 kilometers (3,900 miles) spanning the planet from limb to limb, with a resolution of approximately 5.5 kilometers per picture element, or pixel (3.4 miles per pixel), at the point directly beneath the spacecraft. The visible image has a resolution of approximately 1 kilometer per pixel (0.6 miles per pixel) and covers an area roughly the size of the states of Arizona and New Mexico combined.

    An annotated image is available at the same resolution in tiff format. Click the image to download (note: it is a 5.2 MB file) [figure removed for brevity, see original site]

    NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The thermal-emission imaging system was developed at Arizona State University, Tempe, with Raytheon Santa Barbara Remote Sensing, Santa Barbara, Calif. Lockheed Martin Astronautics, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  4. A Multiple-Window Video Embedding Transcoder Based on H.264/AVC Standard

    NASA Astrophysics Data System (ADS)

    Li, Chih-Hung; Wang, Chung-Neng; Chiang, Tihao

    2007-12-01

    This paper proposes a low-complexity multiple-window video embedding transcoder (MW-VET) based on H.264/AVC standard for various applications that require video embedding services including picture-in-picture (PIP), multichannel mosaic, screen-split, pay-per-view, channel browsing, commercials and logo insertion, and other visual information embedding services. The MW-VET embeds multiple foreground pictures at macroblock-aligned positions. It improves the transcoding speed with three block level adaptive techniques including slice group based transcoding (SGT), reduced frame memory transcoder (RFMT), and syntax level bypassing (SLB). The SGT utilizes prediction from the slice-aligned data partitions in the original bitstreams such that the transcoder simply merges the bitstreams by parsing. When the prediction comes from the newly covered area without slice-group data partitions, the pixels at the affected macroblocks are transcoded with the RFMT based on the concept of partial reencoding to minimize the number of refined blocks. The RFMT employs motion vector remapping (MVR) and intra mode switching (IMS) to handle intercoded blocks and intracoded blocks, respectively. The pixels outside the macroblocks that are affected by newly covered reference frame are transcoded by the SLB. Experimental results show that, as compared to the cascaded pixel domain transcoder (CPDT) with the highest complexity, our MW-VET can significantly reduce the processing complexity by 25 times and retain the rate-distortion performance close to the CPDT. At certain bit rates, the MW-VET can achieve up to 1.5 dB quality improvement in peak signal-to-noise-ratio (PSNR).

  5. Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.

    PubMed

    Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2018-01-24

    Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow for real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA Geforce GTX 1080) was approximately 300-500 times faster than those generated using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.

  6. Information-efficient spectral imaging sensor

    DOEpatents

    Sweatt, William C.; Gentry, Stephen M.; Boye, Clinton A.; Grotbeck, Carter L.; Stallard, Brian R.; Descour, Michael R.

    2003-01-01

    A programmable optical filter for use in multispectral and hyperspectral imaging. The filter splits the light collected by an optical telescope into two channels for each of the pixels in a row in a scanned image, one channel to handle the positive elements of a spectral basis filter and one for the negative elements of the spectral basis filter. Each channel for each pixel disperses its light into n spectral bins, with the light in each bin being attenuated in accordance with the value of the associated positive or negative element of the spectral basis vector. The spectral basis vector is constructed so that its positive elements emphasize the presence of a target and its negative elements emphasize the presence of the constituents of the background of the imaged scene. The attenuated light in the channels is re-imaged onto separate detectors for each pixel and then the signals from the detectors are combined to give an indication of the presence or not of the target in each pixel of the scanned scene. This system provides for a very efficient optical determination of the presence of the target, as opposed to the very data intensive data manipulations that are required in conventional hyperspectral imaging systems.
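
    In software terms, the two-channel optical arrangement evaluates the inner product of each pixel's spectrum with the spectral basis vector, with the positive and negative elements handled in separate channels and differenced at the end. The sketch below uses synthetic spectra and an illustrative basis vector.

```python
import numpy as np

def two_channel_score(spectrum, basis):
    """Software analogue of the optics: one channel attenuates light by the
    positive elements of the basis vector, the other by the magnitudes of the
    negative elements; the two detector outputs are then differenced."""
    positive = np.clip(basis, 0, None)
    negative = np.clip(-basis, 0, None)
    return spectrum @ positive - spectrum @ negative   # == spectrum @ basis

basis = np.array([0.5, 0.4, 0.2, 0.0, -0.1, -0.3, -0.4, -0.3])  # illustrative target-vs-background vector
target_spec = np.array([9, 8, 6, 4, 3, 2, 2, 1], dtype=float)   # synthetic spectra
background = np.array([2, 2, 3, 4, 6, 7, 8, 8], dtype=float)
print(two_channel_score(target_spec, basis))   # large positive for target-like pixels
print(two_channel_score(background, basis))    # negative for background-like pixels
```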

  7. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  8. Galilean satellite geomorphology

    NASA Technical Reports Server (NTRS)

    Malin, M. C.

    1983-01-01

    Research on this task consisted of the development and initial application of photometric and photoclinometric models using interactive computer image processing and graphics. New programs were developed to compute viewing and illumination angles for every picture element in a Voyager image using C-matrices and final Voyager ephemerides. These values were then used to transform each pixel to an illumination-oriented coordinate system. An iterative integration routine permits slope displacements to be computed from brightness variations, and correlated in the cross-sun direction, resulting in two dimensional topographic data. Figure 1 shows a 'wire-mesh' view of an impact crater on Ganymede, shown with a 10-fold vertical exaggeration. The crater, about 20 km in diameter, has a central mound and raised interior floor suggestive of viscous relaxation and rebound of the crater's topography. In addition to photoclinometry, the computer models that have been developed permit an examination of non-topographically-derived variations in surface brightness.
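
    A one-dimensional caricature of the photoclinometric step described above, assuming a linearized relation between brightness and along-sun slope; the actual Voyager processing works in two dimensions and includes the geometric and photometric corrections mentioned in the abstract. The gain and pixel scale below are illustrative.

```python
import numpy as np

def profile_from_brightness(brightness, flat_level, gain, pixel_scale):
    """1-D photoclinometry caricature: pixels brighter than a flat surface are
    treated as slopes tilted toward the sun, darker ones as tilted away; the
    slope estimates are integrated along the line to give relative heights."""
    slope = (brightness / flat_level - 1.0) / gain   # linearized slope estimate (illustrative)
    heights = np.cumsum(slope) * pixel_scale         # integrate along the sun direction
    return heights - heights[0]

scan = np.array([100, 100, 110, 120, 110, 100, 90, 80, 90, 100], dtype=float)
print(profile_from_brightness(scan, flat_level=100.0, gain=2.0, pixel_scale=1.0))
```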

  9. Luminance compensation for AMOLED displays using integrated MIS sensors

    NASA Astrophysics Data System (ADS)

    Vygranenko, Yuri; Fernandes, Miguel; Louro, Paula; Vieira, Manuela

    2017-05-01

    Active-matrix organic light-emitting diodes (AMOLEDs) are ideal for future TV applications due to their ability to faithfully reproduce real images. However, pixel luminance can be affected by instability of driver TFTs and aging effects in OLEDs. This paper reports on a pixel driver utilizing a metal-insulator-semiconductor (MIS) sensor for luminance control of the OLED element. In the proposed pixel architecture for bottom-emission AMOLEDs, the embedded MIS sensor shares the same layer stack with back-channel-etched a-Si:H TFTs to maintain the fabrication simplicity. The pixel design for a large-area HD display is presented. The external electronics performs image processing to modify incoming video using correction parameters for each pixel in the backplane, and also sensor data processing to update the correction parameters. The luminance adjusting algorithm is based on realistic models for pixel circuit elements to predict the relation between the programming voltage and OLED luminance. SPICE modeling of the sensing part of the backplane is performed to demonstrate its feasibility. Details on the pixel circuit functionality including the sensing and programming operations are also discussed.

  10. The Lancashire telemedicine ambulance.

    PubMed

    Curry, G R; Harrop, N

    1998-01-01

    An emergency ambulance was equipped with three video-cameras and a system for transmitting slow-scan video-pictures through a cellular telephone link to a hospital accident and emergency department. Video-pictures were transmitted at a resolution of 320 x 240 pixels and a frame rate of 15 pictures/min. In addition, a helmet-mounted camera was used with a wireless transmission link to the ambulance and thence the hospital. Speech was transmitted by a second hand-held cellular telephone. The equipment was installed in 1996-7 and video-recordings of actual ambulance journeys were made in July 1997. The technical feasibility of the telemedicine ambulance has been demonstrated and further clinical assessment is now in progress.

  11. Defect detection and classification of galvanized stamping parts based on fully convolution neural network

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Leng, Yanyi; Geng, Lei; Xi, Jiangtao

    2018-04-01

    In this paper, a new convolution neural network method is proposed for the inspection and classification of galvanized stamping parts. Firstly, all workpieces are divided into normal and defective by image processing, and the regions of interest (ROI) extracted from the defective workpieces are input to the trained fully convolutional network (FCN). The network uses end-to-end, pixel-to-pixel training, currently the most advanced approach in semantic segmentation, and predicts a result for each pixel. Secondly, we mark the workpiece, defect, and background with different pixel values in the training images, and use the pixel values and pixel counts to recognize the defects in the output picture. Finally, a threshold on the defect area, set according to the needs of the project, is used to achieve the specific classification of the workpiece. The experimental results show that the proposed method can successfully achieve defect detection and classification of galvanized stamping parts under ordinary camera and illumination conditions, and its accuracy can reach 99.6%. Moreover, it overcomes the problems of complex image preprocessing and difficult feature extraction and shows better adaptability.
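
    A sketch of the post-processing step described above: counting defect-labeled pixels in the FCN output mask and applying a project-specific area threshold. The label values and threshold are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative label values for the segmentation output described above.
BACKGROUND, WORKPIECE, DEFECT = 0, 1, 2

def classify_workpiece(label_mask, defect_area_threshold):
    """Count defect pixels in the FCN output mask and classify the part."""
    defect_pixels = int(np.count_nonzero(label_mask == DEFECT))
    return ("defective" if defect_pixels > defect_area_threshold else "normal",
            defect_pixels)

mask = np.full((32, 32), WORKPIECE, dtype=np.uint8)
mask[10:14, 10:20] = DEFECT                                  # a 40-pixel defect region
print(classify_workpiece(mask, defect_area_threshold=25))    # ('defective', 40)
```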

  12. Video framerate, resolution and grayscale tradeoffs for undersea telemanipulator

    NASA Technical Reports Server (NTRS)

    Ranadive, V.; Sheridan, T. B.

    1981-01-01

    The product of Frame Rate (F) in frames per second, Resolution (R) in total pixels, and Grayscale in bits (G) equals the transmission bit rate in bits per second. Thus for a fixed channel capacity there are tradeoffs between F, R and G in the actual sampling of the picture for a particular manual control task, in the present case remote undersea manipulation. A manipulator was used in the MASTER/SLAVE mode to study these tradeoffs. Images were systematically degraded from 28 frames per second, 128 x 128 pixels and 16 levels (4 bits) grayscale, with various FRG combinations constructed from a real-time digitized (charge-injection) video camera. It was found that frame rate, resolution and grayscale could be independently reduced without preventing the operator from accomplishing his/her task. Threshold points were found beyond which degradation would prevent any successful performance. A general conclusion is that a well trained operator can perform familiar remote manipulator tasks with a considerably degraded picture, down to 50 kbits/sec.
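
    The F x R x G budget above is easy to check numerically; the degraded combination shown below is a hypothetical one chosen to land near the ~50 kbit/s region mentioned in the conclusion, not a setting reported in the study.

```python
def bit_rate(frames_per_s, pixels, bits_per_pixel):
    """Transmission bit rate = F x R x G (bits per second)."""
    return frames_per_s * pixels * bits_per_pixel

# Full-quality setting used in the study:
print(bit_rate(28, 128 * 128, 4))   # 1,835,008 bit/s

# One hypothetical degraded combination near ~50 kbit/s
# (frame rate, resolution and grayscale all reduced):
print(bit_rate(4, 64 * 64, 3))      # 49,152 bit/s
```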

  13. Microlens performance limits in sub-2μm pixel CMOS image sensors.

    PubMed

    Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B

    2010-03-15

    CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and its microlens, in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in red, green and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.

  14. A 100 Mfps image sensor for biological applications

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo

    2018-02-01

    Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications to advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like petals of a flower. One has five collection gates and one drain gate at the center, which can capture consecutive five frames at 100 Mfps with the pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also makes possible the in-pixel signal accumulation. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulse neutron tomography and dynamic PSP. The paper also briefly explains an expression of the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that the image sensor designed based on the theoretical analysis achieves imaging of consecutive frames at the frame interval of 50 ps.

  15. North Polar Cap

    NASA Technical Reports Server (NTRS)

    2004-01-01

    7 September 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a 1.4 m/pixel (5 ft/pixel) view of a typical martian north polar ice cap texture. The surface is pitted and rough at the scale of several meters. The north polar residual cap of Mars consists mainly of water ice, while the south polar residual cap is mostly carbon dioxide. This picture is located near 85.2°N, 283.2°W. The image covers an area approximately 1 km wide by 1.4 km high (0.62 by 0.87 miles). Sunlight illuminates this scene from the lower left.

  16. Hardware Implementation of a Bilateral Subtraction Filter

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9 x 9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9 x 9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) An image pixel pipeline with a 9 x 9-pixel window generator; b) An array of processing elements; c) An adder tree; d) A smoothing-and-delaying unit; and e) A subtraction unit. After each 9 x 9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique for its position in the window, containing the weight coefficients for the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and the pixel-value-weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of the products divided by the sum of the weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3 x 3 subwindow centered in the 9 x 9 window. After smoothing, the image is delayed by an additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3 x 3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
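
    A software sketch of the stages listed above: 9 x 9 window generation, position- and difference-dependent Gaussian weights, the sum of products divided by the sum of weights, and subtraction from a 3 x 3 smoothed image. The Gaussian widths are illustrative parameters, and exp() stands in for the FPGA's lookup tables.

```python
import numpy as np

def bilateral_subtraction(image, window=9, sigma_s=3.0, sigma_r=25.0):
    """Per pixel: range- and position-weighted (bilateral) average over a 9x9
    window, then a plain 3x3 average minus that bilateral value."""
    pad = window // 2
    ax = np.arange(window) - pad
    spatial = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            win = padded[r:r + window, c:c + window]
            center = padded[r + pad, c + pad]
            w = spatial * np.exp(-(win - center) ** 2 / (2 * sigma_r ** 2))
            bilateral = (w * win).sum() / w.sum()   # sum of products / sum of weights
            smooth3 = padded[r + pad - 1:r + pad + 2, c + pad - 1:c + pad + 2].mean()
            out[r, c] = smooth3 - bilateral         # 3x3 smoothed minus bilateral
    return out

img = np.zeros((12, 12)); img[:, 6:] = 50.0         # a vertical step edge
print(np.round(bilateral_subtraction(img), 2))
```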

  17. Utility of large spot binocular indirect laser delivery for peripheral photocoagulation therapy in children.

    PubMed

    Balasubramaniam, Saranya C; Mohney, Brian G; Bang, Genie M; Link, Thomas P; Pulido, Jose S

    2012-09-01

    The purpose of this article is to demonstrate the utility of the large spot size (LSS) setting using a binocular laser indirect delivery system for peripheral ablation in children. One patient with bilateral retinopathy of prematurity received photocoagulation with standard spot size burns placed adjacently to LSS burns. Using the pixel analysis program ImageJ on the RetCam picture, the areas of each retinal spot size were determined in units of pixels, giving a standard spot range of 805 to 1294 pixels and an LSS range of 1699 to 2311 pixels. Additionally, fluence was calculated using theoretical retinal areas produced by each spot size: the standard spot setting was 462 mJ/mm2 and the LSS setting was 104 mJ/mm2. For eyes with retinopathy of prematurity, our study shows that LSS laser indirect delivery halves the number of spots required for treatment and reduces fluence to less than one-quarter, producing more uniform spots.

  18. Mars Boulders: On a Hill in Utopia Planitia

    NASA Image and Video Library

    2000-09-18

    The Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) was designed specifically to provide images of Mars that have a resolution comparable to the aerial photographs commonly used by Earth scientists to study geological processes and map landforms on our home planet. When MGS reaches its Mapping Orbit in March 1999, MOC will be able to obtain pictures with spatial resolutions of 1.5 meters (5 feet) per pixel--good enough to easily see objects the size of an automobile. Boulders are one of the keys to determining which processes have eroded, transported, and deposited material on Mars (e.g., landslides, mud flows, flood debris). During the first year in orbit, MGS MOC obtained pictures with resolutions between 2 and 30 meters (7 to 98 feet) per pixel. It was found that boulders are difficult to identify on Mars in images with resolutions worse than about 2-3 meters per pixel. Although not known when the MOC was designed, "thresholds" like this are found on Earth, too. The MOC's 1.5 m/pixel resolution was a compromise between (1) the anticipation of such resolution-dependent sensitivity based on our experience with Earth and (2) the cost in terms of mass if we had built a larger telescope to get a higher resolution. Some rather larger boulders (i.e., larger than about 10 meters--or yards--in size) have already been seen on Mars by the orbiting camera. This is a feat similar to that which can be obtained by "spy" satellites on Earth. The MOC image 53104 subframe shown above features a low, rounded hill in southeastern Utopia Planitia. Each of the small, lumpy features on the top of this hill is a boulder. In this picture, boulders are not seen on the surrounding plain. These boulders are interpreted to be the remnants of a layer of harder rock that once covered the top of the hill, but was subsequently eroded and broken up by weathering and wind processes. MOC image 53104 was taken on September 2, 1998. The subframe shows an area 2.2 km by 3.3 km (1.4 miles by 2.7 miles). The image has a resolution of about 3.25 meters (10.7 feet) per pixel. The subframe is centered at 41.0°N latitude and 207.3°W longitude. North is approximately up, illumination is from the left. http://photojournal.jpl.nasa.gov/catalog/PIA01500

  19. CMOS Active-Pixel Image Sensor With Simple Floating Gates

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.

    1996-01-01

    Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.

  20. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, S.; Cole, D. M.; Hancock, B. R.; Smith, R. M.

    2008-01-01

    Electronic coupling effects such as Inter-Pixel Capacitance (IPC) affect the quantitative interpretation of image data from CMOS, hybrid visible and infrared imagers alike. Existing methods of characterizing IPC do not provide a map of the spatial variation of IPC over all pixels. We demonstrate a deterministic method that provides a direct quantitative map of the crosstalk across an imager. The approach requires only the ability to reset single pixels to an arbitrary voltage, different from the rest of the imager. No illumination source is required. Mapping IPC independently for each pixel is also made practical by the greater S/N ratio achievable for an electrical stimulus than for an optical stimulus, which is subject to both Poisson statistics and diffusion effects of photo-generated charge. The data we present illustrate a more complex picture of IPC in Teledyne HgCdTe and HyViSi focal plane arrays than is presently understood, including the presence of a newly discovered, long-range IPC in the HyViSi FPA that extends tens of pixels in distance, likely stemming from extended field effects in the fully depleted substrate. The sensitivity of the measurement approach has been shown to be good enough to distinguish spatial structure in IPC on the order of 0.1%.
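
    A minimal sketch of the single-pixel-reset idea, assuming two frames are available (one with a chosen pixel reset to a different voltage, one baseline) and that the chosen pixel is not at the array edge; the function name and the fixed neighborhood radius are illustrative, not from the paper.

```python
import numpy as np

def ipc_map_from_single_pixel_reset(frame_reset, frame_baseline, row, col, radius=3):
    """Estimate inter-pixel coupling around one pixel that was reset to a voltage
    different from the rest of the array (electrical stimulus, no illumination)."""
    diff = frame_reset.astype(float) - frame_baseline.astype(float)
    region = diff[row - radius:row + radius + 1, col - radius:col + radius + 1]
    total = region.sum()
    # Each neighbor's share of the total signal is its coupling fraction;
    # the central value is the fraction retained by the stimulated pixel.
    return region / total
```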

  1. Prototype Focal-Plane-Array Optoelectronic Image Processor

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey

    1995-01-01

    Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black and white) images. Each processing element corresponds to one picture element of image and is located at that picture element. Includes input-plane photodetector in form of parasitic phototransistor forming part of processing circuit. Output of each processing circuit used to modulate one picture element in output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform image into set of features suitable for high-level processing; e.g., recognition.

  2. Mariner 9 View of Arsia Silva

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Mariner 9 took this picture of Mars during the closing hours of its approach to the planet on November 13, 1971. The picture has been computer-enhanced with electronic high-pass filtering. The crater-like object at the lower left is about 124 miles (200 kilometers) across and is the same dark spot seen earlier in more distant views. It can be identified on a Mars map as Arsia Silva. The streaks pointing north--more than 1000 kilometers long--are either atmospheric turbulence patterns or dunes formed downwind of the crater. If they are dunes, they are as extensive as the largest in the Sahara in North Africa and those in Peru, South America. The picture was taken from a distance of 65,000 miles about eight hours before Mariner 9 went into orbit around Mars. It was transmitted back to Earth at 10:00 p.m. during the first orbit.

    Mariner 9 was the first spacecraft to orbit another planet. The spacecraft was designed to continue the atmospheric studies begun by Mariners 6 and 7, and to map over 70% of the Martian surface from the lowest altitude (1500 kilometers [900 miles]) and at the highest resolutions (1 kilometer per pixel to 100 meters per pixel) of any previous Mars mission.

    Mariner 9 was launched on May 30, 1971 and arrived on November 14, 1971.

  3. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of a fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
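
    One plausible non-linear motion model consistent with the description above is a constant-acceleration (quadratic) extrapolation of the tracked pixel position; the patent's actual model may differ, so this Python sketch is illustrative only.

```python
import numpy as np

def extrapolate_pixel_position(p1, p2, p3):
    """Predict the pixel position in a fourth frame from its positions in frames 1-3.

    p1, p2, p3: (x, y) positions of the tracked pixel in the first three images.
    Uses a constant-acceleration model: the change between successive motion
    vectors is carried forward one more step."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    v1 = p2 - p1          # first motion vector
    v2 = p3 - p2          # second motion vector
    accel = v2 - v1       # non-linear term: change of motion between frames
    return p3 + v2 + accel
```

    For example, positions (0, 0), (2, 1), and (5, 2) in the first three frames extrapolate to (9, 3) in the fourth.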

  4. Investigating Diffusion with Technology

    ERIC Educational Resources Information Center

    Miller, Jon S.; Windelborn, Augden F.

    2013-01-01

    The activities described here allow students to explore the concept of diffusion with the use of common equipment such as computers, webcams and analysis software. The procedure includes taking a series of digital pictures of a container of water with a webcam as a dye slowly diffuses. At known time points, measurements of the pixel densities…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philipp, Hugh T., E-mail: htp2@cornell.edu; Tate, Mark W.; Purohit, Prafull

    Modern storage rings are readily capable of providing intense x-ray pulses, tens of picoseconds in duration, millions of times per second. Exploiting the temporal structure of these x-ray sources opens avenues for studying rapid structural changes in materials. Many processes (e.g. crack propagation, deformation on impact, turbulence, etc.) differ in detail from one sample trial to the next and would benefit from the ability to record successive x-ray images with single x-ray sensitivity while framing at 5 to 10 MHz rates. To this end, we have pursued the development of fast x-ray imaging detectors capable of collecting bursts of images that enable the isolation of single synchrotron bunches and/or bunch trains. The detector technology used is the hybrid pixel array detector (PAD) with a charge integrating front-end, and high-speed, in-pixel signal storage elements. A 384×256 pixel version, the Keck-PAD, with 150 µm × 150 µm pixels and 8 dedicated in-pixel storage elements is operational, has been tested at CHESS, and has collected data for compression wave studies. An updated version with 27 dedicated storage capacitors and identical pixel size has been fabricated.

  6. Digital identification of cartographic control points

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.

    1988-01-01

    Techniques have been developed for the sub-pixel location of control points in satellite images returned by the Voyager spacecraft. The procedure uses digital imaging data in the neighborhood of the point to form a multipicture model of a piece of the surface. Comparison of this model with the digital image in each picture determines the control point locations to about a tenth of a pixel. At this level of precision, previously insignificant effects must be considered, including chromatic aberration, high level imaging distortions, and systematic errors due to navigation uncertainties. Use of these methods in the study of Jupiter's satellite Io has proven very fruitful.

  7. X-ray ‘ghost images’ could cut radiation doses

    NASA Astrophysics Data System (ADS)

    Chen, Sophia

    2018-03-01

    On its own, a single-pixel camera captures pictures that are pretty dull: squares that are completely black, completely white, or some shade of gray in between. All it does, after all, is detect brightness. Yet by connecting a single-pixel camera to a patterned light source, a team of physicists in China has made detailed x-ray images using a statistical technique called ghost imaging, first pioneered 20 years ago in infrared and visible light. Researchers in the field say future versions of this system could take clear x-ray photographs with cheap cameras—no need for lenses and multipixel detectors—and less cancer-causing radiation than conventional techniques.

  8. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
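
    As an illustration of one of the spatial-domain techniques named above, the following Python sketch implements textbook block truncation coding (block mean and standard deviation preserved, plus a one-bit-per-pixel mask, roughly 2 bits/pixel for 4×4 blocks of 8-bit data); it is a generic version, not the exact coder evaluated in the paper.

```python
import numpy as np

def btc_encode_block(block):
    """Encode one image block (e.g., 4x4): keep mean, standard deviation, and a bit mask."""
    mean, std = block.mean(), block.std()
    mask = block >= mean
    return mean, std, mask

def btc_decode_block(mean, std, mask):
    """Reconstruct a block from its BTC parameters, preserving mean and variance."""
    n, q = mask.size, int(mask.sum())
    if q in (0, n):                              # flat block: all pixels get the mean
        return np.full(mask.shape, mean)
    lo = mean - std * np.sqrt(q / (n - q))       # value for pixels below the mean
    hi = mean + std * np.sqrt((n - q) / q)       # value for pixels at or above the mean
    return np.where(mask, hi, lo)
```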

  9. A study of some nine-element decision rules. [for multispectral recognition of remote sensing

    NASA Technical Reports Server (NTRS)

    Richardson, W.

    1974-01-01

    A nine-element rule is one that makes a classification decision for each pixel based on data from that pixel and its eight immediate neighbors. Three such rules, all fast and simple to use, are defined and tested. All performed substantially better on field interiors than the best one-point rule. Qualitative results indicate that fine detail and contradictory testimony tend to be overlooked by the rules.
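
    The three rules studied in the report are not reproduced here, but the following Python sketch shows the general shape of a nine-element rule: each pixel's 3×3 neighborhood of multispectral vectors is averaged and the result is assigned to the nearest class mean. The minimum-distance classifier and the class-mean inputs are assumptions for illustration only.

```python
import numpy as np

def nine_element_classify(cube, class_means):
    """Classify each pixel from its 3x3 neighborhood in a multispectral image cube.

    cube: array of shape (rows, cols, bands); class_means: array (n_classes, bands).
    Returns an integer class map of shape (rows, cols)."""
    rows, cols, _ = cube.shape
    padded = np.pad(cube.astype(float), ((1, 1), (1, 1), (0, 0)), mode='edge')
    # Average the nine spectral vectors centered on each pixel.
    neighborhood_mean = np.zeros_like(cube, dtype=float)
    for dy in range(3):
        for dx in range(3):
            neighborhood_mean += padded[dy:dy + rows, dx:dx + cols, :]
    neighborhood_mean /= 9.0
    # Assign each averaged vector to the nearest class mean (Euclidean distance).
    dists = np.linalg.norm(
        neighborhood_mean[:, :, None, :] - class_means[None, None, :, :], axis=-1)
    return dists.argmin(axis=-1)
```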

  10. Solar System Portrait - Earth as Pale Blue Dot

    NASA Image and Video Library

    1996-09-12

    This narrow-angle color image of the Earth, dubbed Pale Blue Dot, is a part of the first ever 'portrait' of the solar system taken by NASA’s Voyager 1. The spacecraft acquired a total of 60 frames for a mosaic of the solar system from a distance of more than 4 billion miles from Earth and about 32 degrees above the ecliptic. From Voyager's great distance Earth is a mere point of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. This blown-up image of the Earth was taken through three color filters -- violet, blue and green -- and recombined to produce the color image. The background features in the image are artifacts resulting from the magnification. http://photojournal.jpl.nasa.gov/catalog/PIA00452

  11. Changing the color of textiles with realistic visual rendering

    NASA Astrophysics Data System (ADS)

    Hébert, Mathieu; Henckens, Lambert; Barbier, Justine; Leboulleux, Lucie; Page, Marine; Roujas, Lucie; Cazier, Anthony

    2015-03-01

    Fast and easy preview of a fabric without having to produce samples would be very profitable for textile designers, but remains a technological challenge. As a first step towards this objective, we study the possibility of making images of a real sample and changing virtually the colors of its yarns while preserving the shine and shadow texture. We consider two types of fabrics: Jacquard weave fabrics made of polyester warp and weft yarns of different colors, and satin ribbons made of polyester and metallic yarns. For the Jacquard fabric, we make a color picture with a scanner on a sample in which the yarns have contrasted colors, threshold this image in order to distinguish the pixels corresponding to each yarn, and accordingly modify their hue and chroma values. This method is simple to operate but does not allow simulation of the angle-dependent shine. A second method, tested on the satin ribbon made of black polyester and achromatic metallic yarns, is based on polarized imaging. We analyze the polarization state of the reflected light, which is different for dielectric and metallic materials illuminated by polarized light. We then add a fixed color value to the pixels representing the polyester yarns and modify the hue and chroma of the pixels representing the metallic yarns. This was performed for many incident angles of light, in order to render the twinkling effect displayed by these ribbons. We verified on a few samples that the simulated previews reproduce real pictures with visually acceptable accuracy.

  12. Active-Pixel Image Sensor With Analog-To-Digital Converters

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.

    1995-01-01

    Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at a time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.

  13. Infrared Camera System for Visualization of IR-Absorbing Gas Leaks

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert; Immer, Christopher; Cox, Robert

    2010-01-01

    Leak detection and location remain a common problem in NASA and industry, where gas leaks can create hazardous conditions if not quickly detected and corrected. In order to help rectify this problem, this design equips an infrared (IR) camera with the means to make gas leaks of IR-absorbing gases more visible for leak detection and location. By comparing the output of two IR cameras (or two pictures from the same camera under essentially identical conditions and very closely spaced in time) on a pixel-by-pixel basis, one can cancel out all but the desired variations that correspond to the IR absorption of the gas of interest. This can be simply done by absorbing the IR lines that correspond to the gas of interest from the radiation received by one of the cameras by the intervention of a filter that removes the particular wavelength of interest from the "reference" picture. This can be done most sensitively with a gas filter (filled with the gas of interest) placed in front of the IR detector array, or (less sensitively) by use of a suitable line filter in the same location. This arrangement would then be balanced against the unfiltered "measurement" picture, which will have variations from IR absorption from the gas of interest. By suitable processing of the signals from each pixel in the two IR pictures, the user can display only the differences in the signals. Either a difference or a ratio output of the two signals is feasible. From a gas concentration viewpoint, the ratio could be processed to show the column depth of the gas leak. If a variation in the background IR light intensity is present in the field of view, then large changes in the difference signal will occur for the same gas column concentration between the background and the camera. By ratioing the outputs, the same signal ratio is obtained for both high- and low-background signals, even though the low-signal areas may have greater noise content due to their smaller signal strength. Thus, one embodiment would use a ratioed output signal to better represent the gas column concentration. An alternative approach uses a simpler multiplication of the filtered signal to make the filtered signal equal to the unfiltered signal at most locations, followed by a subtraction to remove all but the wavelength-specific absorption in the unfiltered sample. This signal processing can also reveal the net difference signal representing the leaking gas absorption, and allow rapid leak location, but signal intensity would not relate solely to gas absorption, as raw signal intensity would also affect the displayed signal. A second design choice is whether to use one camera with two images closely spaced in time, or two cameras with essentially the same view and time. The figure shows the two-camera version. This choice involves many tradeoffs that are not apparent until some detailed testing is done. In short, the tradeoffs involve the temporal changes in the field picture versus the pixel sensitivity curves and frame alignment differences with two cameras, and which system would lead to the smaller variations from the uncontrolled variables.
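
    The pixel-by-pixel comparison can be sketched as follows. This is a simplified software illustration (frame registration, gain matching, and noise handling are omitted), and the median-based scaling in the difference mode is an assumption, not the article's exact procedure.

```python
import numpy as np

def gas_leak_visualization(measurement, reference, mode="ratio", eps=1e-6):
    """Compare an unfiltered ("measurement") IR frame with a gas-filtered ("reference")
    frame pixel by pixel to highlight wavelength-specific absorption by the leaking gas.

    mode="ratio": ratio image, roughly independent of background intensity.
    mode="difference": scale the reference to the measurement, then subtract."""
    measurement = measurement.astype(float)
    reference = reference.astype(float)
    if mode == "ratio":
        return measurement / (reference + eps)
    # Scale the filtered frame so the two agree where no leak is present,
    # then subtract to leave only the gas-specific absorption.
    scale = np.median(measurement) / (np.median(reference) + eps)
    return measurement - scale * reference
```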

  14. The Earth and Moon As Seen by 2001 Mars Odyssey's Thermal Emission Imaging System

    NASA Technical Reports Server (NTRS)

    2001-01-01

    2001 Mars Odyssey's Thermal Emission Imaging System (THEMIS) took this portrait of the Earth and its companion Moon, using the infrared camera, one of two cameras in the instrument. It was taken at a distance of 3,563,735 kilometers (more than 2 million miles) on April 19, 2001 as the 2001 Mars Odyssey spacecraft left the Earth. From this distance and perspective the camera was able to acquire an image that directly shows the true distance from the Earth to the Moon. The Earth's diameter is about 12,750 km, and the distance from the Earth to the Moon is about 385,000 km, corresponding to 30 Earth diameters. The dark region seen on Earth in the infrared temperature image is the cold south pole, with a temperature of minus 50 degrees Celsius (minus 58 degrees Fahrenheit). The small bright region above it is warm Australia. This image was acquired using the 9.1 µm infrared filter, one of nine filters that the instrument will use to map the mineral composition and temperature of the martian surface. From this great distance, each picture element (pixel) in the image corresponds to a region 900 by 900 kilometers or greater in size, about the size of the state of Texas. Once Odyssey reaches Mars orbit, each infrared pixel will cover a region only 100 by 100 meters on the surface, about the size of a major league baseball field.

  15. Effect of ambiguities on SAR picture quality

    NASA Technical Reports Server (NTRS)

    Korwar, V. N.; Lipes, R. G.

    1978-01-01

    The degradation of picture quality is studied for a high-resolution, large-swath SAR mapping system subjected to speckle, additive white Gaussian noise, and range and azimuthal ambiguities occurring because of the non-finite antenna pattern produced by a square aperture antenna. The effect of the azimuth antenna pattern was accounted for by calculating the azimuth ambiguity function. Range ambiguities were accounted for by adding appropriate pixels at a range separation corresponding to one pulse repetition period, but attenuated by the antenna pattern. A method of estimating the range defocussing effect, which arises because the azimuth matched filter is a function of range, is shown. The resulting simulated picture was compared with one degraded by speckle and noise but no ambiguities. It is concluded that azimuth ambiguities do not cause any noticeable degradation but range ambiguities might.

  16. Pluto Close-up, Now in Color

    NASA Image and Video Library

    2015-12-10

    This enhanced color mosaic combines some of the sharpest views of Pluto that NASA's New Horizons spacecraft obtained during its July 14 flyby. The pictures are part of a sequence taken near New Horizons' closest approach to Pluto, with resolutions of about 250-280 feet (77-85 meters) per pixel -- revealing features smaller than half a city block on Pluto's surface. Lower resolution color data (at about 2,066 feet, or 630 meters, per pixel) were added to create this new image. The images form a strip 50 miles (80 kilometers) wide, trending (top to bottom) from the edge of "badlands" northwest of the informally named Sputnik Planum, across the al-Idrisi mountains, onto the shoreline of Pluto's "heart" feature, and just into its icy plains. They combine pictures from the telescopic Long Range Reconnaissance Imager (LORRI), taken approximately 15 minutes before New Horizons' closest approach to Pluto from a range of only 10,000 miles (17,000 kilometers), with color data (in near-infrared, red and blue) gathered by the Ralph/Multispectral Visible Imaging Camera (MVIC) 25 minutes before the LORRI pictures. The wide variety of cratered, mountainous and glacial terrains seen here gives scientists and the public alike a breathtaking, super-high-resolution color window into Pluto's geology. The border between the relatively smooth Sputnik Planum ice sheet and the pitted area is visible, with a series of hills forming slightly inside this unusual "shoreline." http://photojournal.jpl.nasa.gov/catalog/PIA20213

  17. The FoCal prototype—an extremely fine-grained electromagnetic calorimeter using CMOS pixel sensors

    NASA Astrophysics Data System (ADS)

    de Haas, A. P.; Nooren, G.; Peitzmann, T.; Reicher, M.; Rocco, E.; Röhrich, D.; Ullaland, K.; van den Brink, A.; van Leeuwen, M.; Wang, H.; Yang, S.; Zhang, C.

    2018-01-01

    A prototype of a Si-W EM calorimeter was built with Monolithic Active Pixel Sensors as the active elements. With a pixel size of 30 μm it allows digital calorimetry, i.e. the particle's energy is determined by counting pixels, not by measuring the energy deposited. Although of modest size, with a width of only four Moliere radii, it has 39 million pixels. In this article the construction and tuning of the prototype is described. Results from beam tests are compared with predictions of GEANT-based Monte Carlo simulations. The shape of showers caused by electrons is shown in unprecedented detail. Results for energy and position resolution are also given.

  18. Studies on a 300 k pixel detector telescope

    NASA Astrophysics Data System (ADS)

    Middelkamp, Peter; Antinori, F.; Barberis, D.; Becks, K. H.; Beker, H.; Beusch, W.; Burger, P.; Campbell, M.; Cantatore, E.; Catanesi, M. G.; Chesi, E.; Darbo, G.; D'Auria, S.; Davia, C.; di Bari, D.; di Liberto, S.; Elia, D.; Gys, T.; Heijne, E. H. M.; Helstrup, H.; Jacholkowski, A.; Jæger, J. J.; Jakubek, J.; Jarron, P.; Klempt, W.; Krummenacher, F.; Knudson, K.; Kralik, I.; Kubasta, J.; Lasalle, J. C.; Leitner, R.; Lemeilleur, F.; Lenti, V.; Letheren, M.; Lopez, L.; Loukas, D.; Luptak, M.; Martinengo, P.; Meddeler, G.; Meddi, F.; Morando, M.; Munns, A.; Pellegrini, F.; Pengg, F.; Pospisil, S.; Quercigh, E.; Ridky, J.; Rossi, L.; Safarik, K.; Scharfetter, L.; Segato, G.; Simone, S.; Smith, K.; Snoeys, W.; Vrba, V.

    1996-02-01

    Four silicon pixel detector planes are combined to form a tracking telescope in the lead ion experiment WA97 at CERN, with 290,304 sensitive elements each of 75 μm by 500 μm area. An electronic pulse processing circuit is associated with each individual sensing element, and the response to ionizing particles is binary with an adjustable threshold. The noise rate for a threshold of 6000 e- has been measured to be less than 10⁻¹⁰. The inefficient area due to malfunctioning pixels is 2.8% of the 120 cm2. Detector overlaps within one plane have been used to determine the alignment of the components of the plane itself, without need for track reconstruction using external detectors. It is the first time that such a big surface covered with active pixels has been used in a physics experiment. Some aspects concerning inclined particle tracks and time walk have been measured separately in a beam test at the CERN SPS H6 beam.

  19. Terahertz imaging devices and systems, and related methods, for detection of materials

    DOEpatents

    Kotter, Dale K.

    2016-11-15

    Terahertz imaging devices may comprise a focal plane array including a substrate and a plurality of resonance elements. The plurality of resonance elements may comprise a conductive material coupled to the substrate. Each resonance element of the plurality of resonance elements may be configured to resonate and produce an output signal responsive to incident radiation having a frequency in the range of about 0.1 THz to 4 THz. A method of detecting a hazardous material may comprise receiving incident radiation by a focal plane array having a plurality of discrete pixels including a resonance element configured to absorb the incident radiation at a resonant frequency in the THz range, generating an output signal from each of the discrete pixels, and determining a presence of a hazardous material by interpreting spectral information from the output signal.

  20. OLED panel with fuses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levermore, Levermore; Pang, Huiqing; Rajan, Kamala

    2014-09-16

    Embodiments may provide a first device that may comprise a substrate, a plurality of conductive bus lines disposed over the substrate, and a plurality of OLED circuit elements disposed on the substrate, where each of the OLED circuit elements comprises one and only one pixel electrically connected in series with a fuse. Each pixel may further comprise a first electrode, a second electrode, and an organic electroluminescent (EL) material disposed between the first and the second electrodes. The fuse of each of the plurality of OLED circuit elements may electrically connect each of the OLED circuit elements to at least one of the plurality of bus lines. Each of the plurality of bus lines may be electrically connected to a plurality of OLED circuit elements that are commonly addressable and at least two of the bus lines may be separately addressable.

  1. Mercuric iodide room-temperature array detectors for gamma-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.

    Significant progress has been made recently in the development of mercuric iodide detector arrays for gamma-ray imaging, making real the possibility of constructing high-performance, small, light-weight, portable gamma-ray imaging systems. New techniques have been applied in detector fabrication and in low-noise electronics, producing pixel arrays with high energy resolution, high spatial resolution, and high gamma stopping efficiency. Measurements of the energy resolution capability have been made on a 19-element prototypical array. Pixel energy resolutions of 2.98% fwhm and 3.88% fwhm were obtained at 59 keV (241-Am) and 140 keV (99m-Tc), respectively. The pixel spectra for a 14-element section of the data are shown together with the composition of the overlapped individual pixel spectra. These techniques are now being applied to fabricate much larger arrays with thousands of pixels. Extension of these principles to imaging scenarios involving gamma-ray energies up to several hundred keV is also possible. This would enable imaging of the 208 keV and 375-414 keV 239-Pu and 240-Pu structures, as well as the 186 keV line of 235-U.

  2. Satellite image maps of Pakistan

    USGS Publications Warehouse

    ,

    1997-01-01

    Georeferenced Landsat satellite image maps of Pakistan are now being made available for purchase from the U.S. Geological Survey (USGS). The first maps to be released are a series of Multi-Spectral Scanner (MSS) color image maps compiled from Landsat scenes taken before 1979. The Pakistan image maps were originally developed by USGS as an aid for geologic and general terrain mapping in support of the Coal Resource Exploration and Development Program in Pakistan (COALREAP). COALREAP, a cooperative program between the USGS, the United States Agency for International Development, and the Geological Survey of Pakistan, was in effect from 1985 through 1994. The Pakistan MSS image maps (bands 1, 2, and 4) are available as a full-country mosaic of 72 Landsat scenes at a scale of 1:2,000,000, and in 7 regional sheets covering various portions of the entire country at a scale of 1:500,000. The scenes used to compile the maps were selected from imagery available at the EROS Data Center (EDC), Sioux Falls, S. Dak. Where possible, preference was given to cloud-free and snow-free scenes that displayed similar stages of seasonal vegetation development. The data for the MSS scenes were resampled from the original 80-meter resolution to 50-meter picture elements (pixels) and digitally transformed to a geometrically corrected Lambert conformal conic projection. The cubic convolution algorithm was used during rotation and resampling. The 50-meter pixel size allows such data to be imaged at a scale of 1:250,000 without degradation; for cost and convenience considerations, however, the maps were printed at 1:500,000 scale. The seven regional sheets have been named according to the main province or area covered. The 50-meter data were averaged to 150-meter pixels to generate the country image on a single sheet at 1:2,000,000 scale.
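
    The 50-meter-to-150-meter step amounts to replacing each 3×3 block of pixels with its mean. A minimal NumPy sketch of such block averaging (the function name is illustrative, and the input is simply cropped to a whole number of blocks):

```python
import numpy as np

def block_average(pixels, factor=3):
    """Average square blocks of pixels, e.g., 50-m pixels into 150-m pixels (factor=3)."""
    rows, cols = pixels.shape
    rows -= rows % factor                      # crop to a whole number of blocks
    cols -= cols % factor
    trimmed = pixels[:rows, :cols].astype(float)
    return trimmed.reshape(rows // factor, factor, cols // factor, factor).mean(axis=(1, 3))
```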

  3. Reull Valles in Approximately Natural Color

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Reull Valles, a conspicuous southeast-trending fretted channel, dissects wall deposits of the large Hellas impact basin. The center of the picture is at latitude 42 degrees S., longitude 258 degrees. Fretted channels are wide, flat-floored channels with steep walls, which may be runoff channels that have been modified and enlarged by mass wasting. Many nearby hills and mountains are surrounded by lobate debris aprons, which may have formed by slow creep of rock deposits aided by the presence of near-surface ice. Layering is exposed in the channel and crater walls. The color variations of the surface are very bland in this region; most of the variations seen in the enhanced-color version (PIA00153) are due to atmospheric scattering. Viking Orbiter Picture Numbers 126A08 (violet), 126A16 (green), and 126A24 (red) at 157 m/pixel resolution. Picture width is 161 km. North is 112 degrees clockwise from top.

  4. Mariner 6 and 7 picture analysis

    NASA Technical Reports Server (NTRS)

    Leighton, R. B.

    1975-01-01

    Analysis of Mariner 6 and 7 far-encounter (FE) pictures is discussed. The purpose of the studies was to devise ways to combine digital data from the full set of FE pictures so as to improve surface resolution, distinguish clouds and haze patches from permanent surface topographic markings, deduce improved values for radius, oblateness, and spin-axis orientation, and produce a composite photographic map of Mars. Attempts to measure and correct camera distortions, locate each image in the frame, and convert image coordinates to martian surface coordinates were highly successful; residual uncertainties in location were considerably less than one pixel. However, analysis of the data to improve the radius, figure, and axial tilt and to produce a composite map was curtailed because of the superior data provided by Mariner 9. The data, programs, and intermediate results are still available (1976), and the project could be resumed with little difficulty.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, Julian; Tate, Mark W.; Shanks, Katherine S.

    Pixel Array Detectors (PADs) consist of an x-ray sensor layer bonded pixel-by-pixel to an underlying readout chip. This approach allows both the sensor and the custom pixel electronics to be tailored independently to best match the x-ray imaging requirements. Here we describe the hybridization of CdTe sensors to two different charge-integrating readout chips, the Keck PAD and the Mixed-Mode PAD (MM-PAD), both developed previously in our laboratory. The charge-integrating architecture of each of these PADs extends the instantaneous counting rate by many orders of magnitude beyond that obtainable with photon counting architectures. The Keck PAD chip consists of rapid, 8-frame, in-pixel storage elements with framing periods <150 ns. The second detector, the MM-PAD, has an extended dynamic range by utilizing an in-pixel overflow counter coupled with charge removal circuitry activated at each overflow. This allows the recording of signals from the single-photon level to tens of millions of x-rays/pixel/frame while framing at 1 kHz. Both detector chips consist of a 128×128 pixel array with (150 µm)² pixels.

  6. An evaluation of the influence of modifications of selected face elements on the reliability of composite drawings.

    PubMed

    Lewandowski, Z

    2004-01-01

    The modification of various face elements in a composite drawing influences its reliability in relation to the corresponding photograph. Changes in various face elements are the cause of this decrease in similarity, when examined by both sexes. The results of an initial test suggest: regardless of the observing subject's sex, the highest rated picture (regarded as the most similar one) was the original picture. The face with a shortened nose height (n-sn) was recognised as the least similar to the original. In the second test, this picture obtained the lowest number of points, irrespective of the examining subjects' sex. In the examining group of females, the picture with changed bigonial breadth (go-go) was rated low. In the group of males the picture with a shortened interocular breadth (en-en) appeared poorly reliable. In the case of females, the likeness of the composite drawing to the photograph is least affected by shortening of mouth breadth (ch-ch), whereas in males, by the decrease in nose breadth (al-al).

  7. Sedimentary Rocks of Aram Chaos

    NASA Technical Reports Server (NTRS)

    2004-01-01

    10 May 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows outcroppings of light-toned, layered, sedimentary rock within Aram Chaos, an ancient, partly-filled impact crater located near 3.2oN, 19.9oW. This 1.5 meters (5 feet) per pixel picture is illuminated by sunlight from the left and covers an area about 3 km (1.9 mi) across.

  8. Selecting "App"ealing and "App"ropriate Book Apps for Beginning Readers

    ERIC Educational Resources Information Center

    Cahill, Maria; McGill-Franzen, Anne

    2013-01-01

    Beginning with a brief rationale for selecting quality digital picture book apps for beginning readers, the authors describe the elements of digital picture books and provide a brief review of the instructional benefits of digital picture book use for beginning readers. They then present a detailed taxonomy for selecting quality picture book apps.…

  9. CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.

    PubMed

    Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V

    2010-12-01

    We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520 elements) array of active pixel sensors and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target analyte responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at 1 kHz sampling frequency driven at 5 V. The developed prototype system demonstrates a low cost and miniaturized luminescence multisensor system.

  10. The Solid State Image Sensor's Contribution To The Development Of Silicon Technology

    NASA Astrophysics Data System (ADS)

    Weckler, Gene P.

    1985-12-01

    Until recently, a solid-state image sensor with full television resolution was a dream. However, the dream of a solid state image sensor has been a driving force in the development of silicon technology for more than twenty-five years. There are probably many in the main stream of semiconductor technology who would argue with this; however, the solid state image sensor was conceived years before the invention of the semiconductor RAM or the microprocessor (i.e., even before the invention of the integrated circuit). No other potential application envisioned at that time required such complexity. How could anyone have ever hoped in 1960 to make a semiconductor chip containing half-a-million picture elements, capable of resolving eight to twelve bits of information, and each capable of readout rates in the tens of mega-pixels per second? As early as 1960, arrays of p-n junctions were being investigated as the optical targets in vidicon tubes, replacing the photoconductive targets. It took silicon technology several years to catch up with these dreamers.

  11. Roughness effects on thermal-infrared emissivities estimated from remotely sensed images

    NASA Astrophysics Data System (ADS)

    Mushkin, Amit; Danilina, Iryna; Gillespie, Alan R.; Balick, Lee K.; McCabe, Matthew F.

    2007-10-01

    Multispectral thermal-infrared images from the Mauna Loa caldera in Hawaii, USA are examined to study the effects of surface roughness on remotely retrieved emissivities. We find up to a 3% decrease in spectral contrast in ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) 90-m/pixel emissivities due to sub-pixel surface roughness variations on the caldera floor. A similar decrease in spectral contrast of emissivities extracted from MASTER (MODIS/ASTER Airborne Simulator) ~12.5-m/pixel data can be described as a function of increasing surface roughness, which was measured remotely from ASTER 15-m/pixel stereo images. The ratio between ASTER stereo images provides a measure of sub-pixel surface-roughness variations across the scene. These independent roughness estimates complement a radiosity model designed to quantify the unresolved effects of multiple scattering and differential solar heating due to sub-pixel roughness elements and to compensate for both sub-pixel temperature dispersion and cavity radiation on TIR measurements.

  12. Theory and applications of structured light single pixel imaging

    NASA Astrophysics Data System (ADS)

    Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.

    2018-02-01

    Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, the methods share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically-motivated reconstruction algorithms, however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more-advanced methods of image acquisition and reconstruction. The proposed frame theoretic framework for single-pixel imaging results in improved noise robustness, decrease in acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single element detector can be developed to realize the full potential associated with single-pixel imaging.
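
    The basic single-pixel measurement model underlying such analyses is a set of inner products between the scene and structured illumination patterns, followed by a linear reconstruction. The Python sketch below uses a full Hadamard pattern set on a small scene purely for illustration; it does not reproduce the frame-theoretic machinery of the paper.

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_acquire(scene, patterns):
    """Simulate single-pixel measurements: one inner product per illumination pattern."""
    return patterns @ scene.ravel()

def single_pixel_reconstruct(measurements, patterns, shape):
    """Least-squares reconstruction; for an orthogonal pattern set (e.g., Hadamard)
    this reduces to a weighted sum of the patterns."""
    recon, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
    return recon.reshape(shape)

# Example: 8x8 scene measured with all 64 Hadamard patterns (rows of +/-1 values;
# physically these would be realized as pairs of complementary binary patterns).
n = 8
patterns = hadamard(n * n).astype(float)
scene = np.random.rand(n, n)
y = single_pixel_acquire(scene, patterns)
recovered = single_pixel_reconstruct(y, patterns, (n, n))
```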

  13. Picture Books Are for Little Kids, Aren't They? Using Picture Books with Adolescent Readers to Enhance Literacy Instruction

    ERIC Educational Resources Information Center

    Senokossoff, Gwyn W.

    2013-01-01

    This article discusses the benefits of using picture books with adolescent readers, describes strategies that can be taught with picture books, and provides examples of books the author has used. Some of the topics discussed include: reading comprehension, visual literacy, interactive read-aloud with facilitative talk, literary elements, and…

  14. Resolving the percentage of component terrains within single resolution elements

    NASA Technical Reports Server (NTRS)

    Marsh, S. E.; Switzer, P.; Kowalik, W. S.; Lyon, R. J. P.

    1980-01-01

    An approximate maximum likelihood technique employing a widely available discriminant analysis program is discussed that has been developed for resolving the percentage of component terrains within single resolution elements. The method uses all four channels of Landsat data simultaneously and does not require prior knowledge of the percentage of components in mixed pixels. It was tested in five cases that were chosen to represent mixtures of outcrop, soil and vegetation which would typically be encountered in geologic studies with Landsat data. For all five cases, the method proved to be superior to single band weighted average and linear regression techniques and permitted an estimate of the total area occupied by component terrains to within plus or minus 6% of the true area covered. Its major drawback is a consistent overestimation of the pixel component percent of the darker materials (vegetation) and an underestimation of the pixel component percent of the brighter materials (sand).
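
    The report's approximate maximum-likelihood, discriminant-analysis procedure is not reproduced here, but the underlying idea of sub-pixel terrain fractions can be illustrated with a simple linear-mixing least-squares sketch; the endmember signatures and the clip-and-renormalize step are assumptions for illustration.

```python
import numpy as np

def unmix_pixel(pixel, endmembers):
    """Estimate component-terrain fractions in a mixed pixel by least squares.

    pixel: spectral vector of shape (bands,);
    endmembers: pure-terrain signatures of shape (n_components, bands).
    Fractions are solved unconstrained, then clipped and renormalized to sum to 1."""
    coeffs, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
    coeffs = np.clip(coeffs, 0.0, None)
    total = coeffs.sum()
    return coeffs / total if total > 0 else coeffs
```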

  15. Proposed Standard For Variable Format Picture Processing And A Codec Approach To Match Diverse Imaging Devices

    NASA Astrophysics Data System (ADS)

    Wendler, Th.; Meyer-Ebrecht, D.

    1982-01-01

    Picture archiving and communication systems, especially those for medical applications, will offer the potential to integrate the various image sources of different nature. A major problem, however, is the incompatibility of the different matrix sizes and data formats. This may be overcome by a novel hierarchical coding process, which could lead to a unified picture format standard. A picture coding scheme is described which decomposes a given (2^n)×(2^n) picture matrix into a basic (2^m)×(2^m) coarse information matrix (representing lower spatial frequencies) and a set of n-m detail matrices containing information of increasing spatial resolution. Thus, the picture is described by an ordered set of data blocks rather than by a full resolution matrix of pixels. The blocks of data are transferred and stored using data formats which have to be standardized throughout the system. Picture sources which produce pictures of different resolution will provide the coarse-matrix data block and additionally only those detail matrices that correspond to their required resolution. Correspondingly, only those detail-matrix blocks need to be retrieved from the picture base that are actually required for softcopy or hardcopy output. Thus, picture sources and retrieval terminals of diverse nature, and retrieval processes for diverse purposes, are easily made compatible. Furthermore, this approach will yield an economic use of storage space and transmission capacity: in contrast to fixed formats, redundant data blocks are always skipped. The user will get a coarse representation even of a high-resolution picture almost instantaneously, with gradually added details, and may abort transmission at any desired detail level. The coding scheme applies the S-transform, which is a simple add/subtract algorithm basically derived from the Hadamard Transform. Thus, an additional data compression can easily be achieved, especially for high-resolution pictures, by applying appropriate non-linear and/or adaptive quantizing.
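
    The S-transform on a pair of pixel values is just an integer average (coarse term) and a difference (detail term), from which the pair can be recovered exactly. A one-level 2D decomposition in the spirit described above can be sketched as follows, assuming power-of-two image dimensions; the exact hierarchy and block format of the proposed standard are not reproduced.

```python
import numpy as np

def s_transform_pair(a, b):
    """Integer S-transform of a pair: coarse = floor((a+b)/2), detail = a - b."""
    return (a + b) // 2, a - b

def s_transform_level(img):
    """One decomposition level: a half-size coarse matrix plus three detail matrices.

    The coarse matrix can be decomposed again to build the full hierarchy."""
    img = img.astype(np.int64)
    # Horizontal pass on column pairs.
    lo_h, hi_h = s_transform_pair(img[:, 0::2], img[:, 1::2])
    # Vertical pass on row pairs.
    coarse, detail_v = s_transform_pair(lo_h[0::2, :], lo_h[1::2, :])
    detail_h, detail_d = s_transform_pair(hi_h[0::2, :], hi_h[1::2, :])
    return coarse, (detail_h, detail_v, detail_d)
```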

  16. Planet Mercury

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Mariner 10's first image of Mercury acquired on March 24, 1974. During its flight, Mariner 10's trajectory brought it behind the lighted hemisphere of Mercury, where this image was taken, in order to acquire important measurements with other instruments.

    This picture was acquired from a distance of 3,340,000 miles (5,380,000 km) from the surface of Mercury. The diameter of Mercury (3,031 miles; 4,878 km) is about 1/3 that of Earth.

    Images of Mercury were acquired in two steps, an inbound leg (images acquired before passing into Mercury's shadow) and an outbound leg (after exiting from Mercury's shadow). More than 2300 useful images of Mercury were taken, both moderate resolution (3-20 km/pixel) color and high resolution (better than 1 km/pixel) black and white coverage.

  17. Flexible AMOLED backplane using pentacene TFT

    NASA Astrophysics Data System (ADS)

    Song, Chung Kun; Ryu, Gi Seong

    2005-01-01

    In this paper we fabricated a panel consisting of an array of organic TFTs (OTFT) and organic LEDs (OLED) in order to demonstrate the possible application of OTFTs to flexible active matrix OLED (AMOLED). The panel was composed of 64×64 pixels on a 4 inch PET substrate in which each pixel had one OTFT integrated with one green OLED. The panel successfully displayed letters and pictures by emitting green light with a luminance of 20 cd/m² at 6 V, which was controlled by the gate voltage of the OTFT. In addition we also developed fabrication processes for pentacene TFTs with a PVP gate on a PET substrate. The OTFTs produced a maximum mobility of 1.2 cm²/V·s and an on/off current ratio of 2×10⁶.

  18. A Dedicated Microprocessor Controller for a Bound Document Scanner.

    DTIC Science & Technology

    1984-06-01

    focused onto the CCD, which converts the image into 2048 pixels. After the pixel data are processed by the scanner hardware, they are sent to the display ... data in real time after the data on each of the 2048 pixel elements has been transferred out of the device. Display-control commands and ... [the remainder of this record is OCR residue from a figure; only the caption "Fig. 4.9 2716 EPROM Block Diagram and Pin Assignment" is recoverable]

  19. Advanced Code-Division Multiplexers for Superconducting Detector Arrays

    NASA Astrophysics Data System (ADS)

    Irwin, K. D.; Cho, H. M.; Doriese, W. B.; Fowler, J. W.; Hilton, G. C.; Niemack, M. D.; Reintsema, C. D.; Schmidt, D. R.; Ullom, J. N.; Vale, L. R.

    2012-06-01

    Multiplexers based on the modulation of superconducting quantum interference devices are now regularly used in multi-kilopixel arrays of superconducting detectors for astrophysics, cosmology, and materials analysis. Over the next decade, much larger arrays will be needed. These larger arrays require new modulation techniques and compact multiplexer elements that fit within each pixel. We present a new in-focal-plane code-division multiplexer that provides multiplexing elements with the required scalability. This code-division multiplexer uses compact lithographic modulation elements that simultaneously multiplex both signal outputs and superconducting transition-edge sensor (TES) detector bias voltages. It eliminates the shunt resistor used to voltage bias TES detectors, greatly reduces power dissipation, allows different dc bias voltages for each TES, and makes all elements sufficiently compact to fit inside the detector pixel area. These in-focal plane code-division multiplexers can be combined with multi-GHz readout based on superconducting microresonators to scale to even larger arrays.

  20. Skeletonization of gray-scale images by gray weighted distance transform

    NASA Astrophysics Data System (ADS)

    Qian, Kai; Cao, Siqi; Bhattacharya, Prabir

    1997-07-01

    In pattern recognition, thinning algorithms are often a useful tool to represent a digital pattern by means of a skeletonized image, consisting of a set of one-pixel-width lines that highlight the significant features. There is interest in applying thinning directly to gray-scale images, motivated by the desire to process images characterized by meaningful information distributed over different levels of gray intensity. In this paper, a new algorithm is presented which can skeletonize both black-white and gray pictures. This algorithm is based on the gray weighted distance transformation, can be used to process gray-scale pictures whose intensities are not uniformly distributed, and preserves the topology of the original picture. The process includes a preliminary phase of investigation of the 'hollows' in the gray-scale image; these hollows are, depending on whether their depth is statistically significant, considered or not as topological constraints for the skeleton structure. This algorithm can also be executed on a parallel machine, as all the operations are executed locally. Some examples are discussed to illustrate the algorithm.

  1. Uncooled IR imager with 5-mK NEDT

    NASA Astrophysics Data System (ADS)

    Amantea, Robert; Knoedler, C. M.; Pantuso, Francis P.; Patel, Vipulkumar; Sauer, Donald J.; Tower, John R.

    1997-08-01

    The bi-material concept for room-temperature infrared imaging has the potential of reaching an NEΔT approaching the theoretical limit because of its high responsivity and low noise. The approach, which is 100% compatible with silicon IC foundry processing, utilizes a novel combination of surface micromachining and conventional integrated circuits to produce a bimaterial thermally sensitive element that controls the position of a capacitive plate coupled to the input of a low noise MOS amplifier. This approach can achieve the high sensitivity, the low weight, and the low cost necessary for equipment such as helmet mounted IR viewers and IR rifle sights. The pixel design has the following benefits: (1) an order of magnitude improvement in NEΔT due to extremely high sensitivity and low noise; (2) low cost due to 100% silicon IC compatibility; (3) high image quality and increased yield due to ability to do offset and sensitivity corrections on the imager, pixel-by-pixel; (4) no cryogenic cooler and no high vacuum processing; and (5) commercial applications such as law enforcement, home security, and transportation safety. Two designs are presented. One is a 50 micrometer pixel using silicon nitride as the thermal isolation element that can achieve 5 mK NEΔT; the other is a 29 micrometer pixel using silicon carbide that provides much higher thermal isolation and can achieve 10 mK NEΔT.

  2. Non-integer expansion embedding techniques for reversible image watermarking

    NASA Astrophysics Data System (ADS)

    Xiang, Shijun; Wang, Yi

    2015-12-01

    This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding. The rounding operation places a constraint on the predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors for embedding data into an audio or image file by expanding only the integer element of a prediction error while keeping its fractional element unchanged. The advantage of the NIPE technique is that it can bring a predictor into full play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method that estimates a pixel from its four immediate neighbors in a single pass is included in the proposed scheme. The proposed noncausal image predictor can provide better performance than Sachnev et al.'s noncausal double-set prediction method (where data prediction in two passes introduces a distortion problem, because half of the pixels are predicted from watermarked pixels). In comparison with several existing state-of-the-art works, experimental results show that the NIPE technique with the new noncausal prediction strategy can reduce the embedding distortion for the same embedding payload.
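
    For reference, the classical integer prediction-error expansion that NIPE builds on can be sketched for a single pixel as below (overflow handling and the location map are omitted); the rounding of the predicted value is exactly the step that the NIPE approach avoids.

```python
def ipe_embed(pixel, predicted, bit):
    """Classical integer prediction-error expansion for one pixel.

    pixel: original integer value; predicted: predictor output; bit: 0 or 1.
    Returns the watermarked pixel value."""
    pred = int(round(predicted))      # rounding step of the classical IPE scheme
    error = pixel - pred
    return pred + 2 * error + bit     # expand the error and append the payload bit

def ipe_extract(marked_pixel, predicted):
    """Recover the embedded bit and restore the original pixel exactly."""
    pred = int(round(predicted))
    expanded = marked_pixel - pred
    bit = expanded & 1                # payload bit is the parity of the expanded error
    error = expanded >> 1             # floor division restores the original error
    return bit, pred + error
```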

  3. CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.

    PubMed

    Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V

    2011-04-01

    In this paper, we study the effect of temperature on the operation and performance of xerogel-based sensor microarrays coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and pixel address/digital control/signal integration circuit are also implemented on-chip. The CMOS imager data are read out as a serial coded signal. The sensor system uses a light-emitting diode to excite target analyte responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9 elements) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2 sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at 100-Hz sampling frequency and a 1.8-V dc power supply.

  4. Estimation of a cover-type change matrix from error-prone data

    Treesearch

    Steen Magnussen

    2009-01-01

    Coregistration and classification errors seriously compromise per-pixel estimates of land cover change. A more robust estimation of change is proposed in which adjacent pixels are grouped into 3x3 clusters and treated as a unit of observation. A complete change matrix is recovered in a two-step process. The diagonal elements of a change matrix are recovered from...
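
    A minimal sketch of the grouping step described above (forming 3x3 pixel clusters as units of observation; the subsequent recovery of the change matrix is not reproduced here, since the abstract is truncated) might look as follows:

        import numpy as np

        def group_into_clusters(labels, k=3):
            """Group a per-pixel cover-type label map into k-by-k clusters.
            labels : 2-D integer array; dimensions assumed to be multiples of k.
            Returns an array of shape (rows//k, cols//k, k*k) holding each cluster's
            labels, from which cluster-level change tallies can be formed."""
            r, c = labels.shape
            return (labels.reshape(r // k, k, c // k, k)
                          .transpose(0, 2, 1, 3)
                          .reshape(r // k, c // k, k * k))

        # Example: a 6x6 label map becomes a 2x2 grid of 9-pixel clusters
        labels = np.arange(36).reshape(6, 6) % 4
        print(group_into_clusters(labels).shape)   # (2, 2, 9)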

  5. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy

    DOEpatents

    Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.

    2001-01-16

    Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time", which especially enhances clinical use for in vivo applications. The real-time performance is achieved because of the novel geometric model constructed for the planned treatment volume, which, in turn, allows rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume as the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer-based increments along the primary direction of movement until a position of intersection occurs, representing a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or has exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational time by several orders of magnitude.
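
    The traversal idea (stepping in integer-based increments along the primary direction until the material of the next uniform volume element differs from that of the starting element) can be sketched as follows; this is an illustrative simplification, not the patented algorithm:

        import numpy as np

        def march_to_interface(materials, start, direction, max_steps=10_000):
            """Step along a track in unit (integer-based) increments until the next
            volume element's material differs from the starting element's material.
            materials : 3-D array of material codes, one per uniform volume element
            start     : (i, j, k) index of the starting element
            direction : primary direction of movement, e.g. (0, 0, 1)
            Returns the position of intersection, or None if the particle exits."""
            pos = np.asarray(start)
            step = np.asarray(direction)
            start_material = materials[tuple(pos)]
            for _ in range(max_steps):
                pos = pos + step
                if np.any(pos < 0) or np.any(pos >= materials.shape):
                    return None                       # particle exited the geometric model
                if materials[tuple(pos)] != start_material:
                    return tuple(int(v) for v in pos) # position of intersection
            return None

        # Example: a small phantom whose material changes at k = 5
        phantom = np.zeros((10, 10, 10), dtype=int)
        phantom[:, :, 5:] = 1
        print(march_to_interface(phantom, (4, 4, 0), (0, 0, 1)))   # (4, 4, 5)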

  6. Polarized-pixel performance model for DoFP polarimeter

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Liu, Haizheng; Liu, Li; Zhao, Yaohong; Zhang, Junchao

    2018-06-01

    A division-of-focal-plane (DoFP) polarimeter is manufactured by placing a micropolarizer array directly onto the focal plane array (FPA) of a detector. Each element of the DoFP polarimeter is a polarized pixel. This paper proposes a performance model for a polarized pixel. The proposed model characterizes the optical and electronic performance of a polarized pixel by three parameters: the major polarization responsivity, the minor polarization responsivity, and the polarization orientation. Each parameter corresponds to an intuitive physical feature of a polarized pixel. This paper further extends the model to calibrate polarization images from a DoFP polarimeter. The calibration is evaluated quantitatively with a developed DoFP polarimeter under varying illumination intensity and angle of linear polarization. The experiment shows that our model reduces the nonuniformity to 6.79% of that of uncalibrated DoLP (degree of linear polarization) images and significantly improves the visual effect of the DoLP images.
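
    The abstract does not give the functional form of the three-parameter model; one common (assumed) choice consistent with a major responsivity, a minor responsivity and a polarization orientation is a Malus-law response, sketched below:

        import numpy as np

        def polarized_pixel_response(aolp, r_major, r_minor, orientation):
            """Assumed Malus-law response of a polarized pixel to fully polarized light.
            aolp        : angle of linear polarization of the incident light (rad)
            r_major     : major polarization responsivity (along the pixel's axis)
            r_minor     : minor polarization responsivity (orthogonal leakage)
            orientation : polarization orientation of the pixel's micropolarizer (rad)"""
            delta = aolp - orientation
            return r_major * np.cos(delta) ** 2 + r_minor * np.sin(delta) ** 2

        # Example: a nominally 0-degree pixel viewing light polarized at 30 degrees
        print(polarized_pixel_response(np.deg2rad(30), 0.95, 0.05, 0.0))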

  7. Characterization of pixel sensor designed in 180 nm SOI CMOS technology

    NASA Astrophysics Data System (ADS)

    Benka, T.; Havranek, M.; Hejtmanek, M.; Jakovenko, J.; Janoska, Z.; Marcisovska, M.; Marcisovsky, M.; Neue, G.; Tomasek, L.; Vrba, V.

    2018-01-01

    A new type of X-ray imaging Monolithic Active Pixel Sensor (MAPS), X-CHIP-02, was developed in a 180 nm deep-submicron Silicon On Insulator (SOI) CMOS commercial technology. Two pixel matrices, which differ in pixel pitch (50 μm and 100 μm), were integrated into the prototype chip. The X-CHIP-02 contains several test structures which are useful for characterization of individual blocks. The sensitive part of the pixel, integrated in the handle wafer, is one of the key structures designed for testing. The purpose of this structure is to determine the capacitance of the sensitive part (the diode of the MAPS pixel). The measured capacitance is 2.9 fF for the 50 μm pixel pitch and 4.8 fF for the 100 μm pixel pitch at -100 V (the default operational voltage). This structure was also used to measure the IV characteristics of the sensitive diode. In this work, we report on a circuit designed for precise determination of the sensor capacitance and the IV characteristics of both pixel types with respect to X-ray irradiation. The motivation for measuring the sensor capacitance was its importance for the design of front-end amplifier circuits. The design of the pixel elements, as well as the circuit simulation and laboratory measurement techniques, are described. The experimental results are of great importance for further development of MAPS sensors in this technology.

  8. Lifting the Veil of Dust from NGC 0959: The Importance of a Pixel-based Two-dimensional Extinction Correction

    NASA Astrophysics Data System (ADS)

    Tamura, K.; Jansen, R. A.; Eskridge, P. B.; Cohen, S. H.; Windhorst, R. A.

    2010-06-01

    We present the results of a study of the late-type spiral galaxy NGC 0959, before and after application of the pixel-based dust extinction correction described in Tamura et al. (Paper I). Galaxy Evolution Explorer far-UV and near-UV, ground-based Vatican Advanced Technology Telescope UBVR, and Spitzer/Infrared Array Camera 3.6, 4.5, 5.8, and 8.0 μm images are studied through pixel color-magnitude diagrams and pixel color-color diagrams (pCCDs). We define groups of pixels based on their distribution in a pCCD of (B - 3.6 μm) versus (FUV - U) colors after extinction correction. In the same pCCD, we trace their locations before the extinction correction was applied. This shows that selecting pixel groups is not meaningful when using colors uncorrected for dust. We also trace the distribution of the pixel groups on a pixel coordinate map of the galaxy. We find that the pixel-based (two-dimensional) extinction correction is crucial for revealing the spatial variations in the dominant stellar population, averaged over each resolution element. Different types and mixtures of stellar populations, and galaxy structures such as a previously unrecognized bar, become readily discernible in the extinction-corrected pCCD and as coherent spatial structures in the pixel coordinate map.

  9. Solar System Portrait - 60 Frame Mosaic

    NASA Image and Video Library

    1996-09-13

    The cameras of Voyager 1 on Feb. 14, 1990, pointed back toward the sun and took a series of pictures of the sun and the planets, making the first ever portrait of our solar system as seen from the outside. In the course of taking this mosaic consisting of a total of 60 frames, Voyager 1 made several images of the inner solar system from a distance of approximately 4 billion miles and about 32 degrees above the ecliptic plane. Thirty-nine wide angle frames link together six of the planets of our solar system in this mosaic. Outermost Neptune is 30 times further from the sun than Earth. Our sun is seen as the bright object in the center of the circle of frames. The wide-angle image of the sun was taken with the camera's darkest filter (a methane absorption band) and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large as seen from Voyager, only about one-fortieth of the diameter as seen from Earth, but is still almost 8 million times brighter than the brightest star in Earth's sky, Sirius. The result of this great brightness is an image with multiple reflections from the optics in the camera. Wide-angle images surrounding the sun also show many artifacts attributable to scattered light in the optics. These were taken through the clear filter with one second exposures. The insets show the planets magnified many times. Narrow-angle images of Earth, Venus, Jupiter, Saturn, Uranus and Neptune were acquired as the spacecraft built the wide-angle mosaic. Jupiter is larger than a narrow-angle pixel and is clearly resolved, as is Saturn with its rings. Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15 second) exposures. From Voyager's great distance Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. http://photojournal.jpl.nasa.gov/catalog/PIA00451

  10. Solar System Portrait - 60 Frame Mosaic

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The cameras of Voyager 1 on Feb. 14, 1990, pointed back toward the sun and took a series of pictures of the sun and the planets, making the first ever 'portrait' of our solar system as seen from the outside. In the course of taking this mosaic consisting of a total of 60 frames, Voyager 1 made several images of the inner solar system from a distance of approximately 4 billion miles and about 32 degrees above the ecliptic plane. Thirty-nine wide angle frames link together six of the planets of our solar system in this mosaic. Outermost Neptune is 30 times further from the sun than Earth. Our sun is seen as the bright object in the center of the circle of frames. The wide-angle image of the sun was taken with the camera's darkest filter (a methane absorption band) and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large as seen from Voyager, only about one-fortieth of the diameter as seen from Earth, but is still almost 8 million times brighter than the brightest star in Earth's sky, Sirius. The result of this great brightness is an image with multiple reflections from the optics in the camera. Wide-angle images surrounding the sun also show many artifacts attributable to scattered light in the optics. These were taken through the clear filter with one second exposures. The insets show the planets magnified many times. Narrow-angle images of Earth, Venus, Jupiter, Saturn, Uranus and Neptune were acquired as the spacecraft built the wide-angle mosaic. Jupiter is larger than a narrow-angle pixel and is clearly resolved, as is Saturn with its rings. Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15 second) exposures. From Voyager's great distance Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun.

  11. Simulation and Measurement of Absorbed Dose from 137 Cs Gammas Using a Si Timepix Detector

    NASA Technical Reports Server (NTRS)

    Stoffle, Nicholas; Pinsky, Lawrence; Empl, Anton; Semones, Edward

    2011-01-01

    The Timepix is a hybrid pixel detector readout chip with over 65k independent pixel elements. Each pixel contains its own circuitry for charge collection, counting logic, and readout. When coupled with a silicon detector layer, the Timepix chip is capable of measuring the charge, and thus the energy, deposited in the silicon. Measurements using a NIST-traceable 137Cs gamma source have been made at Johnson Space Center using such a Si Timepix detector, and these data are compared to simulations of energy deposition in the Si layer carried out using FLUKA.

  12. Planet Mercury

    NASA Image and Video Library

    1999-06-12

    The first image of Mercury acquired by NASA's Mariner 10 in 1974. During its flight, Mariner 10's trajectory brought it behind the lighted hemisphere of Mercury, where this image was taken, in order to acquire important measurements with other instruments. This picture was acquired from a distance of 3,340,000 miles (5,380,000 km) from the surface of Mercury. The diameter of Mercury (3,031 miles; 4,878 km) is about 1/3 that of Earth. Images of Mercury were acquired in two steps, an inbound leg (images acquired before passing into Mercury's shadow) and an outbound leg (after exiting from Mercury's shadow). More than 2300 useful images of Mercury were taken, both moderate resolution (3-20 km/pixel) color and high resolution (better than 1 km/pixel) black and white coverage. http://photojournal.jpl.nasa.gov/catalog/PIA00437

  13. Direct tests of a pixelated microchannel plate as the active element of a shower maximum detector

    DOE PAGES

    Apresyan, A.; Los, S.; Pena, C.; ...

    2016-05-07

    One possibility to make a fast and radiation resistant shower maximum detector is to use a secondary emitter as an active element. We report our studies of microchannel plate photomultipliers (MCPs) as the active element of a shower-maximum detector. We present test beam results obtained using Photonis XP85011 to detect secondary particles of an electromagnetic shower. We focus on the use of the multiple pixels on the Photonis MCP in order to find a transverse two-dimensional shower distribution. A spatial resolution of 0.8 mm was obtained with an 8 GeV electron beam. As a result, a method for measuring the arrival time resolution for electromagnetic showers is presented, and we show that time resolution better than 40 ps can be achieved.

  14. Direct tests of a pixelated microchannel plate as the active element of a shower maximum detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apresyan, A.; Los, S.; Pena, C.

    One possibility to make a fast and radiation resistant shower maximum detector is to use a secondary emitter as an active element. We report our studies of microchannel plate photomultipliers (MCPs) as the active element of a shower-maximum detector. We present test beam results obtained using Photonis XP85011 to detect secondary particles of an electromagnetic shower. We focus on the use of the multiple pixels on the Photonis MCP in order to find a transverse two-dimensional shower distribution. A spatial resolution of 0.8 mm was obtained with an 8 GeV electron beam. As a result, a method for measuring the arrival time resolution for electromagnetic showers is presented, and we show that time resolution better than 40 ps can be achieved.

  15. Magnified pseudo-elemental map of atomic column obtained by Moiré method in scanning transmission electron microscopy.

    PubMed

    Kondo, Yukihito; Okunishi, Eiji

    2014-10-01

    The Moiré method in scanning transmission electron microscopy allows a magnified two-dimensional atomic-column elemental map to be observed at higher pixel resolution and with a lower electron dose than conventional atomic-column mapping. The magnification of the map is determined by the ratio between the pixel size and the lattice spacing. With proper ratios for the x and y directions, we could observe magnified elemental maps homothetic to the atomic arrangement in the sample of SrTiO3 [0 0 1]. The map showed peaks at all expected oxygen sites in SrTiO3 [0 0 1].

  16. Indium antimonide large-format detector arrays

    NASA Astrophysics Data System (ADS)

    Davis, Mike; Greiner, Mark

    2011-06-01

    Large format infrared imaging sensors are required to achieve simultaneously high resolution and wide field of view image data. Infrared sensors are generally required to be cooled from room temperature to cryogenic temperatures in less than 10 min, thousands of times during their lifetime. The challenge is to remove mechanical stress, which is due to different materials with different coefficients of expansion, over a very wide temperature range while at the same time providing high-sensitivity, high-resolution image data. These challenges are met by developing a hybrid in which the indium antimonide detector elements (pixels) are unconnected islands that essentially float on a silicon substrate and form a near perfect match to the silicon read-out circuit. Since the pixels are unconnected and isolated from each other, the array is reticulated. This paper shows that the front-side-illuminated, reticulated-element indium antimonide focal plane developed at L-3 Cincinnati Electronics is robust, approaches the background-limited sensitivity, and provides the resolution expected of a reticulated pixel array.

  17. Single Particle Damage Events in Candidate Star Camera Sensors

    NASA Technical Reports Server (NTRS)

    Marshall, Paul; Marshall, Cheryl; Polidan, Elizabeth; Wacyznski, Augustyn; Johnson, Scott

    2005-01-01

    Si charge coupled devices (CCDs) are currently the preeminent detector in star cameras as well as in the near-ultraviolet (UV) to visible wavelength region for astronomical observations in space and in earth-observing space missions. Unfortunately, the performance of CCDs is permanently degraded by total ionizing dose (TID) and displacement damage effects. TID produces threshold voltage shifts on the CCD gates, and displacement damage reduces the charge transfer efficiency (CTE), increases the dark current, produces dark current nonuniformities and creates random telegraph noise in individual pixels. In addition to these long term effects, cosmic ray and trapped proton transients also interfere with device operation on orbit. In the present paper, we investigate the dark current behavior of CCDs - in particular the formation and annealing of hot pixels. Such pixels degrade the ability of a CCD to perform science and can also present problems for star camera functions (especially if their numbers are not correctly anticipated). To date, most dark current radiation studies have been performed by irradiating the CCDs at room temperature, but this can result in a significantly optimistic picture of the hot pixel count. We know from the Hubble Space Telescope (HST) that high dark current pixels (so-called hot pixels or hot spikes) accumulate as a function of time on orbit. For example, the HST Advanced Camera for Surveys/Wide Field Camera instrument performs monthly anneals, despite the loss of observational time, in order to partially anneal the hot pixels. Note that the fact that a significant reduction in hot pixel populations occurs for room temperature anneals is not presently understood, since none of the commonly expected defects in Si (e.g. divacancy, E center, and A-center) anneal at such a low temperature. An HST Wide Field Camera 3 (WFC3) CCD manufactured by E2V was irradiated while operating at -83C and the dark current was studied as a function of temperature while the CCD was warmed to a sequence of temperatures up to a maximum of +30C. The device was then cooled back down to -83C and re-measured. Hot pixel populations were tracked during the warm-up and cool-down. Hot pixel annealing began below 40C and the anneal process was largely completed before the detector reached +30C. There was no apparent sharp temperature dependence in the annealing. Although a large fraction of the hot pixels fell below the threshold to be counted as a hot pixel, they nevertheless remained warmer than the remaining population. The details of the mechanism for the formation and annealing of hot pixels are not presently understood, but it appears likely that hot pixels are associated with displacement damage occurring in high electric field regions.

  18. Shape in Picture: Mathematical Description of Shape in Grey-Level Images

    DTIC Science & Technology

    1992-09-11

    representation is scale-space, derived from the linear isotropic diffusion equation; recently other types of equations have been considered. Multiscale... recognition of dimensions in the general case of an arbitrary denominator is similar to that just explained. 3 Linear Inequalities in the Two-Dimensional... solid region containing all pixels of the space whose coordinates satisfy a linear inequality.

  19. The DEPFET Sensor-Amplifier Structure: A Method to Beat 1/f Noise and Reach Sub-Electron Noise in Pixel Detectors

    PubMed Central

    Lutz, Gerhard; Porro, Matteo; Aschauer, Stefan; Wölfel, Stefan; Strüder, Lothar

    2016-01-01

    Depleted field effect transistors (DEPFET) are used to achieve very low noise signal charge readout with sub-electron measurement precision. This is accomplished by repeatedly reading an identical charge, thereby suppressing not only the white serial noise but also the usually constant 1/f noise. The repetitive non-destructive readout (RNDR) DEPFET is an ideal central element for an active pixel sensor (APS) pixel. The theory has been derived thoroughly and results have been verified on RNDR-DEPFET prototypes. A charge measurement precision of 0.18 electrons has been achieved. The device is well-suited for spectroscopic X-ray imaging and for optical photon counting in pixel sensors, even at high photon numbers in the same cell. PMID:27136549
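
    The noise benefit of repetitive non-destructive readout comes from averaging many statistically independent reads of the same, unchanged signal charge, so white readout noise falls roughly as 1/sqrt(N). A numeric illustration with made-up values (the device's actual single-read noise and number of reads are not taken from the abstract):

        import numpy as np

        rng = np.random.default_rng(0)

        def rndr_estimate(true_charge_e, single_read_noise_e, n_reads):
            """Average n_reads non-destructive readings of the same signal charge;
            white readout noise averages down roughly as 1/sqrt(n_reads)."""
            reads = true_charge_e + rng.normal(0.0, single_read_noise_e, size=n_reads)
            return reads.mean()

        # Illustrative numbers only: 2.5 e- single-read noise, 200 reads -> ~0.18 e- r.m.s.
        estimates = [rndr_estimate(10.0, 2.5, 200) for _ in range(2000)]
        print(np.std(estimates))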

  20. Inner-product array processor for retrieval of stored images represented by bipolar binary (+1,-1) pixels using partial input trinary pixels represented by (+1,0,-1)

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Awwal, Abdul A. S. (Inventor); Karim, Mohammad A. (Inventor)

    1993-01-01

    An inner-product array processor is provided with thresholding of the inner product during each iteration to make more significant the inner product employed in estimating a vector to be used as the input vector for the next iteration. While stored vectors and estimated vectors are represented in bipolar binary (1,-1), only those elements of an initial partial input vector that are believed to be common with those of a stored vector are represented in bipolar binary; the remaining elements of a partial input vector are set to 0. This mode of representation, in which the known elements of a partial input vector are in bipolar binary form and the remaining elements are set equal to 0, is referred to as trinary representation. The initial inner products corresponding to the partial input vector will then be equal to the number of known elements. Inner-product thresholding is applied to accelerate convergence and to avoid convergence to a negative input product.
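
    A minimal sketch of one recall iteration in this trinary scheme is given below; the specific thresholding rule (suppressing inner products below a cut before forming the next estimate) is an assumption for illustration:

        import numpy as np

        def iterate_once(stored, x, threshold):
            """One iteration of inner-product associative recall.
            stored    : (m, n) matrix of stored bipolar (+1/-1) vectors
            x         : length-n trinary input; known elements are +1/-1, unknown are 0
            threshold : inner products below this value are set to 0 before the update
                        (illustrative thresholding rule)"""
            ip = stored @ x                          # inner products with all stored vectors
            ip = np.where(ip >= threshold, ip, 0)    # keep only significant inner products
            estimate = stored.T @ ip                 # weighted sum of stored vectors
            return np.where(estimate >= 0, 1, -1)    # next bipolar estimate

        stored = np.array([[ 1, -1,  1, -1,  1, -1],
                           [ 1,  1, -1, -1,  1,  1]])
        partial = np.array([1, -1, 1, 0, 0, 0])      # three known elements, three unknown
        # Initial inner product with the matching vector equals the number of known elements (3)
        print(iterate_once(stored, partial, threshold=3))   # recovers stored[0]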

  1. Depth perception camera for autonomous vehicle applications

    NASA Astrophysics Data System (ADS)

    Kornreich, Philipp

    2013-05-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at that pixel is described. Since it provides numeric information on the distance from the camera to all points in its field of view, it is ideally suited for autonomous vehicle navigation and robotic vision, and it eliminates the LIDAR conventionally used for range measurements. The light arriving at a pixel through a convex lens adds constructively only if it comes from the object point in focus at this pixel; the light from all other object points cancels. Thus, the lens selects the point on the object whose range is to be determined. The range measurement is accomplished by short light guides at each pixel. The light guides contain a p-n junction and a pair of contacts along their length, and they also contain light-sensing elements along their length. The device uses ambient light that is only coherent within spherical-shell-shaped light packets of thickness of one coherence length. Each of the frequency components of the broadband light arriving at a pixel has a phase proportional to the distance from the object point to its image pixel.

  2. The development of infrared detectors and mechanisms for use in future infrared space missions

    NASA Technical Reports Server (NTRS)

    Houck, James R.

    1995-01-01

    The environment above earth's atmosphere offers significant advantages in sensitivity and wavelength coverage in infrared astronomy over ground-based observatories. In support of future infrared space missions, technology development efforts were undertaken to develop detectors sensitive to radiation between 2.5 micron and 200 micron. Additionally, work was undertaken to develop mechanisms supporting the imaging and spectroscopy requirements of infrared space missions. Arsenic-doped-Silicon and Antimony-doped-Silicon Blocked Impurity Band detectors, responsive to radiation between 4 micron and 45 micron, were produced in 128x128 picture element arrays with the low noise, high sensitivity performance needed for space environments. Technology development continued on Gallium-doped-Germanium detectors (for use between 80 micron and 200 micron), but were hampered by contamination during manufacture. Antimony-doped-Indium detectors (for use between 2.5 micron and 5 micron) were developed in a 256x256 pixel format with high responsive quantum efficiency and low dark current. Work began on adapting an existing cryogenic mechanism design for space-based missions; then was redirected towards an all-fixed optical design to improve reliability and lower projected mission costs.

  3. Mapping and measuring land-cover characteristics of New River Basin, Tennessee, using Landsat digital tapes

    USGS Publications Warehouse

    Hollyday, E.F.; Sauer, S.P.

    1976-01-01

    Land-cover information is needed to select subbasins within the New River basin, Tennessee, for the study of hydrologic processes and also is needed to transfer study results to other sites affected by coal mining. It was believed that data recorded by the first Earth Resources Technology Satellite (Landsat-1) could be processed to yield the needed land-cover information. This study demonstrates that digital computer processing of the spectral information contained in each picture element (pixel) of 1.1 acres (4,500 m2) can produce maps and tables of the areal extent of selected land-cover categories. The distribution of water, rock, agricultural areas, evergreens, bare earth, hardwoods, and uncategorized areas is portrayed on a map of the entire New River basin (1:62,500 scale) and on 15 quadrangles (1:24,000 scale). Although some categories are a mixture of land-cover types, they portray the predominant component named. Tables quantify the area of each category and indicate that agriculture covers 5 percent of the basin, evergreens cover 7 percent, bare earth covers 6 percent, three categories of hardwoods cover 81 percent, and water, rock, and uncategorized areas each cover less than 1 percent of the basin.

  4. Design and fabrication of reflective spatial light modulator for high-dynamic-range wavefront control

    NASA Astrophysics Data System (ADS)

    Zhu, Hao; Bierden, Paul; Cornelissen, Steven; Bifano, Thomas; Kim, Jin-Hong

    2004-10-01

    This paper describes design and fabrication of a microelectromechanical metal spatial light modulator (SLM) integrated with complementary metal-oxide semiconductor (CMOS) electronics, for high-dynamic-range wavefront control. The metal SLM consists of a large array of piston-motion MEMS mirror segments (pixels) which can deflect up to 0.78 µm each. Both 32x32 and 150x150 arrays of the actuators (1024 and 22500 elements respectively) were fabricated onto the CMOS driver electronics and individual pixels were addressed. A new process has been developed to reduce the topography during the metal MEMS processing to fabricate mirror pixels with improved optical quality.

  5. Non-Destructive Study of Bulk Crystallinity and Elemental Composition of Natural Gold Single Crystal Samples by Energy-Resolved Neutron Imaging

    PubMed Central

    Tremsin, Anton S.; Rakovan, John; Shinohara, Takenao; Kockelmann, Winfried; Losko, Adrian S.; Vogel, Sven C.

    2017-01-01

    Energy-resolved neutron imaging enables non-destructive analyses of bulk structure and elemental composition, which can be resolved with high spatial resolution at bright pulsed spallation neutron sources due to recent developments and improvements of neutron counting detectors. This technique, suitable for many applications, is demonstrated here with a specific study of ~5–10 mm thick natural gold samples. Through the analysis of neutron absorption resonances the spatial distribution of palladium (with average elemental concentration of ~0.4 atom% and ~5 atom%) is mapped within the gold samples. At the same time, the analysis of coherent neutron scattering in the thermal and cold energy regimes reveals which samples have a single-crystalline bulk structure through the entire sample volume. A spatially resolved analysis is possible because neutron transmission spectra are measured simultaneously on each detector pixel in the epithermal, thermal and cold energy ranges. With a pixel size of 55 μm and a detector-area of 512 by 512 pixels, a total of 262,144 neutron transmission spectra are measured concurrently. The results of our experiments indicate that high resolution energy-resolved neutron imaging is a very attractive analytical technique in cases where other conventional non-destructive methods are ineffective due to sample opacity. PMID:28102285

  6. Accelerated Gaussian mixture model and its application on image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi

    2013-03-01

    The Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, the traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, for which the following approaches are adopted: establish a lookup table for the Gaussian probability matrix to avoid repetitive probability calculations over all pixels; employ a blocking detection method on each block of pixels to further decrease the complexity; and change the structure of the lookup table from 3D to 1D, with a simpler data type, to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the Otsu method to determine the threshold value automatically. Our algorithm has been tested by segmenting flames and faces from a set of real pictures, and the experimental results prove its efficiency in terms of segmentation precision and computational cost.
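
    The lookup-table idea for 8-bit images can be sketched as below: each Gaussian component's weighted density is evaluated once for the 256 possible grey levels, after which every pixel's probabilities come from a table index instead of a fresh exponential (the paper's blocking detection and Otsu thresholding steps are omitted; all parameter values are illustrative):

        import numpy as np

        def gaussian_lut(means, variances, weights, levels=256):
            """Precompute a (levels x K) table of weighted Gaussian densities so that
            per-pixel evaluation becomes a table lookup instead of an exp() call."""
            g = np.arange(levels)[:, None]                      # all possible grey levels
            return (weights * np.exp(-(g - means) ** 2 / (2 * variances))
                    / np.sqrt(2 * np.pi * variances))

        def posteriors_from_lut(image, lut):
            """Component posteriors for every pixel of an 8-bit image via the table."""
            p = lut[image]                                      # (H, W, K) by integer indexing
            return p / p.sum(axis=-1, keepdims=True)

        lut = gaussian_lut(means=np.array([60.0, 180.0]),
                           variances=np.array([200.0, 300.0]),
                           weights=np.array([0.4, 0.6]))
        img = np.random.default_rng(1).integers(0, 256, size=(4, 4), dtype=np.uint8)
        print(posteriors_from_lut(img, lut).shape)              # (4, 4, 2)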

  7. First THEMIS Image of Mars

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This thermal infrared image was acquired by Mars Odyssey's thermal emission imaging system on October 30, 2001, as the spacecraft orbited Mars on its ninth revolution around the planet. The image was taken as part of the calibration and testing process of the camera system.

    This image shows the temperature of Mars in one of the 10 thermal infrared filters. The spacecraft was approximately 22,000 kilometers (about 13,600 miles) above the planet looking down toward the south pole of Mars when this image was acquired.

    It is late spring in the martian southern hemisphere. The extremely cold, circular feature shown in blue is the martian south polar carbon dioxide ice cap at a temperature of about -120 °C (-184 °F). The cap is more than 900 kilometers (540 miles) in diameter at this time and will continue to shrink as summer progresses. Clouds of cooler air blowing off the cap can be seen in orange extending across the image to the left of the cap. The cold region in the lower right portion of the image shows the nighttime temperatures of Mars, demonstrating the 'night-vision' capability of the camera system to observe Mars even when the surface is in darkness. The warmest regions occur near local noontime. The ring of mountains surrounding the 900-kilometer (540-mile) diameter impact basin Argyre can be seen in the early afternoon in the upper portion of the image. The thin blue crescent along the upper limb of the planet is the martian atmosphere.

    This image covers a length of over 6,500 kilometers (3,900 miles), spanning the planet from limb to limb, with a resolution of approximately 5.5 kilometers (3.4 miles) per pixel, or picture element, at the point directly beneath the spacecraft. The Odyssey's infrared camera is planned to have a resolution of 100 meters per pixel (about 300 feet per pixel) from its mapping orbit.

    JPL manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The thermal emission imaging system was developed at Arizona State University, Tempe with Raytheon Santa Barbara Remote Sensing, Santa Barbara, Calif. Lockheed Martin Astronautics, Denver, Colo., is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  8. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases the efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
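
    The per-pixel work that the paper distributes across CUDA threads is the classical Laplacian sharpening operation itself; a minimal CPU reference in Python (not the GPU kernel, and using an assumed 4-neighbour Laplacian) is:

        import numpy as np
        from scipy.ndimage import convolve

        def laplacian_sharpen(image, strength=1.0):
            """Classical Laplacian sharpening: subtract the Laplacian response
            (4-neighbour kernel assumed) from the image, pixel by pixel."""
            kernel = np.array([[0,  1, 0],
                               [1, -4, 1],
                               [0,  1, 0]], dtype=float)
            lap = convolve(image.astype(float), kernel, mode='nearest')
            return np.clip(image - strength * lap, 0, 255).astype(np.uint8)

        img = np.random.default_rng(2).integers(0, 256, size=(64, 64), dtype=np.uint8)
        print(laplacian_sharpen(img).shape)   # (64, 64)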

  9. Instant wireless transmission of radiological images using a personal digital assistant phone for emergency teleconsultation.

    PubMed

    Kim, Dong-Keun; Yoo, Sun K; Kim, Sun H

    2005-01-01

    The instant transmission of radiological images may be important for making rapid clinical decisions about emergency patients. We have examined an instant image transfer system based on a personal digital assistant (PDA) phone with a built-in camera. Images displayed on a picture archiving and communication systems (PACS) monitor can be captured by the camera in the PDA phone directly. Images can then be transmitted from an emergency centre to a remote physician via a wireless high-bandwidth network (CDMA 1 x EVDO). We reviewed the radiological lesions in 10 normal and 10 abnormal cases produced by modalities such as computerized tomography (CT), magnetic resonance (MR) and digital angiography. The images were of 24-bit depth and 1,144 x 880, 1,120 x 840, 1,024 x 768, 800 x 600, 640 x 480 and 320 x 240 pixels. Three neurosurgeons found that for satisfactory remote consultation a minimum size of 640 x 480 pixels was required for CT and MR images and 1,024 x 768 pixels for angiography images. Although higher resolution produced higher clinical satisfaction, it also required more transmission time. At the limited bandwidth employed, higher resolutions could not be justified.

  10. Medusae Fossae Formation

    NASA Technical Reports Server (NTRS)

    1998-01-01

    An exotic terrain of wind-eroded ridges and residual smooth surfaces is seen in one of the highest resolution images ever taken of Mars from orbit. The Medusae Fossae formation is believed to be formed of the fragmental ejecta of huge explosive volcanic eruptions. When subjected to intense wind-blasting over hundreds of millions of years, this material erodes easily once the uppermost tougher crust is breached. In the Mars Orbiter Camera (MOC) image shown on the right, the crust, or cap rock, can be seen in the upper right part of the picture. The finely-spaced ridges are similar to features on Earth called yardangs, which are formed by intense winds plucking individual grains from, and by wind-driven sand blasting particles off, sedimentary deposits.

    The MOC image was taken on October 30, 1997 at 11:05 AM PST, shortly after the Mars Global Surveyor spacecraft's 31st closest approach to Mars. The image covers an area 3.6 X 21.5 km (2.2 X 13.4 miles) at 3.6 m (12 feet) per picture element--craters only 11 m (36 feet, about the size of a swimming pool) across can be seen. The context image (left; the best Viking view of the area; VO 1 387S34) has a resolution of 240 m/pixel, or 67 times lower resolution than the MOC frame.

    Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  11. Medusae Fossae Formation - High Resolution Image

    NASA Technical Reports Server (NTRS)

    1998-01-01

    An exotic terrain of wind-eroded ridges and residual smooth surfaces is seen in one of the highest resolution images ever taken of Mars from orbit. The Medusae Fossae formation is believed to be formed of the fragmental ejecta of huge explosive volcanic eruptions. When subjected to intense wind-blasting over hundreds of millions of years, this material erodes easily once the uppermost tougher crust is breached. The crust, or cap rock, can be seen in the upper right part of the picture. The finely-spaced ridges are similar to features on Earth called yardangs, which are formed by intense winds plucking individual grains from, and by wind-driven sand blasting particles off, sedimentary deposits.

    The image was taken on October 30, 1997 at 11:05 AM PST, shortly after the Mars Global Surveyor spacecraft's 31st closest approach to Mars. The image covers an area 3.6 X 21.5 km (2.2 X 13.4 miles) at 3.6 m (12 feet) per picture element--craters only 11 m (36 feet, about the size of a swimming pool) across can be seen. The best Viking view of the area (VO 1 387S34) has a resolution of 240 m/pixel, or 67 times lower resolution than the MOC frame.

    Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  12. Ada (Trade Name) Foundation Technology. Volume 4. Software Requirements for WIS (WWMCCS (World Wide Military Command and Control System) Information System) Text Processing Prototypes

    DTIC Science & Technology

    1986-12-01

    graphics: The package allows a character set which can be defined by users, giving the picture for a character by designating its pixels. Such characters... type fonts and user-oriented "help" messages tailored to the operations being performed and user expertise. In general, critical design issues... other volumes include command language, software design, description and analysis tools, database management systems, operating systems; planning and

  13. Organic non-volatile resistive photo-switches for flexible image detector arrays.

    PubMed

    Nau, Sebastian; Wolf, Christoph; Sax, Stefan; List-Kratochvil, Emil J W

    2015-02-01

    A unique implementation of an organic image detector using resistive photo-switchable pixels is presented. This resistive photo-switch comprises the vertical integration of an organic photodiode and an organic resistive switching memory element. The photodiodes act as a photosensitive element while the resistive switching elements simultaneously store the detected light information.

  14. Data Processing for a High Resolution Preclinical PET Detector Based on Philips DPC Digital SiPMs

    NASA Astrophysics Data System (ADS)

    Schug, David; Wehner, Jakob; Goldschmidt, Benjamin; Lerche, Christoph; Dueppenbecker, Peter Michael; Hallen, Patrick; Weissler, Bjoern; Gebhardt, Pierre; Kiessling, Fabian; Schulz, Volkmar

    2015-06-01

    In positron emission tomography (PET) systems, light sharing techniques are commonly used to read out scintillator arrays consisting of scintillation elements which are smaller than the optical sensors. The scintillating element is then identified by evaluating the signal heights in the readout channels using statistical algorithms, the center of gravity (COG) algorithm being the simplest and most widely used one. We propose a COG algorithm with a fixed number of input channels in order to guarantee a stable calculation of the position. The algorithm is implemented and tested with the raw detector data obtained with the Hyperion-II D preclinical PET insert, which uses Philips Digital Photon Counting's (PDPC) digital SiPMs. The gamma detectors use LYSO scintillator arrays with 30 × 30 crystals of 1 × 1 × 12 mm3 in size coupled to 4 × 4 PDPC DPC 3200-22 sensors (DPC) via a 2-mm-thick light guide. These self-triggering sensors are made up of 2 × 2 pixels, resulting in a total of 64 readout channels. We restrict the COG calculation to a main pixel, which captures most of the scintillation light from a crystal, and its (direct and diagonal) neighboring pixels, and reject single events in which this data is not fully available. This results in stable COG positions for a crystal element and enables high spatial image resolution. Due to the sensor layout, for some crystals it is very likely that a single diagonal neighbor pixel is missing as a result of the low light level on the corresponding DPC. This leads to a loss of sensitivity if these events are rejected. An enhancement of the COG algorithm is proposed which handles the potentially missing pixel separately, both for the crystal identification and for the energy calculation. Using this advancement, we show that the sensitivity of the Hyperion-II D insert using the described scintillator configuration can be improved by 20-100% for practically useful readout thresholds of a single DPC pixel ranging from 17-52 photons. Furthermore, we show that the energy resolution of the scanner is superior by 0-1.6% (relative difference) for all readout thresholds if singles with a single missing pixel are accepted and correctly handled, compared to the COG method accepting only singles with all neighbors present. The presented methods can not only be applied to gamma detectors employing DPC sensors, but can be generalized to other similarly structured, self-triggering detectors using light sharing techniques as well.
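
    The fixed-neighbourhood centre-of-gravity idea can be sketched as follows; the acceptance rule for a single missing diagonal neighbour is simplified here, and the paper's separate treatment of the missing pixel in crystal identification and energy calculation is not reproduced:

        import numpy as np

        def cog_position(values):
            """Centre of gravity within the 3x3 neighbourhood of the main (centre) pixel.
            values : 3x3 array of photon counts; np.nan marks a missing (untriggered) pixel.
            Events with more than one missing pixel, or with a missing direct neighbour,
            are rejected (simplified acceptance rule)."""
            missing = np.isnan(values)
            diagonal = np.array([[True, False, True],
                                 [False, False, False],
                                 [True, False, True]])
            if missing.sum() > 1 or (missing & ~diagonal).any():
                return None
            v = np.nan_to_num(values)                  # treat the missing pixel as zero light
            ys, xs = np.mgrid[-1:2, -1:2]
            return (xs * v).sum() / v.sum(), (ys * v).sum() / v.sum()

        event = np.array([[ 5., 12., np.nan],
                          [20., 80., 15.],
                          [ 4., 10.,  3.]])
        print(cog_position(event))   # COG computed despite one missing diagonal neighbour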

  15. Studies of finite element analysis of composite material structures

    NASA Technical Reports Server (NTRS)

    Douglas, D. O.; Holzmacher, D. E.; Lane, Z. C.; Thornton, E. A.

    1975-01-01

    Research in the area of finite element analysis is summarized. Topics discussed include finite element analysis of a picture frame shear test, BANSAP (a bandwidth reduction program for SAP IV), FEMESH (a finite element mesh generation program based on isoparametric zones), and finite element analysis of a composite bolted joint specimens.

  16. Recent developments in OLED-based chemical and biological sensors

    NASA Astrophysics Data System (ADS)

    Shinar, Joseph; Zhou, Zhaoqun; Cai, Yuankun; Shinar, Ruth

    2007-09-01

    Recent developments in the structurally integrated OLED-based platform of luminescent chemical and biological sensors are reviewed. In this platform, an array of OLED pixels, which is structurally integrated with the sensing elements, is used as the photoluminescence (PL) excitation source. The structural integration is achieved by fabricating the OLED array and the sensing element on opposite sides of a common glass substrate or on two glass substrates that are attached back-to-back. As it does not require optical fibers, lenses, or mirrors, it results in a uniquely simple, low-cost, and potentially rugged geometry. The recent developments on this platform include the following: (1) Enhancing the performance of gas-phase and dissolved oxygen sensors. This is achieved by (a) incorporating high-dielectric TiO2 nanoparticles in the oxygen-sensitive Pt and Pd octaethylporphyrin (PtOEP and PdOEP, respectively)-doped polystyrene (PS) sensor films, and (b) embedding the oxygen-sensitive dyes in a matrix of polymer blends such as PS:polydimethylsiloxane (PDMS). (2) Developing sensor arrays for simultaneous detection of multiple serum analytes, including oxygen, glucose, lactate, and alcohol. The sensing element for each analyte consists of a PtOEP-doped PS oxygen sensor and a solution containing the oxidase enzyme specific to the analyte. Each sensing element is coupled to two individually addressable OLED pixels and a Si photodiode photodetector (PD). (3) Enhancing the integration of the platform, whereby a PD array is also structurally integrated with the OLED array and sensing elements. This enhanced integration is achieved by fabricating an array of amorphous or nanocrystalline Si-based PDs, followed by fabrication of the OLED pixels in the gaps between these Si PDs.

  17. Digital overlaying of the universal transverse Mercator grid with LANDSAT data derived products

    NASA Technical Reports Server (NTRS)

    Graham, M. H.

    1977-01-01

    Picture elements of data from the LANDSAT multispectral scanner are correlated with the universal transverse Mercator grid. In the procedure, a series of computer modules was used to make approximations of universal transverse Mercator grid locations for all picture elements from the grid locations of a limited number of known control points, and to provide display and digital storage of the data. The software has been written in the FORTRAN IV language for a Varian 70-series computer.
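
    The abstract does not detail the approximation used by the computer modules; a least-squares affine fit from a few control points is one plausible sketch of mapping picture elements to grid coordinates (all control-point numbers below are hypothetical):

        import numpy as np

        def fit_affine(pixel_xy, utm_en):
            """Least-squares affine mapping from (sample, line) pixel coordinates to
            UTM (easting, northing), fitted from a small set of known control points."""
            px = np.asarray(pixel_xy, dtype=float)
            A = np.column_stack([px, np.ones(len(px))])         # rows of [x, y, 1]
            coeffs, *_ = np.linalg.lstsq(A, np.asarray(utm_en, dtype=float), rcond=None)
            return coeffs                                       # shape (3, 2)

        def pixel_to_utm(pixel_xy, coeffs):
            px = np.asarray(pixel_xy, dtype=float)
            return np.column_stack([px, np.ones(len(px))]) @ coeffs

        # Hypothetical control points: pixel (sample, line) -> UTM (easting, northing)
        controls_px  = [(100, 200), (900, 250), (500, 700)]
        controls_utm = [(354000, 3975000), (398000, 3973000), (376500, 3948000)]
        coeffs = fit_affine(controls_px, controls_utm)
        print(pixel_to_utm([(300, 400)], coeffs))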

  18. The Maia Spectroscopy Detector System: Engineering for Integrated Pulse Capture, Low-Latency Scanning and Real-Time Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkham, R.; Siddons, D.; Dunn, P.A.

    2010-06-23

    The Maia detector system is engineered for energy-dispersive X-ray fluorescence spectroscopy and elemental imaging at photon rates exceeding 10^7/s, integrated scanning of samples with pixel transit times as small as 50 μs, high-definition images of 10^8 pixels, and real-time processing of detected events for spectral deconvolution and online display of pure elemental images. The system, developed by CSIRO and BNL, combines a planar silicon 384-element detector array, application-specific integrated circuits for pulse shaping, peak detection and sampling, and optical data transmission to an FPGA-based pipelined, parallel processor. This paper describes the system and the underpinning engineering solutions.

  19. [How children show positive and negative relationships on their drawings].

    PubMed

    Gramel, Sabine

    2005-01-01

    This study analyses whether pictures by children showing a positive relationship differ significantly from those showing a negative one with respect to several criteria. The study involved a random selection of 45 children aged 4;6 to 11;6 years. The children painted a picture of themselves with a person they liked and a picture of themselves with someone they disliked. For the most part, the children drew pictures of themselves with peers, both for the positive and the negative images. In an interview afterwards, the children specified the criteria in their drawings by which the quality of the particular relationship can be identified. Positive and negative relationship paintings differ in the character of the activity depicted. The sun as an element in children's paintings is not painted more frequently in positive than in negative pictures. The colour black is used more often in the drawings signifying negative relationships. While girls used more colour in negative relationship drawings, boys used more colour in the positive ones. There was no significant difference in the use of favourite colours and decorative elements between the two groups. Only in negative relationship drawings were people depicted looking away from each other. Smiling individuals were more common in the positive relationship pictures and in pictures painted by the 6 to 8 year olds. A greater distance between the individuals emerged in the girls' negative relationship drawings.

  20. Solar System Portrait - View of the Sun, Earth and Venus

    NASA Image and Video Library

    1996-09-13

    This color image of the sun, Earth and Venus was taken by the Voyager 1 spacecraft Feb. 14, 1990, when it was approximately 32 degrees above the plane of the ecliptic and at a slant-range distance of approximately 4 billion miles. It is the first -- and may be the only -- time that we will ever see our solar system from such a vantage point. The image is a portion of a wide-angle image containing the sun and the region of space where the Earth and Venus were at the time with two narrow-angle pictures centered on each planet. The wide-angle was taken with the camera's darkest filter (a methane absorption band), and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system but is still eight million times brighter than the brightest star in Earth's sky, Sirius. The image of the sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright burned out image with multiple reflections from the optics in the camera. The "rays" around the sun are a diffraction pattern of the calibration lamp which is mounted in front of the wide angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaiced into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce a color image. The violet, green and blue filters were used; exposure times were, for the Earth image, 0.72, 0.48 and 0.72 seconds, and for the Venus frame, 0.36, 0.24 and 0.36, respectively. Although the planetary pictures were taken with the narrow-angle camera (1500 mm focal length) and were not pointed directly at the sun, they show the effects of the glare from the nearby sun, in the form of long linear streaks resulting from the scattering of sunlight off parts of the camera and its sun shade. From Voyager's great distance both Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. Detailed analysis also suggests that Voyager detected the moon as well, but it is too faint to be seen without special processing. Venus was only 0.11 pixel in diameter. The faint colored structure in both planetary frames results from sunlight scattered in the optics. http://photojournal.jpl.nasa.gov/catalog/PIA00450

  1. Solar System Portrait - View of the Sun, Earth and Venus

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This color image of the sun, Earth and Venus was taken by the Voyager 1 spacecraft Feb. 14, 1990, when it was approximately 32 degrees above the plane of the ecliptic and at a slant-range distance of approximately 4 billion miles. It is the first -- and may be the only -- time that we will ever see our solar system from such a vantage point. The image is a portion of a wide-angle image containing the sun and the region of space where the Earth and Venus were at the time with two narrow-angle pictures centered on each planet. The wide-angle was taken with the camera's darkest filter (a methane absorption band), and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system but is still eight million times brighter than the brightest star in Earth's sky, Sirius. The image of the sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright burned out image with multiple reflections from the optics in the camera. The 'rays' around the sun are a diffraction pattern of the calibration lamp which is mounted in front of the wide angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaiced into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce a color image. The violet, green and blue filters were used; exposure times were, for the Earth image, 0.72, 0.48 and 0.72 seconds, and for the Venus frame, 0.36, 0.24 and 0.36, respectively. Although the planetary pictures were taken with the narrow-angle camera (1500 mm focal length) and were not pointed directly at the sun, they show the effects of the glare from the nearby sun, in the form of long linear streaks resulting from the scattering of sunlight off parts of the camera and its sun shade. From Voyager's great distance both Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. Detailed analysis also suggests that Voyager detected the moon as well, but it is too faint to be seen without special processing. Venus was only 0.11 pixel in diameter. The faint colored structure in both planetary frames results from sunlight scattered in the optics.

  2. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template based on a correlation measure between two images. Because there is no need to segment the image, the computational load of this method is low, and image correlation matching is a basic method of target tracking. This paper mainly studies a grey-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, and the most frequently used sub-pixel fitting algorithms are introduced at the same time. These fitting algorithms cannot be used in real-time systems because they are too complex. However, since target tracking often requires high real-time performance, we put forward a paraboloidal fitting algorithm based on the considerations above; this algorithm is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between these two algorithms is small, less than 0.01 pixel. In order to research the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken after the camera was rotated by different angles. A subarea of the original picture was chosen as the template, and the best matching spot was found using the image matching algorithm mentioned above. The results show that the matching error is larger when the target rotation angle is larger, in an approximately linear relation. Finally, the influence of noise on matching precision was researched. Gaussian noise and salt-and-pepper noise were added to the image respectively, the image was processed by mean and median filters, and then image matching was performed. The results show that when the noise is small, mean and median filtering achieve a good result. But when the noise density of the salt-and-pepper noise is greater than 0.4, or the variance of the Gaussian noise is greater than 0.0015, the result of image matching will be wrong.
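
    A compact sketch of the two stages discussed above (an exhaustive SAD search followed by a parabolic refinement of the cost minimum, applied here independently per axis, which is a simplification of the paraboloidal fit) is:

        import numpy as np

        def sad_match_subpixel(image, template):
            """Exhaustive SAD search plus 1-D parabolic refinement on each axis.
            Returns the sub-pixel (row, col) of the best match (template's top-left)."""
            ih, iw = image.shape
            th, tw = template.shape
            sad = np.empty((ih - th + 1, iw - tw + 1))
            for r in range(sad.shape[0]):
                for c in range(sad.shape[1]):
                    sad[r, c] = np.abs(image[r:r + th, c:c + tw] - template).sum()
            r0, c0 = np.unravel_index(np.argmin(sad), sad.shape)

            def refine(cm, c0_, cp):
                # vertex of the parabola through three neighbouring SAD values
                denom = cm - 2 * c0_ + cp
                return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

            dr = refine(sad[r0 - 1, c0], sad[r0, c0], sad[r0 + 1, c0]) if 0 < r0 < sad.shape[0] - 1 else 0.0
            dc = refine(sad[r0, c0 - 1], sad[r0, c0], sad[r0, c0 + 1]) if 0 < c0 < sad.shape[1] - 1 else 0.0
            return r0 + dr, c0 + dc

        rng = np.random.default_rng(3)
        img = rng.random((40, 40))
        tmpl = img[10:18, 12:20].copy()
        print(sad_match_subpixel(img, tmpl))   # close to (10.0, 12.0)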

  3. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm was proposed. First, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending each pixel into 2×2 pixels, and each element in M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the corresponding 2×2 block of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying a public-key encryption algorithm, key distribution is facilitated; moreover, compared with the image hiding method based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
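
    A minimal sketch of the embedding step as it is described above, written in Python/NumPy. The superimposition coefficient alpha and the positions used inside each 2×2 block are illustrative assumptions; the abstract does not fix them.

        import numpy as np

        def embed(host, m1, m2p, alpha=0.05):
            """Illustrative embedding: enlarge each host pixel to a 2x2 block,
            then add alpha*M1 to one element of each block and subtract
            alpha*M2' from another. Block positions (0,1) and (1,0) and the
            value of alpha are assumptions made for illustration only."""
            h, w = host.shape
            assert m1.shape == (h, w) and m2p.shape == (h, w)
            stego = np.kron(host.astype(float), np.ones((2, 2)))  # pixel -> 2x2 block
            stego[0::2, 1::2] += alpha * m1    # superimpose first phase mask
            stego[1::2, 0::2] -= alpha * m2p   # superimpose RSA-encrypted second mask
            return stego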

  4. Commercial CMOS image sensors as X-ray imagers and particle beam monitors

    NASA Astrophysics Data System (ADS)

    Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.

    2015-01-01

    CMOS image sensors are widely used in several applications such as mobile handsets, webcams and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap beam monitors and high-resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9v034, featuring 752 × 480 pixels with a 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels with a 2.2 μm × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify the data transfer and the image acquisition, the system is controlled by a dedicated micro-processor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified Linux kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.

  5. Bonding techniques for hybrid active pixel sensors (HAPS)

    NASA Astrophysics Data System (ADS)

    Bigas, M.; Cabruja, E.; Lozano, M.

    2007-05-01

    A hybrid active pixel sensor (HAPS) consists of an array of sensing elements which is connected to an electronic read-out unit. The most commonly used way to connect these two different devices is bump bonding. This interconnection technique is very suitable for these systems because it allows a very fine pitch and a high number of I/Os. However, other interconnection techniques, such as direct bonding, are also available. This paper, as a continuation of a review [M. Lozano, E. Cabruja, A. Collado, J. Santander, M. Ullan, Nucl. Instr. and Meth. A 473 (1-2) (2001) 95-101] published in 2001, presents an update of the different advanced bonding techniques available for manufacturing a hybrid active pixel detector.

  6. Image Accumulation in Pixel Detector Gated by Late External Trigger Signal and its Application in Imaging Activation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, J.; Cejnarova, A.; Platkevic, M.

    Single quantum counting pixel detectors of Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors) they present significant advantages: direct conversion of radiation to electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, absolute linearity. In this article we describe usage of the pixel device TimePix for image accumulation gated by a late trigger signal. Demonstration of the technique is given on imaging coincidence instrumental neutron activation analysis (Imaging CINAA). This method allows one to determine the concentration and distribution of a certain preselected element in an inspected sample.

  7. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed; it requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
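
    As a hedged illustration only: if "double delta coding" is read as second-order differencing of adjacent picture elements along a scan line (an assumption, since the abstract does not define the term), the core transform and its inverse are a few lines of Python/NumPy; the small residuals on correlated imagery are what a subsequent entropy coder exploits.

        import numpy as np

        def double_delta_encode(line):
            """Second-order differences of one scan line (an interpretation
            of 'double delta coding'); correlated neighbouring pixels yield
            small residuals that compress well."""
            line = np.asarray(line, dtype=np.int64)
            first = np.diff(line, prepend=0)    # first-order differences
            return np.diff(first, prepend=0)    # second-order differences

        def double_delta_decode(resid):
            """Invert the two differencing passes with two cumulative sums."""
            return np.cumsum(np.cumsum(np.asarray(resid, dtype=np.int64)))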

  8. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. A one-chip imaging system, in which the image sensor has a full digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, can make images more realistic and colorful. We can say that the color filter makes life more colorful. What is a color filter? A color filter transmits only light whose wavelength and transmittance match the color of the filter itself, and blocks the rest of the image light. The color filter process consists of coating and patterning green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matched pixels of the image sensing array. From the signal caught by each pixel, the picture of the surrounding scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the role of the color filter increasingly important. Although it is challenging, it is very worthwhile to develop the color filter process. We provide the best service in terms of short cycle time, excellent color quality, and high and stable yield. The key issues of an advanced color filter process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that have to be considered are also described in this paper.

  9. Micro-computed tomography characterization of tissue engineering scaffolds: effects of pixel size and rotation step.

    PubMed

    Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L

    2017-08-01

    Quantitative assessment of the micro-structure of materials is of key importance in many fields including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, acquisition parameters such as pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds including examples of natural and synthetic polymers, and ceramics, was analyzed. We comprehensively compared the quantitative results of µ-CT characterization using 15 acquisition scenarios that differ in the combination of pixel size and rotation step. The results showed that the acquisition parameters can have statistically significant effects on the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds. The effects are also practically important, since the differences can be as high as 24% in mean porosity on average, and up to 19.5 h and 166 GB in characterization time and data storage for a sample with a relatively small volume. This study showed in a quantitative manner the effects of such a wide range of acquisition scenarios on the final data, as well as on the characterization time and data storage per sample. Herein, a clear picture of the effects of the pixel size and rotation step on the results is provided, which can notably be useful to refine the practice of µ-CT characterization of scaffolds and economize the related resources.

  10. Enhanced Early View of Ceres from Dawn

    NASA Image and Video Library

    2014-12-05

    As the Dawn spacecraft flies through space toward the dwarf planet Ceres, the unexplored world appears to its camera as a bright light in the distance, full of possibility for scientific discovery. This view was acquired as part of a final calibration of the science camera before Dawn's arrival at Ceres. To accomplish this, the camera needed to take pictures of a target that appears just a few pixels across. On Dec. 1, 2014, Ceres was about nine pixels in diameter, nearly perfect for this calibration. The images provide data on very subtle optical properties of the camera that scientists will use when they analyze and interpret the details of some of the pictures returned from orbit. Ceres is the bright spot in the center of the image. Because the dwarf planet is much brighter than the stars in the background, the camera team selected a long exposure time to make the stars visible. The long exposure made Ceres appear overexposed, and exaggerated its size; this was corrected by superimposing a shorter exposure of the dwarf planet in the center of the image. A cropped, magnified view of Ceres appears in the inset image at lower left. The image was taken on Dec. 1, 2014 with the Dawn spacecraft's framing camera, using a clear spectral filter. Dawn was about 740,000 miles (1.2 million kilometers) from Ceres at the time. Ceres is 590 miles (950 kilometers) across and was discovered in 1801. http://photojournal.jpl.nasa.gov/catalog/PIA19050

  11. Solution processed integrated pixel element for an imaging device

    NASA Astrophysics Data System (ADS)

    Swathi, K.; Narayan, K. S.

    2016-09-01

    We demonstrate the implementation of a solid state circuit/structure comprising a high-performing polymer field effect transistor (PFET), utilizing an oxide layer in conjunction with a self-assembled monolayer (SAM) as the dielectric, and a bulk-heterostructure based organic photodiode as a CMOS-like pixel element for an imaging sensor. Practical usage of functional organic photon detectors requires on-chip components for image capture and signal transfer, as in the CMOS/CCD architecture, rather than simple photodiode arrays, in order to increase the speed and sensitivity of the sensor. The availability of high-performing PFETs with low operating voltage and photodiodes with high sensitivity provides the necessary prerequisites to implement a CMOS-type image sensing device structure based on organic electronic devices. Solution processing routes in organic electronics offer relatively facile procedures to integrate these components, combined with the unique features of large area, form factor and multiple optical attributes. We utilize the inherent property of a binary mixture in a blend to phase-separate vertically and create a graded junction for an effective photocurrent response. The implemented design enables photocharge generation along with on-chip charge-to-voltage conversion, with performance parameters comparable to traditional counterparts. Charge integration analysis for the passive pixel element using 2D TCAD simulations is also presented to evaluate the different processes that take place in the monolithic structure.

  12. Identifying Learning Preferences Early.

    ERIC Educational Resources Information Center

    Reiff, Judith C.

    The Picture Learning Style Inventory was administered to 42 first graders and 46 second graders attending two public schools in a Southern university community. The inventory consists of 13 individual picture booklets, each illustrating a different element of learning style (environmental, emotional, sociological, and physical). The inventory is…

  13. Investigating diffusion with technology

    NASA Astrophysics Data System (ADS)

    Miller, Jon S.; Windelborn, Augden F.

    2013-07-01

    The activities described here allow students to explore the concept of diffusion with the use of common equipment such as computers, webcams and analysis software. The procedure includes taking a series of digital pictures of a container of water with a webcam as a dye slowly diffuses. At known time points, measurements of the pixel densities (darkness) of the digital pictures are recorded and then plotted on a graph. The resulting graph of darkness versus time allows students to see the results of diffusion of the dye over time. Through modification of the basic lesson plan, students are able to investigate the influence of a variety of variables on diffusion. Furthermore, students are able to expand the boundaries of their thinking by formulating hypotheses and testing their hypotheses through experimentation. As a result, students acquire a relevant science experience through taking measurements, organizing data into tables, analysing data and drawing conclusions.
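
    The measurement the students make can be reproduced with a few lines of Python; the file names, frame times and the choice of the imageio library here are illustrative assumptions, not part of the original lesson plan.

        import numpy as np
        import imageio.v3 as iio   # one convenient choice of image-reading library

        def mean_darkness(path):
            """Mean 'darkness' of one webcam frame: invert the grayscale so a
            darker (more dye-filled) image gives a larger number."""
            frame = iio.imread(path)
            gray = frame.mean(axis=-1) if frame.ndim == 3 else frame
            return 255.0 - gray.mean()

        # Example: frames captured at known times, darkness plotted against time.
        times = [0, 60, 120, 180]                               # seconds (example values)
        darkness = [mean_darkness(f"frame_{t}.png") for t in times]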

  14. Automated Sargassum Detection for Landsat Imagery

    NASA Astrophysics Data System (ADS)

    McCarthy, S.; Gallegos, S. C.; Armstrong, D.

    2016-02-01

    We implemented a system to automatically detect Sargassum, a floating seaweed, in 30-meter LANDSAT-8 Operational Land Imager (OLI) imagery. Our algorithm for Sargassum detection is an extended form of Hu's approach to derive a floating algae index (FAI) [1]. Hu's algorithm was developed for Moderate Resolution Imaging Spectroradiometer (MODIS) data, but we extended it for use with the OLI bands centered at 655, 865, and 1609 nm, which are comparable to the MODIS bands located at 645, 859, and 1640 nm. We also developed a high-resolution true color product to mask cloud pixels in the OLI scene by applying a threshold to top-of-the-atmosphere (TOA) radiances in the red (655 nm), green (561 nm), and blue (443 nm) wavelengths, as well as a method for removing false positive identifications of Sargassum in the imagery. Our algorithm derives an FAI for each pixel and is currently set to flag the presence of Sargassum in an OLI pixel by classifying any pixel with an FAI > 0.0 as Sargassum. Additionally, our system geo-locates the flagged Sargassum pixels identified in the OLI imagery onto the U.S. Navy Global HYCOM model grid. One element of the model grid covers an area of 0.125 degrees of latitude by 0.125 degrees of longitude. To resolve the differences in spatial coverage between Landsat and HYCOM, a scheme was developed to calculate the percentage of flagged pixels within each grid element; if that percentage is above a threshold, the grid element is flagged as Sargassum. This work is part of a larger system, sponsored by the NASA Applied Science and Technology Project at J.C. Stennis Space Center, to forecast when and where Sargassum will land on shore. The focus area of this work is currently the Texas coast; plans call for extending our efforts into the Caribbean. References: [1] Hu, Chuanmin. A novel ocean color index to detect floating algae in the global oceans. Remote Sensing of Environment 113 (2009) 2118-2129.
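
    A sketch of the index computation described above, using the OLI band centres quoted in the abstract. The inputs are assumed to be per-pixel reflectances (or consistently scaled TOA radiances) for the red, NIR and SWIR bands; the cloud mask is whatever mask the true-color product provides.

        import numpy as np

        # OLI band centres used in the abstract (nanometres)
        L_RED, L_NIR, L_SWIR = 655.0, 865.0, 1609.0

        def floating_algae_index(r_red, r_nir, r_swir):
            """FAI following Hu (2009): NIR value minus a linear baseline
            interpolated between the red and SWIR bands."""
            baseline = r_red + (r_swir - r_red) * (L_NIR - L_RED) / (L_SWIR - L_RED)
            return r_nir - baseline

        def flag_sargassum(r_red, r_nir, r_swir, cloud_mask):
            """Flag a pixel as Sargassum when FAI > 0 and it is not cloud."""
            fai = floating_algae_index(r_red, r_nir, r_swir)
            return (fai > 0.0) & ~cloud_mask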

  15. An approach of surface coal fire detection from ASTER and Landsat-8 thermal data: Jharia coal field, India

    NASA Astrophysics Data System (ADS)

    Roy, Priyom; Guha, Arindam; Kumar, K. Vinod

    2015-07-01

    Radiant temperature images from thermal remote sensing sensors are used to delineate surface coal fires by deriving a cut-off temperature that separates coal-fire pixels from non-fire pixels. The temperature contrast between coal fires and background elements (rocks, vegetation, etc.) controls this cut-off temperature. This contrast varies across the coal field, as it is influenced by the variability of the associated rock types, the proportion of vegetation cover and the intensity of the coal fires. We have delineated coal fires from the background based on the separation of data clusters in a maximum vs. mean radiant temperature scatter-plot (13th band of ASTER and 10th band of Landsat-8), derived using randomly distributed homogeneous pixel blocks (9 × 9 pixels for ASTER and 27 × 27 pixels for Landsat-8) covering the entire coal-bearing geological formation. For both datasets, the overall temperature variability of background and fires can be addressed using this regional cut-off. However, the summer-time ASTER data could not delineate the fire pixels of one specific mine (Bhulanbararee), unlike the winter-time Landsat-8 data. The contrast of the radiant temperature of fire and background terrain elements specific to this mine differs from the regional contrast of fire and background during summer. This is due to the stronger solar heating of the background rocky outcrops, which reduces their temperature contrast with fire. The specific cut-off temperature determined for this mine, used to extract this fire, differs from the regional cut-off; it is derived by reducing the pixel-block size of the temperature data. Thus, the summer-time ASTER image is useful for fire detection but requires additional processing to determine a local threshold, along with the regional threshold, to capture all the fires, whereas the winter Landsat-8 data were better for fire detection with a regional threshold alone.
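
    A rough sketch of the block statistics behind the maximum-vs-mean scatter plot described above. The number of sampled blocks and their random placement are assumptions made for illustration; picking the cut-off from the resulting scatter is left to inspection, as in the paper.

        import numpy as np

        def block_max_mean(temp, block=9, n_blocks=500, rng=None):
            """Sample randomly placed block x block windows from a radiant
            temperature image and return (mean, max) pairs for a scatter
            plot, from which a regional fire/background cut-off can be read."""
            rng = rng or np.random.default_rng(0)
            h, w = temp.shape
            ys = rng.integers(0, h - block, n_blocks)
            xs = rng.integers(0, w - block, n_blocks)
            pairs = []
            for y, x in zip(ys, xs):
                win = temp[y:y + block, x:x + block]
                pairs.append((win.mean(), win.max()))
            return np.array(pairs)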

  16. Teaching the Elements of Realistic-Style Pictures

    ERIC Educational Resources Information Center

    Duncum, Paul

    2013-01-01

    This article describes how Paul Duncum teaches the elements of realistic-style imagery. The elements he teaches are framing, angles of view, lighting, depth of field, and body language. He stresses how each of these elements contributes to meaning, and shows how they apply equally to old master paintings and today's digital photography.

  17. Pixelated gamma detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolinsky, Sergei Ivanovich; Yanoff, Brian David; Guida, Renato

    2016-12-27

    A pixelated gamma detector includes a scintillator column assembly having scintillator crystals and optically transparent elements alternating along a longitudinal axis; a collimator assembly having longitudinal walls separated by collimator septa, the septa spaced apart to form collimator channels; the scintillator column assembly positioned adjacent to the collimator assembly so that respective scintillator crystals lie adjacent to respective collimator channels and respective optically transparent elements lie adjacent to respective collimator septa; and a first photosensor and a second photosensor, each connected to an opposing end of the scintillator column assembly. A system and a method for inspecting and/or detecting defects in an interior of an object are also disclosed.

  18. Methods in quantitative image analysis.

    PubMed

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, converting the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, grey values can take one of 256 (2^8) levels, from 0 to 255. The human eye seems to be quite content with a display of 5-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel by pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value histogram of an existing image (input image) into a new grey value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by gain, offset and gamma of the camera. Gain defines the voltage range, offset defines the reference voltage and gamma the slope of the regression line between the light intensity and the voltage of the camera. A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix. The distance between the pixels (original pixel and its neighbouring pixel) can influence the various parameters calculated from the co-occurrence matrix. The main goals of image enhancement are elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening texture elements and improving contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs and histogram equalisation. The neighbourhood-based operations work with so-called filters. These are organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image. Filters can work either in the spatial or in the frequency domain.
The method used for analysing alterations of grey value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most important filters, which are usually applied, are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top hat filter and the range operator (all ranking filters). Segmentation of objects is traditionally based on threshold grey values.
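
    Two of the pixel-based operations described above, shading correction and a linear grey-value stretch implemented through a look-up table, can be sketched as follows in Python/NumPy; the re-normalisation by the background mean in the shading correction, and the assumption of 8-bit input, are illustrative choices rather than part of the original method description.

        import numpy as np

        def shading_correct(image, background, mode="divide"):
            """Shading correction: subtract a background (white) image pixel by
            pixel, or divide by it, then restore the overall brightness level."""
            img = image.astype(float)
            bkg = background.astype(float)
            if mode == "subtract":
                out = img - bkg + bkg.mean()
            else:
                out = img / np.maximum(bkg, 1e-6) * bkg.mean()
            return np.clip(out, 0, 255).astype(np.uint8)

        def stretch_lut(image):
            """Linear contrast stretch applied through a 256-entry look-up table
            (image assumed to be 8-bit, i.e. integer grey values 0..255)."""
            lo, hi = int(image.min()), int(image.max())
            lut = np.clip((np.arange(256) - lo) * 255.0 / max(hi - lo, 1), 0, 255)
            return lut.astype(np.uint8)[image]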

  19. Beyond the resolution limit: subpixel resolution in animals and now in silicon

    NASA Astrophysics Data System (ADS)

    Wilcox, M. J.

    2007-09-01

    Automatic acquisition of aerial threats at thousands of kilometers distance requires high sensitivity to small differences in contrast and high optical quality for subpixel resolution, since targets occupy much less surface area than a single pixel. Targets travel at high speed and break up in the re-entry phase. Target/decoy discrimination at the earliest possible time is imperative. Real time performance requires a multifaceted approach with hyperspectral imaging and analog processing allowing feature extraction in real time. Hyperacuity Systems has developed a prototype chip capable of nonlinear increase in resolution or subpixel resolution far beyond either pixel size or spacing. Performance increase is due to a biomimetic implementation of animal retinas. Photosensitivity is not homogeneous across the sensor surface, allowing pixel parsing. It is remarkably simple to provide this profile to detectors and we showed at least three ways to do so. Individual photoreceptors have a Gaussian sensitivity profile and this nonlinear profile can be exploited to extract high-resolution. Adaptive, analog circuitry provides contrast enhancement, dynamic range setting with offset and gain control. Pixels are processed in parallel within modular elements called cartridges like photo-receptor inputs in fly eyes. These modular elements are connected by a novel function for a cell matrix known as L4. The system is exquisitely sensitive to small target motion and operates with a robust signal under degraded viewing conditions, allowing detection of targets smaller than a single pixel or at greater distance. Therefore, not only is instantaneous feature extraction possible but also subpixel resolution. Analog circuitry increases processing speed with more accurate motion specification for target tracking and identification.

  20. Implementation of total focusing method for phased array ultrasonic imaging on FPGA

    NASA Astrophysics Data System (ADS)

    Guo, JianQiang; Li, Xi; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2015-02-01

    This paper describes a multi-FPGA imaging system dedicated to real-time imaging using the Total Focusing Method (TFM) and Full Matrix Capture (FMC). The system was entirely described in the Verilog HDL language and implemented on an Altera Stratix IV GX FPGA development board. The algorithm proceeds as follows: establish a coordinate system for the image and divide it into a grid; for each focus point, calculate the complete acoustic path from the transmitting array element to the focus point and back to the receiving array element, and transform it into an index value; then use these indices to look up the sound pressure values stored in ROM and superimpose them to obtain the pixel value of that focus point; finally, compute the pixel values of all focus points to obtain the final image. The imaging results show that this algorithm yields defect images with a high SNR, and the FPGA's parallel processing capability provides high speed, so the system offers a complete and well-performing imaging interface.
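
    A software sketch of the same TFM/FMC computation (in Python rather than HDL) may make the indexing step clearer: for every image pixel, the round-trip acoustic path for each transmit-receive pair is converted to a sample index into the corresponding A-scan, and the indexed values are summed. Array geometry, sound speed and sampling rate are placeholders, not values from the paper.

        import numpy as np

        def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
            """Total Focusing Method on Full Matrix Capture data.
            fmc[tx, rx, t] : A-scan for every transmit/receive element pair
            elem_x         : element x-positions (elements assumed on z = 0)
            grid_x, grid_z : pixel coordinates of the image grid
            c, fs          : sound speed and sampling frequency (assumed)"""
            n = len(elem_x)
            img = np.zeros((len(grid_z), len(grid_x)))
            for iz, z in enumerate(grid_z):
                for ix, x in enumerate(grid_x):
                    # one-way distances from every element to this focus point
                    d = np.hypot(np.asarray(elem_x) - x, z)
                    acc = 0.0
                    for tx in range(n):
                        # round-trip time -> sample index into each A-scan
                        idx = ((d[tx] + d) / c * fs).astype(int)
                        idx = np.clip(idx, 0, fmc.shape[2] - 1)
                        acc += fmc[tx, np.arange(n), idx].sum()
                    img[iz, ix] = abs(acc)
            return img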

  1. Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications

    NASA Technical Reports Server (NTRS)

    Fossum, E.; Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Zhou, Z.

    1994-01-01

    This paper describes ongoing research and development of CMOS active pixel image sensors for low cost commercial applications. A number of sensor designs have been fabricated and tested in both p-well and n-well technologies. Major elements in the development of the sensor include on-chip analog signal processing circuits for the reduction of fixed pattern noise, on-chip timing and control circuits and on-chip analog-to-digital conversion (ADC). Recent results and continuing efforts in these areas will be presented.

  2. Less is More: How manipulative features affect children’s learning from picture books

    PubMed Central

    Tare, Medha; Chiong, Cynthia; Ganea, Patricia; DeLoache, Judy

    2010-01-01

    Picture books are ubiquitous in young children’s lives and are assumed to support children’s acquisition of information about the world. Given their importance, relatively little research has directly examined children’s learning from picture books. We report two studies examining children’s acquisition of labels and facts from picture books that vary on two dimensions: iconicity of the pictures and presence of manipulative features (or “pop-ups”). In Study 1, 20-month-old children generalized novel labels less well when taught from a book with manipulative features than from standard picture books without such elements. In Study 2, 30- and 36-month-old children learned fewer facts when taught from a manipulative picture book with drawings than from a standard picture book with realistic images and no manipulative features. The results of the two studies indicate that children’s learning from picture books is facilitated by realistic illustrations, but impeded by manipulative features. PMID:20948970

  3. High-voltage pixel sensors for ATLAS upgrade

    NASA Astrophysics Data System (ADS)

    Perić, I.; Kreidl, C.; Fischer, P.; Bompard, F.; Breugnon, P.; Clemens, J.-C.; Fougeron, D.; Liu, J.; Pangaud, P.; Rozanov, A.; Barbero, M.; Feigl, S.; Capeans, M.; Ferrere, D.; Pernegger, H.; Ristic, B.; Muenstermann, D.; Gonzalez Sevilla, S.; La Rosa, A.; Miucci, A.; Nessi, M.; Iacobucci, G.; Backhaus, M.; Hügging, Fabian; Krüger, H.; Hemperek, T.; Obermann, T.; Wermes, N.; Garcia-Sciveres, M.; Quadt, A.; Weingarten, J.; George, M.; Grosse-Knetter, J.; Rieger, J.; Bates, R.; Blue, A.; Buttar, C.; Hynds, D.

    2014-11-01

    The high-voltage (HV-) CMOS pixel sensors offer several good properties: fast charge collection by drift, the possibility to implement relatively complex CMOS in-pixel electronics and compatibility with commercial processes. The sensor element is a deep n-well diode in a p-type substrate. The n-well contains the CMOS pixel electronics. The main charge collection mechanism is drift in a shallow, high-field region, which leads to fast charge collection and high radiation tolerance. We are currently evaluating the use of high-voltage detectors implemented in a 180 nm HV-CMOS technology for the high-luminosity ATLAS upgrade. Our approach replaces the existing pixel and strip sensors with CMOS sensors while keeping the presently used readout ASICs; the sensor itself is made intelligent, meaning that it is able to recognize a particle hit and generate the address information. In this way we could benefit from the advantages of the HV sensor technology such as lower cost, lower mass, lower operating voltage, smaller pitch, and smaller clusters at high incidence angles. Additionally, we expect to achieve the radiation hardness necessary for the ATLAS upgrade. In order to test the concept, we have designed two HV-CMOS prototypes that can be read out in two ways: using pixel and strip readout chips. In the case of the pixel readout, the connection between the HV-CMOS sensor and the readout ASIC can be established capacitively.

  4. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.

  5. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication system use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments are performed that update the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.
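
    For comparison with the single-frame case, metadata-only access to a conventional DICOM file looks like the following sketch using the pydicom library (this is not the authors' MSD implementation); even here each file must still be opened individually, which is the per-file cost that storing metadata separately in MSD avoids repeating.

        import pydicom

        def read_metadata_only(path):
            """Read a DICOM file's metadata without loading its pixel data,
            the kind of metadata-only access that the MSD format is built
            around (deidentification, study migration, and similar tasks)."""
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            return {"PatientID": str(ds.get("PatientID", "")),
                    "StudyInstanceUID": str(ds.get("StudyInstanceUID", ""))}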

  6. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    PubMed Central

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    Abstract. The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication system use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments are performed that update the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies’ metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata. PMID:26158117

  7. HUBBLE REVEALS STELLAR FIREWORKS ACCOMPANYING GALAXY COLLISION

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This Hubble Space Telescope image provides a detailed look at a brilliant 'fireworks show' at the center of a collision between two galaxies. Hubble has uncovered over 1,000 bright, young star clusters bursting to life as a result of the head-on wreck. [Left] A ground-based telescopic view of the Antennae galaxies (known formally as NGC 4038/4039) - so named because a pair of long tails of luminous matter, formed by the gravitational tidal forces of their encounter, resembles an insect's antennae. The galaxies are located 63 million light-years away in the southern constellation Corvus. [Right] The respective cores of the twin galaxies are the orange blobs, left and right of image center, crisscrossed by filaments of dark dust. A wide band of chaotic dust, called the overlap region, stretches between the cores of the two galaxies. The sweeping spiral- like patterns, traced by bright blue star clusters, shows the result of a firestorm of star birth activity which was triggered by the collision. This natural-color image is a composite of four separately filtered images taken with the Wide Field Planetary Camera 2 (WFPC2), on January 20, 1996. Resolution is 15 light-years per pixel (picture element). Credit: Brad Whitmore (STScI), and NASA

  8. Micromachined poly-SiGe bolometer arrays for infrared imaging and spectroscopy

    NASA Astrophysics Data System (ADS)

    Leonov, Vladimir N.; Perova, Natalia A.; De Moor, Piet; Du Bois, Bert; Goessens, Claus; Grietens, Bob; Verbist, Agnes; Van Hoof, Chris A.; Vermeiren, Jan P.

    2003-03-01

    The state-of-the-art characteristics of micromachined polycrystalline SiGe microbolometer arrays are reported. An average NETD of 85 mK at a time constant of 14 ms is already achievable on typical self-supported 50 μm pixels in a linear 64-element array. In order to reach these values, the design optimization was performed based on the performance characteristics of linear 32-, 64- and 128-element arrays of 50-, 60- and 75-μm-pixel bolometers on several detector lots. The infrared and thermal modeling accounting for the read-out properties and self-heating effect in bolometers resulted in improved designs and competitive NETD values of 80 mK on 50 μm pixels in a 160x128 format at standard frame rates and f-number of 1. In parallel, the TCR-to-1/f noise ratio and the mechanical design of the pixels were improved making poly-SiGe a good candidate for a low-cost uncooled thermal array. The technological CMOS-based process possesses an attractive balance between characteristics and price, and allows the micromachining of thin structures, less than 0.2 μm. The resistance and TCR non-uniformity with σ/μ better than 0.2% combined with 99.93% yield are demonstrated. The first lots of fully processed linear arrays have already come from the IMEC process line and the results of characterization are presented. Next year, the first linear and small 2D arrays will be introduced on the market.

  9. A custom hardware classifier for bruised apple detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Cárdenas, Javier; Figueroa, Miguel; Pezoa, Jorge E.

    2015-09-01

    We present a custom digital architecture for bruised apple classification using hyperspectral images in the near infrared (NIR) spectrum. The algorithm classifies each pixel in an image into one of three classes: bruised, non-bruised, and background. We extract two 5-element feature vectors for each pixel using only 10 of the 236 spectral bands provided by the hyperspectral camera, thereby greatly reducing both the requirements on the imager and the computational complexity of the algorithm. We then use two linear-kernel support vector machines (SVMs) to classify each pixel. Each SVM was trained, for each class, with 504 windows of 17 × 17 pixels taken from 14 hyperspectral images of 320 × 320 pixels each. The architecture then computes the percentage of bruised pixels in each apple in order to classify the fruit. We implemented the architecture on a Xilinx Zynq Z-7010 field-programmable gate array (FPGA) and tested it on images from a NIR N17E push-broom camera with a frame rate of 25 fps, a band-pixel rate of 1.888 MHz, and 236 spectral bands between 900 and 1700 nanometers, under laboratory conditions. Using 28-bit fixed-point arithmetic, the circuit correctly discriminates 95.2% of the pixels corresponding to an apple, 81% of the pixels corresponding to a bruised apple, and 96.4% of the background. With the default threshold settings, the highest false-positive (FP) rate for a bruised apple is 18.7%. The circuit operates at the native frame rate of the camera, consumes 67 mW of dynamic power, and uses less than 10% of the logic resources on the FPGA.
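
    A software sketch of the pixel classification stage (Python with scikit-learn, not the fixed-point FPGA implementation). The split into an apple-vs-background SVM followed by a bruised-vs-non-bruised SVM, and the training-file names, are assumptions made for illustration, since the abstract does not spell out how the two SVMs and the two feature vectors are paired.

        import numpy as np
        from sklearn.svm import LinearSVC

        # Hypothetical training data: per-pixel 5-element feature vectors taken
        # from a subset of the hyperspectral bands, with labels
        # 0 = background, 1 = non-bruised apple, 2 = bruised apple.
        X_train = np.load("features.npy")      # shape (n_pixels, 5), assumed file
        y_train = np.load("labels.npy")

        # Two linear SVMs: apple vs background, then bruised vs non-bruised
        # applied only to the pixels classified as apple.
        svm_apple = LinearSVC().fit(X_train, (y_train > 0).astype(int))
        apple_rows = y_train > 0
        svm_bruise = LinearSVC().fit(X_train[apple_rows], (y_train[apple_rows] == 2).astype(int))

        def classify_pixels(X):
            """Return per-pixel labels 0/1/2 as in the training convention."""
            is_apple = svm_apple.predict(X).astype(bool)
            labels = np.zeros(len(X), dtype=int)          # 0 = background
            labels[is_apple] = 1                          # non-bruised apple
            apple_idx = np.where(is_apple)[0]
            labels[apple_idx[svm_bruise.predict(X[is_apple]) == 1]] = 2
            return labels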

  10. Solar-blind ultraviolet optical system design for missile warning

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Huo, Furong; Zheng, Liqin

    2015-03-01

    The solar-blind region of the ultraviolet (UV) spectrum has very important applications in the military field. The spectral range is from 240 nm to 280 nm, which can be used to detect the tail flame of an approaching missile. A solar-blind UV optical system, which is an energy system, is designed to detect this UV radiation. The iKon-L 936 from ANDOR is selected as the UV detector; it has a pixel size of 13.5 μm x 13.5 μm and an active image area of 27.6 mm x 27.6 mm. CaF2 and fused silica are the chosen materials. The original structure is composed of 6 elements. To simplify the system structure and improve image quality, two aspheric surfaces and one diffractive optical element are adopted in this paper. After optimization and normalization, the designed system is composed of five elements with a maximum spot size of 11.988 μm, which is less than the pixel size of the selected CCD detector. The application of an aspheric surface and a diffractive optical element gives each field of view a similar spot size, which shows that the system nearly satisfies the isoplanatic condition. If the focal length can be decreased, the FOV of the system can be enlarged further.

  11. Characterisation of crystal matrices and single pixels for nuclear medicine applications

    NASA Astrophysics Data System (ADS)

    Herbert, D. J.; Belcari, N.; Camarda, M.; Guerra, A. Del; Vaiano, A.

    2005-01-01

    Commercially constructed crystal matrices are characterised for use with PSPMT detectors for PET system developments and other nuclear medicine applications. The matrices of different scintillation materials were specified with pixel dimensions of 1.5×1.5 mm2 in cross-section and a length corresponding to one gamma ray interaction length at 511 keV. The materials used in this study were BGO, LSO, LYSO, YSO and CsI(Na). Each matrix was constructed using a white TiO loaded epoxy that forms a 0.2 mm septa between each pixel. The white epoxy is not the optimum choice in terms of the reflective properties, but represents a good compromise between cost and the need for optical isolation between pixels. We also tested a YAP matrix that consisted of pixels of the same size specification but was manufactured by a different company, who instead of white epoxy, used a thin aluminium reflective layer for optical isolation that resulted in a septal thickness of just 0.01 mm, resulting in a much higher packing fraction. The characteristics of the scintillation materials, such as the light output and energy resolution, were first studied in the form of individual crystal elements by using a single pixel HPD. A comparison of individual pixels with and without the epoxy/dielectric coatings was also performed. Then the matrices themselves were coupled to a PSPMT in order to study the imaging performance. In particular, the system pixel resolution and the peak to valley ratio were measured at 511 and 122 keV.

  12. HUBBLE WATCHES THE RED PLANET AS MARS GLOBAL SURVEYOR BEGINS AEROBRAKING

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This NASA Hubble Space Telescope picture of Mars was taken on Sept. 12, one day after the arrival of the Mars Global Surveyor (MGS) spacecraft and only five hours before the beginning of autumn in the Martian northern hemisphere. (Mars is tilted on its axis like Earth, so it has similar seasonal changes, including an autumnal equinox when the Sun crosses Mars' equator from the northern to the southern hemisphere). This Hubble picture was taken in support of the MGS mission. Hubble is monitoring the Martian weather conditions during the early phases of MGS aerobraking; in particular, the detection of large dust storms is an important input into the atmospheric models used by the MGS mission to plan aerobraking operations. Though a dusty haze fills the giant Hellas impact basin south of the dark fin-shaped feature Syrtis Major, the dust appears to be localized within Hellas. Unless the region covered expands significantly, the dust will not be of concern for MGS aerobraking. Other early signs of seasonal transitions on Mars are apparent in the Hubble picture. The northern polar ice cap is blanketed under a polar hood of clouds that typically start forming in late northern summer. As fall progresses, sunlight will dwindle in the north polar region and the seasonal polar cap of frozen carbon dioxide will start condensing onto the surface under these clouds. Hubble observations will continue until October 13, as MGS carefully uses the drag of the Martian atmosphere to circularize its orbit about the Red Planet. After mid-October, Mars will be too close to the Sun, in angular separation, for Hubble to safely view. The image is a composite of three separately filtered colored images taken with the Wide Field Planetary Camera 2 (WFPC2). Resolution is 35 miles (57 kilometers) per pixel (picture element). The Pathfinder landing site near Ares Valles is about 2200 miles (3600 kilometers) west of the center of this image, so it was not visible during this observation. Mars was 158 million miles (255 million kilometers) from Earth at the time. [LEFT] An image of this region of Mars, taken in June 1997, is shown for comparison. The Hellas basin is filled with bright clouds and/or surface frost. More water ice clouds are visible across the planet than in the Sept. image, reflecting the effects of the changing season. Mars appears larger because it was 44 million miles (77 million kilometers) closer to Earth than in the September image. Credit: Phil James (Univ. Toledo) and Steve Lee (Univ. Colorado), and NASA

  13. What Can We Learn about Pictures from the Blind?.

    ERIC Educational Resources Information Center

    Kennedy, John M.

    1983-01-01

    A series of studies on tangible pictures and their application to blind persons are reviewed and possible explanations for the suggestion of depth offered by outline drawings are discussed. Findings from ancient cave and rock art, together with drawings made by blind children and adults suggest that outline drawings contain some elements that are…

  14. Visual and Verbal Literacy.

    ERIC Educational Resources Information Center

    Stewig, John Warren

    Visual literacy--seeing with insight--enables child viewers of pictures to examine elements such as color, line, shape, form, depth, and detail to see what relations exist both among these components and between what is in the picture and their previous visual experience. The viewer can extract meaning and respond to it, either by talking or…

  15. Stress analysis and buckling of J-stiffened graphite-epoxy panel

    NASA Technical Reports Server (NTRS)

    Davis, R. C.

    1980-01-01

    A graphite epoxy shear panel with bonded-on J stiffeners was investigated. The panel, loaded to buckling in a picture frame shear test, is described. Two finite element models, each of which included the doubler material bonded to the panel skin under the stiffeners and at the panel edges, were used to make a stress analysis of the panel. The shear load distributions in the panel from two commonly used boundary conditions, applied shear load and applied displacement, were compared with the results from one of the finite element models that included the picture frame test fixture.

  16. East Candor Chasma

    NASA Technical Reports Server (NTRS)

    1997-01-01

    During its examination of Mars, the Viking 1 spacecraft returned images of Valles Marineris, a huge canyon system 5,000 km long, up to 240 km wide, and 6.5 km deep, whose connected chasma or valleys may have formed from a combination of erosional collapse and structural activity. The view shows east Candor Chasma, one of the connected valleys of Valles Marineris; north toward top of frame; for scale, the impact crater in upper right corner is 15 km (9 miles) wide. The image, centered at latitude 7.5 degrees S., longitude 67.5 degrees, is a composite of Viking 1 Orbiter high-resolution (about 80 m/pixel or picture element) images in black and white and low-resolution (about 250 m/pixel) images in color. The Viking 1 craft landed on Mars in July of 1976.

    East Candor Chasma occupies the eastern part of the large west-northwest-trending trough of Candor Chasma. This section is about 150 km wide. East Candor Chasma is bordered on the north and south by walled cliffs, most likely faults. The walls may have been dissected by landslides forming reentrants; one area on the north wall shows what appears to be landslide debris. Both walls show spur-and-gully morphology and smooth sections. In the lower part of the image northwest-trending, linear depressions on the plateau are younger graben or fault valleys that cut the south wall.

    Material central to the chasma shows layering in places and has been locally eroded by the wind to form flutes and ridges. These interior layered deposits have curvilinear reentrants carved into them, and in one locale a lobe flows away from the top of the interior deposit. The lobe may be mass-wasting deposits due to collapse of older interior deposits (Lucchitta, 1996, LPSC XXVII abs., p. 779- 780); this controversial idea requires that the older layered deposits were saturated with ice, perhaps from former lakes, and that young volcanism and/or tectonism melted the ice and made the material flow.

  17. Micro Coronal Bright Points Observed in the Quiet Magnetic Network by SOHO/EIT

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Porter, J. G.

    1997-01-01

    When one looks at SOHO/EIT Fe XII images of quiet regions, one can see the conventional coronal bright points (> 10 arcsec in diameter), but one will also notice many smaller, faint enhancements in brightness (Figure 1). Do these micro coronal bright points belong to the same family as the conventional bright points? To investigate this question we compared SOHO/EIT Fe XII images with Kitt Peak magnetograms to determine whether the micro bright points are in the magnetic network and mark magnetic bipoles within the network. To identify the coronal bright points, we applied a picture frame filter to the Fe XII images; this brings out the Fe XII network and bright points (Figure 2) and allows us to study the bright points down to the resolution limit of the SOHO/EIT instrument. This picture frame filter is a square smoothing function (half a network cell wide) with a central square (a quarter of a network cell wide) removed, so that a bright point's intensity does not affect its own background. This smoothing function is applied to the full-disk image. Then we divide the original image by the smoothed image to obtain our filtered image. A bright point is defined as any contiguous set of pixels (including diagonal neighbours) which have enhancements of 30% or more above the background; a micro bright point is any bright point 16 pixels or smaller in size. We then analyzed the bright points that were fully within quiet regions (0.6 x 0.6 solar radius) centered on disk center on six different days.
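
    The picture frame filter and the 30% detection rule described above can be sketched as follows (Python with scipy.ndimage); the outer and inner box widths should be set to half and a quarter of a network cell, in pixels, for the instrument in question.

        import numpy as np
        from scipy.ndimage import uniform_filter, label

        def picture_frame_filter(img, outer, inner):
            """Background estimate from a square 'picture frame': the mean over
            an outer x outer box with the central inner x inner box excluded,
            so a bright point does not contribute to its own background."""
            big = uniform_filter(img.astype(float), outer)
            small = uniform_filter(img.astype(float), inner)
            frame = (big * outer**2 - small * inner**2) / (outer**2 - inner**2)
            return img / frame

        def find_bright_points(img, outer, inner, max_pixels=16):
            """Contiguous pixels (diagonal neighbours included) at least 30%
            above background; groups of max_pixels or fewer are 'micro'."""
            filt = picture_frame_filter(img, outer, inner)
            labels, n = label(filt >= 1.3, structure=np.ones((3, 3)))
            sizes = np.bincount(labels.ravel())[1:]
            return labels, sizes <= max_pixels   # label image, per-label micro flag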

  18. Neuropsychological functioning of an Asperger child with exceptional skill in arranging picture stories.

    PubMed

    Conson, Massimiliano; Salzano, Sara; Grossi, Dario

    2011-08-01

    A striking special ability in arranging picture stories was reported in an Asperger child (C.M.) showing an exceptional performance on Wechsler picture arrangement subtest. Neuropsychological examination did not disclose visuoperceptual and spatial defects, or working memory, attention and executive disorders, but revealed an attentional bias towards local details of complex structures. A specific assessment of C.M.'s understanding of picture stories demonstrated that, with respect to normal controls, he showed an enhanced ability to detect causal links among elements of a story. These findings provide support to the hypothesis that savantism can be related to strong systemizing in autism.

  19. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10^-5 s is used together with a pixelated phase-mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface it is possible to make the reference and object beams, which have orthogonal polarization states, coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/s.
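
    A sketch of the per-frame phase computation for a pixelated phase-mask camera, assuming the common arrangement in which each 2×2 cell of the mask samples the interferogram at phase shifts of 0, 90, 180 and 270 degrees; the actual cell layout of the camera used in the paper may differ.

        import numpy as np

        def phase_from_pixelated_mask(frame):
            """Four-step phase extraction from one pixelated phase-mask frame.
            Assumed unit-cell layout: [[0, 90], [270, 180]] degrees; one
            wrapped phase value is obtained per 2x2 cell."""
            i0   = frame[0::2, 0::2].astype(float)
            i90  = frame[0::2, 1::2].astype(float)
            i270 = frame[1::2, 0::2].astype(float)
            i180 = frame[1::2, 1::2].astype(float)
            return np.arctan2(i270 - i90, i0 - i180)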

  20. Opto-mechanical design of PANIC

    NASA Astrophysics Data System (ADS)

    Fried, Josef W.; Baumeister, Harald; Huber, Armin; Laun, Werner; Rohloff, Ralf-Rainer; Concepción Cárdenas, M.

    2010-07-01

    PANIC, the Panoramic Near-Infrared Camera, is a new instrument for the Calar Alto Observatory. A 4x4 k detector yields a field of view of 0.5x0.5 degrees at a pixel scale of 0.45 arc sec/pixel at the 2.2m telescope. PANIC can be used also at the 3.5m telescope with half the pixel scale. The optics consists of 9 lenses and 3 folding mirrors. Mechanical tolerances are as small as 50 microns for some elements. PANIC will have a low thermal background due to cold stops. Read-out is done with MPIA's own new electronics which allows read-out of 132 channels in parallel. Weight and size limits lead to interesting design features. Here we describe the opto-mechanical design.

  1. Advancements in DEPMOSFET device developments for XEUS

    NASA Astrophysics Data System (ADS)

    Treis, J.; Bombelli, L.; Eckart, R.; Fiorini, C.; Fischer, P.; Hälker, O.; Herrmann, S.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Schaller, G.; Schopper, F.; Soltau, H.; Strüder, L.; Wölfel, S.

    2006-06-01

    DEPMOSFET based Active Pixel Sensor (APS) matrices are a new detector concept for X-ray imaging spectroscopy missions. They can cope with the challenging requirements of the XEUS Wide Field Imager and combine excellent energy resolution, high-speed readout and low power consumption with the attractive feature of random accessibility of pixels. From the evaluation of the first prototypes, new concepts have been developed to overcome the minor drawbacks and problems encountered with the older devices. The new devices will have a pixel size of 75 μm × 75 μm. Besides 64 × 64 pixel arrays, prototypes with sizes of 256 × 256 pixels and 128 × 512 pixels and an active area of about 3.6 cm2 will be produced, a milestone on the way towards the fully grown XEUS WFI device. The production of these improved devices is currently under way. At the same time, the development of the next generation of front-end electronics has been started, which will permit operating the sensor devices at the readout speed required by XEUS. Here, a summary of the DEPFET capabilities, the concept of the next-generation sensors and the new front-end electronics is given. Additionally, prospects of new device developments using the DEPFET as a sensitive element are shown, e.g. so-called RNDR pixels, which feature repetitive non-destructive readout to lower the readout noise below the 1 e- ENC limit.

  2. Programmable architecture for pixel level processing tasks in lightweight strapdown IR seekers

    NASA Astrophysics Data System (ADS)

    Coates, James L.

    1993-06-01

    Typical processing tasks associated with missile IR seeker applications are described, and a straw man suite of algorithms is presented. A fully programmable multiprocessor architecture is realized on a multimedia video processor (MVP) developed by Texas Instruments. The MVP combines the elements of RISC, floating point, advanced DSPs, graphics processors, display and acquisition control, RAM, and external memory. Front end pixel level tasks typical of missile interceptor applications, operating on 256 x 256 sensor imagery, can be processed at frame rates exceeding 100 Hz in a single MVP chip.

  3. ASRC RSS Data

    DOE Data Explorer

    Kiedron, Peter

    2008-01-15

    Once every minute between sunrise and sunset, the Rotating Shadowband Spectroradiometer (RSS) simultaneously measures three irradiances: total horizontal, diffuse horizontal and direct normal, in the near-ultraviolet, visible and near-infrared range (approx. 370 nm-1050 nm), at 512 (RSS103) or 1024 (RSS102 and RSS105) adjacent spectral resolving elements (pixels). The resolution is pixel (wavelength) dependent and differs from instrument to instrument. The reported irradiances are cosine-response corrected, and their radiometric calibration is based on incandescent lamp calibrators that can be traced to the NIST irradiance scale. The units are W/m2/nm.

  4. On-chip skin color detection using a triple-well CMOS process

    NASA Astrophysics Data System (ADS)

    Boussaid, Farid; Chai, Douglas; Bouzerdoum, Abdesselam

    2004-03-01

    In this paper, a current-mode VLSI architecture enabling skin detection at read-out, without the need for any on-chip memory elements, is proposed. An important feature of the proposed architecture is that it removes the need for demosaicing. Color separation is achieved using the strong wavelength dependence of the absorption coefficient in silicon: blue light is absorbed very near the surface, while red light penetrates deep into the silicon. A triple-well process, allowing a P-well to be placed inside an N-well, is chosen to fabricate three vertically integrated photodiodes acting as the RGB color detector for each pixel. Pixels of an input RGB image are classified as skin or non-skin pixels using a statistical skin color model, chosen to offer an acceptable trade-off between skin detection performance and implementation complexity. A single processing unit is used to classify all pixels of the input RGB image. This results in reduced mismatch and an increased pixel fill-factor. Furthermore, the proposed current-mode architecture is programmable, allowing external control of all classifier parameters to compensate for mismatch and changing lighting conditions.
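
    The abstract does not specify the statistical skin model implemented on the chip, so the following is only a hypothetical per-pixel rule, a simple chromaticity box classifier with placeholder thresholds, to illustrate how a memoryless, pixel-by-pixel skin/non-skin decision of this kind can be expressed:

      import numpy as np

      def classify_skin(rgb, r_lo=0.36, r_hi=0.47, g_lo=0.26, g_hi=0.37):
          """Label pixels skin/non-skin with a simple chromaticity box rule.
          The thresholds are illustrative placeholders, not the classifier
          parameters of the chip described in the abstract."""
          rgb = rgb.astype(float)
          s = rgb.sum(axis=-1) + 1e-9          # avoid division by zero
          r = rgb[..., 0] / s                  # normalized red chromaticity
          g = rgb[..., 1] / s                  # normalized green chromaticity
          return (r > r_lo) & (r < r_hi) & (g > g_lo) & (g < g_hi)

      # Example: classify a random 4x4 RGB image
      mask = classify_skin(np.random.randint(0, 256, (4, 4, 3)))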

  5. Charge Resolution of the Silicon Matrix of the ATIC Experiment

    NASA Technical Reports Server (NTRS)

    Zatsepin, V. I.; Adams, J. H., Jr.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Case, G.; Christl, M.; Ganel, O.; Fazely, A. R.

    2002-01-01

    ATIC (Advanced Thin Ionization Calorimeter) is a balloon-borne experiment designed to measure the cosmic ray composition for elements from hydrogen to iron and their energy spectra from approximately 50 GeV to near 100 TeV. It consists of a Si-matrix detector to determine the charge of the incident cosmic-ray particle, a scintillator hodoscope for tracking, carbon interaction targets and a fully active BGO calorimeter. ATIC had its first flight from McMurdo, Antarctica, from 28/12/2000 to 13/01/2001 and collected approximately 25 million events. The silicon matrix of the ATIC spectrometer is designed to resolve individual elements from protons to iron, which requires careful calibration of each pixel of the silicon matrix. First, for each electronic channel of the matrix the pedestal value was subtracted, taking into account its drift during the flight. The muon calibration made before the flight was then used to convert electric signals (in ADC channel number) to energy deposits in each pixel. However, the preflight muon calibration was not accurate enough for this purpose because of the lack of statistics in each pixel. To improve the charge resolution, a correction was applied for the position of the helium peak in each pixel during the flight. An alternative way to bring the electric signals of the Si-matrix channels to a common scale is a correction for the channel gains accurately measured in the laboratory; in these measurements it was found that small channel-to-channel nonlinearities are present in the region of charge Z > 20, and a correction for these nonlinearities has not yet been applied. In the linear approximation this method provides practically the same resolution as the muon calibration plus the He-peak correction. To find the pixel containing the signal of the primary particle, the cascade in the calorimeter was used: a trajectory was reconstructed from the weighted centers of the energy deposits in the BGO layers, the point of intersection of this trajectory with the Si-matrix and its RMS were determined, and the pixel with the maximal signal within the 3-sigma region was selected. The signal in this pixel was corrected for the trajectory zenith angle. Preliminary results on the charge resolution of the Si-matrix in the range from protons to iron are presented.
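
    As a rough illustration of the per-pixel calibration chain and primary-pixel selection described above, the sketch below strings the steps together in Python; all calibration constants and variable names are placeholders, not ATIC values:

      import numpy as np

      def corrected_signal(adc, pedestal, muon_gain, he_peak_scale, cos_zenith):
          """Convert a raw ADC value to a path-length-corrected energy deposit.
          All calibration constants here are placeholders for the per-pixel
          quantities described in the abstract."""
          e = (adc - pedestal) * muon_gain     # pedestal subtraction + muon calibration
          e *= he_peak_scale                   # per-pixel He-peak normalization
          return e * cos_zenith                # correct for trajectory zenith angle

      def pick_primary_pixel(signals, positions, x_track, sigma_track):
          """Select the pixel with the largest signal within 3 sigma of the point
          where the reconstructed shower axis crosses the Si matrix."""
          d = np.linalg.norm(positions - x_track, axis=1)
          idx = np.flatnonzero(d < 3.0 * sigma_track)
          return idx[np.argmax(signals[idx])] if idx.size else None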

  6. Delayed matching to two-picture samples by individuals with and without disabilities: an analysis of the role of naming.

    PubMed

    Gutowski, Stanley J; Stromer, Robert

    2003-01-01

    Delayed matching to complex, two-picture samples (e.g., cat-dog) may be improved when the samples occasion differential verbal behavior. In Experiment 1, individuals with mental retardation matched picture comparisons to identical single-picture samples or to two-picture samples, one of which was identical to a comparison. Accuracy scores were typically high on single-picture trials under both simultaneous and delayed matching conditions. Scores on two-picture trials were also high during the simultaneous condition but were lower during the delay condition. However, scores improved on delayed two-picture trials when each of the sample pictures was named aloud before comparison responding. Experiment 2 replicated these results with preschoolers with typical development and a youth with mental retardation. Sample naming also improved the preschoolers' matching when the samples were pairs of spoken names and the correct comparison picture matched one of the names. Collectively, the participants could produce the verbal behavior that might have improved performance, but typically did not do so unless the procedure required it. The success of the naming intervention recommends it for improving the observing and remembering of multiple elements of complex instructional stimuli.

  7. Super-Resolution Enhancement From Multiple Overlapping Images: A Fractional Area Technique

    NASA Astrophysics Data System (ADS)

    Michaels, Joshua A.

    With the availability of large quantities of relatively low-resolution data from several decades of spaceborne imaging, methods of creating an accurate, higher-resolution image from multiple lower-resolution images (i.e., super-resolution) have been developed almost since such imagery has existed. The fractional-area super-resolution technique developed in this thesis has not previously been documented. Satellite orbits, like Landsat's, have a quantifiable variation, which means each image is not centered on exactly the same spot more than once, and the overlapping information from these multiple images may be used for super-resolution enhancement. By splitting a single initial pixel into many smaller, desired pixels, a relationship can be created between them using the ratio of the area within the initial pixel. The ideal goal for this technique is to obtain smaller pixels with exact values and no error, yielding a better potential result than methods that yield interpolated pixel values with a consequent loss of spatial resolution. A Fortran 95 program was developed to perform all calculations associated with the fractional-area super-resolution technique. The fractional areas are calculated using traditional trigonometry and coordinate geometry, and the Linear Algebra Package (LAPACK; Anderson et al., 1999) is used to solve for the higher-resolution pixel values. To demonstrate proof of concept, a synthetic dataset was created using the intrinsic Fortran random number generator and Adobe Illustrator CS4 (for geometry). To test real-life application, digital pictures of a large US geological map were taken under fluorescent lighting with a Sony DSC-S600 point-and-shoot camera on a tripod. While the fractional-area super-resolution technique works in perfect synthetic conditions, it did not produce a reasonable or consistent solution in the digital photograph enhancement test. The prohibitive amount of processing time (up to 60 days for a relatively small enhancement area) severely limits the practical usefulness of fractional-area super-resolution, and the technique is very sensitive to the relative co-registration of the input images, which must be accurate to a sub-pixel degree. However, if input conditions permit, it could be applied as a "pinpoint" super-resolution technique over very small areas with very good input image co-registration.
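
    The core of the technique is that each observed coarse pixel is an area-weighted sum of the unknown fine pixels it overlaps, so stacking one such equation per overlapping coarse observation gives a linear system solvable in the least-squares sense. A minimal sketch (in Python rather than the thesis's Fortran 95; numpy.linalg.lstsq also calls LAPACK underneath):

      import numpy as np

      def solve_fine_pixels(area_fractions, coarse_values):
          """Recover fine-pixel values from overlapping coarse observations.
          area_fractions : (n_obs, n_fine) matrix; row i holds the fraction of
              coarse pixel i's area covered by each fine pixel (rows sum to 1).
          coarse_values  : (n_obs,) observed coarse-pixel intensities."""
          fine, *_ = np.linalg.lstsq(area_fractions, coarse_values, rcond=None)
          return fine

      # Toy example: two fine pixels observed through three shifted coarse pixels.
      A = np.array([[1.0, 0.0],
                    [0.5, 0.5],
                    [0.0, 1.0]])
      b = A @ np.array([10.0, 20.0])        # synthetic "true" fine values
      print(solve_fine_pixels(A, b))        # -> [10. 20.]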

  8. Heavy element synthesis in the Universe

    NASA Astrophysics Data System (ADS)

    Ramirez-Ruiz, Enrico

    2018-06-01

    The source of about half of the heaviest elements in the Universe has been a mystery for a long time. Although the general picture of element formation is well understood, many questions about the nuclear physics processes and particularly the astrophysical details remain to be answered. Here I focus on recent advances in our understanding of the origin of the heaviest and rarest elements in the Universe.

  9. Aesthetics and Children's Picture-Books

    ERIC Educational Resources Information Center

    Leddy, Thomas

    2002-01-01

    Some writers on children's picture-books seem to believe that comfort and reassurance are their most important goals. Yet these books may also provide a context for imaginative adventure, and even for a journey into the dark night of the soul. As Nietzsche would put it, there is a Dionysian as well as an Apollonian element in children's…

  10. Background, Foreground and the Ground in Between: Layers of Depth!

    ERIC Educational Resources Information Center

    McGovern, Cynthia

    2009-01-01

    This article describes a lesson in picture depth. The lesson involves tracing paper layers, object size and raising the horizon line per page layer. Students learn to look at and scan a photograph or print, break down the picture's qualities, and design elements and principles. Students also gain an increased appreciation for design and artist…

  11. Martian "Swiss Cheese"

    NASA Image and Video Library

    2000-04-24

    This image is illuminated by sunlight from the upper left. Looking like pieces of sliced and broken swiss cheese, the upper layer of the martian south polar residual cap has been eroded, leaving flat-topped mesas into which are set circular depressions such as those shown here. The circular features are depressions, not hills. The largest mesas here stand about 4 meters (13 feet) high and may be composed of frozen carbon dioxide and/or water. Nothing like this has ever been seen anywhere on Mars except within the south polar cap, leading to some speculation that these landforms may have something to do with the carbon dioxide thought to be frozen in the south polar region. On Earth, we know frozen carbon dioxide as "dry ice." On Mars, as this picture might be suggesting, there may be entire landforms larger than a small town and taller than 2 to 3 men and women that consist, in part, of dry ice. No one knows for certain whether frozen carbon dioxide has played a role in the creation of the "swiss cheese" and other bizarre landforms seen in this picture. The picture covers an area 3 x 9 kilometers (1.9 x 5.6 miles) near 85.6°S, 74.4°W at a resolution of 7.3 meters (24 feet) per pixel. This picture was taken by the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) during early southern spring on August 3, 1999. http://photojournal.jpl.nasa.gov/catalog/PIA02367

  12. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent

    2012-01-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1-D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  13. Contact CMOS imaging of gaseous oxygen sensor array

    PubMed Central

    Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.

    2014-01-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909

  14. Contact CMOS imaging of gaseous oxygen sensor array.

    PubMed

    Daivasagaya, Daisy S; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C; Chodavarapu, Vamsy P; Bright, Frank V

    2011-10-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol-gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors.

  15. The research of the style of angel lobster eye x-ray optical system

    NASA Astrophysics Data System (ADS)

    Cheng, HuaQi; Liu, YuJiao; Zhang, ShaoWei; Hou, LinBao; Li, XingLong

    2018-01-01

    This paper develops the theory of a wide-field Angel-type lobster-eye X-ray optical system based on the principle of total reflection at grazing incidence. Combining this with the configuration characteristics of the lobster eye, it concludes that the beam is focused at half the radius of curvature for both point and parallel-light sources. The microchannels were designed using real ray tracing, and the relationships among the different design parameters were obtained by numerical simulation. Simulated pixel and illumination pictures validate the wide-field focusing characteristic of the Angel lobster-eye optical system.

  16. Perimetric Complexity of Binary Digital Images: Notes on Calculation and Relation to Visual Complexity

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2011-01-01

    Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by 4π. Difficulties arise when this definition is applied to digital images composed of binary pixels. In this paper we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
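
    A minimal sketch of the definition quoted above, using a simple edge-transition count as the digital perimeter estimate (the paper itself is concerned with better-behaved estimates for binary pixel images):

      import numpy as np

      def perimetric_complexity(img):
          """Perimeter^2 / (4*pi*area) for a binary image (1 = foreground).
          The perimeter is estimated by counting 4-connected foreground/background
          transitions; the paper examines more careful digital estimates."""
          img = np.pad(img.astype(bool), 1, constant_values=False)
          perim = (np.count_nonzero(img[:, 1:] != img[:, :-1]) +
                   np.count_nonzero(img[1:, :] != img[:-1, :]))
          area = np.count_nonzero(img)
          return perim ** 2 / (4.0 * np.pi * area)

      print(perimetric_complexity(np.ones((10, 10), dtype=int)))  # square -> ~1.27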

  17. MGS MOC Returns to Service Following Solar Conjunction Hiatus

    NASA Technical Reports Server (NTRS)

    2000-01-01

    PIA01043 PIA01044

    Many aspects of our studies of Mars from Earth are dictated by the different rates at which the two planets orbit the Sun. This difference allows Earth to pass Mars in its orbit, continue to lead Mars around the Sun, and then eventually overtake Mars again, every 26 months. This cycle governs opportunities to send rockets to Mars when the closest approaches between the two planets occur (opposition). The cycle also dictates when Mars will pass behind the Sun relative to Earth (conjunction). A Solar Conjunction period has just ended. During this time radio communications from the Mars Global Surveyor spacecraft, operating at Mars, were interrupted for a few weeks. Because it would not be able to send pictures back to Earth during this time, the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) was turned off on June 21, 2000, and turned back on again July 13, 2000. The two pictures shown here are among the very first high resolution views of the martian surface that were received following the resumed operation of the MOC. Both pictures arrived on Earth via radio downlink on Saturday, July 15, 2000.

    The first picture (above left) shows a ridged and cratered plain in southern Hesperia Planum around 32.8°S, 243.2°W. The second image (above right) shows the layered northeastern wall of a meteor impact crater in Noachis Terra at 32.9°S, 357.6°W. Both pictures cover an area 3 kilometers (1.9 miles) wide at a resolution of 6 meters per pixel. Both are illuminated by sunlight from the upper left.

  18. Landsat 3 return beam vidicon response artifacts

    USGS Publications Warehouse

    ,; Clark, B.

    1981-01-01

    The return beam vidicon (RBV) sensing systems employed aboard Landsats 1, 2, and 3 have all been similar in that they have utilized vidicon tube cameras. These are not mirror-sweep scanning devices such as the multispectral scanner (MSS) sensors that have also been carried aboard the Landsat satellites. The vidicons operate more like common television cameras, using an electron gun to read images from a photoconductive faceplate. In the case of Landsats 1 and 2, the RBV system consisted of three such vidicons which collected remote sensing data in three distinct spectral bands. Landsat 3, however, utilizes just two vidicon cameras, both of which sense data in a single broad band. The Landsat 3 RBV system additionally has a unique configuration. As arranged, the two cameras can be shuttered alternately, twice each, in the same time it takes for one MSS scene to be acquired. This shuttering sequence results in four RBV "subscenes" for every MSS scene acquired, similar to the four quadrants of a square. See Figure 1. Each subscene represents a ground area of approximately 98 by 98 km. The subscenes are designated A, B, C, and D, for the northwest, northeast, southwest, and southeast quarters of the full scene, respectively. RBV data products are normally ordered, reproduced, and sold on a subscene basis and are in general referred to in this way. Each exposure from the RBV camera system presents an image which is 98 km on a side. When these analog video data are subsequently converted to digital form, the picture element, or pixel, that results is 19 m on a side with an effective resolution element of 30 m. This pixel size is substantially smaller than that obtainable in MSS images (the MSS has an effective resolution element of 73.4 m), and, when RBV images are compared to equivalent MSS images, better resolution in the RBV data is clearly evident. It is for this reason that the RBV system can be a valuable tool for remote sensing of earth resources. Until recently, RBV imagery was processed directly from wideband video tape data onto 70-mm film. This changed in September 1980 when digital production of RBV data at the NASA Goddard Space Flight Center (GSFC) began. The wideband video tape data are now subjected to analog-to-digital preprocessing and corrected both radiometrically and geometrically to produce high-density digital tapes (HDT's). The HDT data are subsequently transmitted via satellite (Domsat) to the EROS Data Center (EDC) where they are used to generate 241-mm photographic images at a scale of 1:500,000. Computer-compatible tapes of the data are also generated as digital products. Of the RBV data acquired since September 1, 1980, approximately 2,800 subscenes per month have been processed at EDC.

  19. Low-altitude remote sensing dataset of DEM and RGB mosaic for AB corridor on July 13 2013 and L2 corridor on July 21 2013

    DOE Data Explorer

    Baptiste Dafflon

    2015-04-07

    Low-altitude remote sensing dataset including DEM and RGB mosaic for the AB (July 13, 2013) and L2 (July 21, 2013) corridors. Processing flowchart for each corridor: Ground control points (GCP, 20.3 cm square white targets, every 20 m) surveyed with RTK GPS. Acquisition of RGB pictures using a kite-based platform. Structure-from-Motion based reconstruction using hundreds of pictures and GCP coordinates. Export of DEM and RGB mosaic in GeoTIFF format (NAD 83, 2012 geoid, UTM zone 4 north) with a pixel resolution of about 2 cm and x, y, z accuracy in the centimeter range (less than 10 cm). High accuracy and high resolution inside the GCP zone for the L2 corridor (500x20 m) and AB corridor (500x40); the DEM will be updated once all GCPs have been measured. Only the zones between GCPs are accurate, although the entire mosaic is provided.

  20. How many pixels make a memory? Picture memory for small pictures.

    PubMed

    Wolfe, Jeremy M; Kuzmova, Yoana I

    2011-06-01

    Torralba (Visual Neuroscience, 26, 123-131, 2009) showed that, if the resolution of images of scenes were reduced to the information present in very small "thumbnail images," those scenes could still be recognized. The objects in those degraded scenes could be identified, even though it would be impossible to identify them if they were removed from the scene context. Can tiny and/or degraded scenes be remembered, or are they like brief presentations, identified but not remembered? We report that memory for tiny and degraded scenes parallels the recognizability of those scenes. You can remember a scene to approximately the degree to which you can classify it. Interestingly, there is a striking asymmetry in memory when scenes are not the same size on their initial appearance and subsequent test. Memory for a large, full-resolution stimulus can be tested with a small, degraded stimulus. However, memory for a small stimulus is not retrieved when it is tested with a large stimulus.

  1. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular, or periodic, correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture, which is the size of the basic aperture pattern, contains all the information necessary to image the object with no artifacts.
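
    A hedged sketch of the decoding step implied by the cyclic correlation property: a basic-pattern-sized section of the recorded picture can be reconstructed by circular cross-correlation with a decoding array, conveniently via FFTs. The decoding array here is just a placeholder argument, not the patent's specific aperture pattern:

      import numpy as np

      def cyclic_decode(detector_section, decoding_array):
          """Reconstruct the object by circular cross-correlation of a
          basic-pattern-sized section of the picture with a decoding array.
          The decoding array would normally be derived from the aperture
          pattern; here it is a placeholder input of the same shape."""
          F = np.fft.fft2(detector_section)
          G = np.fft.fft2(decoding_array)
          return np.real(np.fft.ifft2(F * np.conj(G)))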

  2. The MIDAS processor. [Multivariate Interactive Digital Analysis System for multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Gordon, M. F.; Mclaughlin, R. H.; Marshall, R. E.

    1975-01-01

    The MIDAS (Multivariate Interactive Digital Analysis System) processor is a high-speed processor designed to process multispectral scanner data (from Landsat, EOS, aircraft, etc.) quickly and cost-effectively to meet the requirements of users of remote sensor data, especially from very large areas. MIDAS consists of a fast multipipeline preprocessor and classifier, an interactive color display and color printer, and a medium scale computer system for analysis and control. The system is designed to process data having as many as 16 spectral bands per picture element at rates of 200,000 picture elements per second into as many as 17 classes using a maximum likelihood decision rule.
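
    As an illustration of the per-pixel maximum likelihood decision rule mentioned above (not the MIDAS hardware pipeline itself), a minimal Gaussian classifier over multispectral pixels might look like this, assuming class means and covariances have already been estimated from training data:

      import numpy as np

      def ml_classify(pixels, means, covs):
          """Assign each pixel (n_pixels, n_bands) to the class whose Gaussian
          log-likelihood is highest. means: (n_classes, n_bands),
          covs: (n_classes, n_bands, n_bands)."""
          scores = []
          for mu, cov in zip(means, covs):
              d = pixels - mu
              inv = np.linalg.inv(cov)
              maha = np.einsum('ij,jk,ik->i', d, inv, d)   # Mahalanobis term per pixel
              scores.append(-0.5 * (maha + np.log(np.linalg.det(cov))))
          return np.argmax(np.stack(scores, axis=1), axis=1)  # winning class per pixel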

  3. Data Mining for Forecasting Mississippi Cropland Data Layers

    NASA Astrophysics Data System (ADS)

    Shore, F. L.; Gregory, T. L.

    2011-12-01

    In 1999, Mississippi became an early adopter of the National Agricultural Statistics Service (NASS) Cropland Data Layer (CDL) program. With the support of the NASS Spatial Analysis Research Section (SARS), we have progressed from an annual crop picture to a pixel-by-pixel history of Mississippi farming. Much of our early work for Mississippi agriculture is now easily provided from the web-based application CropScape, released by SARS in 2011. In this study, pixel history data from CDLs has been mined to give forecasts of Mississippi crop acres. Traditionally, such agricultural data mining emphasizes the trends of early adopters driven by factors such as global warming, technology, practices, or the marketplace. These studies provide forecasted CDL products produced using See5 and Imagine, the same software used in Mississippi CDL production since 2006. Mississippi CDL forecasts were made using historical information available as soon as the CDL for the previous year was completed. For example, the CDL forecast for winter wheat, produced at a date when winter wheat was planted but not most crops, gave results of 104.6 +/- 5.4% of the official NASS estimates for winter wheat for the years 2009-2011. In 2012, all of the states of the contiguous US will have the historical CDL data to do this type of study. A CDL forecast is proposed as a useful addition to CropScape.

  4. Terahertz imaging with compressive sensing

    NASA Astrophysics Data System (ADS)

    Chan, Wai Lam

    Most existing terahertz imaging systems are limited by slow image acquisition due to mechanical raster scanning. Other systems using focal plane detector arrays can acquire images in real time, but are either too costly or limited by low sensitivity in the terahertz frequency range. To design faster and more cost-effective terahertz imaging systems, the first part of this thesis proposes two new terahertz imaging schemes based on compressive sensing (CS). Both schemes can acquire amplitude and phase-contrast images efficiently with a single-pixel detector, thanks to the powerful CS algorithms which enable the reconstruction of N-by-N pixel images with far fewer than N² measurements. The first CS Fourier imaging approach successfully reconstructs a 64x64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels which defines the image in the Fourier plane. Only about 12% of the pixels are required for reassembling the image of a selected object, equivalent to a 2/3 reduction in acquisition time. The second approach is single-pixel CS imaging, which uses a series of random masks for acquisition. Besides speeding up acquisition with a reduced number of measurements, the single-pixel system can further cut down acquisition time by electrical or optical spatial modulation of random patterns. In order to switch between random patterns at high speed in the single-pixel imaging system, the second part of this thesis implements a multi-pixel electrical spatial modulator for terahertz beams using active terahertz metamaterials. The first generation of this device consists of a 4x4 pixel array, where each pixel is an array of sub-wavelength-sized split-ring resonator elements fabricated on a semiconductor substrate, and is independently controlled by applying an external voltage. The spatial modulator has a uniform modulation depth of around 40 percent across all pixels, and negligible crosstalk, at the resonant frequency. The second-generation spatial terahertz modulator, also based on metamaterials with a higher resolution (32x32), is under development. An FPGA-based circuit is designed to control the large number of modulator pixels. Once fully implemented, this second-generation device will enable fast terahertz imaging with both pulsed and continuous-wave terahertz sources.
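
    A toy sketch of the single-pixel measurement model and a generic sparse reconstruction (ISTA, iterative soft thresholding) as a stand-in for the CS solvers used in the thesis; the sizes and sparsity penalty are arbitrary:

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 16 * 16, 80                      # 16x16 scene, 80 single-pixel measurements
      x_true = np.zeros(n)
      x_true[rng.choice(n, 5, replace=False)] = 1.0    # sparse scene
      A = rng.integers(0, 2, (m, n)).astype(float)     # random binary masks
      y = A @ x_true                                   # single-pixel detector readings

      # ISTA: a simple placeholder for the CS reconstruction algorithms in the thesis.
      x = np.zeros(n)
      step = 1.0 / np.linalg.norm(A, 2) ** 2
      lam = 0.05
      for _ in range(300):
          x = x + step * A.T @ (y - A @ x)                          # gradient step on data term
          x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold (sparsity)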

  5. Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to- Digital Conversion

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.

    1993-01-01

    The design and projected performance of a low-light-level active-pixel-sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion is presented. The individual elements have been fabricated and tested using MOSIS 2 micrometer CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 x 128 array of active pixels at a 50 micrometer pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (Sigma-Delta) modulation. The 10-bit outputs of each converter are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e^- rms noise performance.
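
    A behavioral sketch of the first-order oversampled sigma-delta conversion each column performs, reduced to its essentials (integrate the input minus the fed-back 1-bit output, then average the bitstream); this is a textbook model, not the chip's actual circuit:

      import numpy as np

      def sigma_delta(x, oversample=1024):
          """First-order sigma-delta modulation of a constant input x in [0, 1],
          followed by simple averaging as the decimation filter."""
          integ, bits = 0.0, []
          for _ in range(oversample):
              integ += x - (bits[-1] if bits else 0)   # integrate input minus feedback
              bits.append(1 if integ >= 0.5 else 0)    # 1-bit quantizer
          return np.mean(bits)                         # decimated (averaged) output

      print(sigma_delta(0.3))   # approaches 0.3 as the oversampling ratio grows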

  6. Test beam measurement of the first prototype of the fast silicon pixel monolithic detector for the TT-PET project

    NASA Astrophysics Data System (ADS)

    Paolozzi, L.; Bandi, Y.; Benoit, M.; Cardarelli, R.; Débieux, S.; Forshaw, D.; Hayakawa, D.; Iacobucci, G.; Kaynak, M.; Miucci, A.; Nessi, M.; Ratib, O.; Ripiccini, E.; Rücker, H.; Valerio, P.; Weber, M.

    2018-04-01

    The TT-PET collaboration is developing a PET scanner for small animals with 30 ps time-of-flight resolution and sub-millimetre 3D detection granularity. The sensitive element of the scanner is a monolithic silicon pixel detector based on state-of-the-art SiGe BiCMOS technology. The first ASIC prototype for the TT-PET was produced and tested in the laboratory and with minimum ionizing particles. The electronics exhibit an equivalent noise charge below 600 e- RMS and a pulse rise time of less than 2 ns, in accordance with the simulations. The pixels, with a capacitance of 0.8 pF, were measured to have a detection efficiency greater than 99% and, although without post-processing, a time resolution of approximately 200 ps.

  7. Pictorial Conversations.

    ERIC Educational Resources Information Center

    Hooper, Kristina

    1982-01-01

    Provides the rationale for considering communication in a graphic domain and suggests a specific goal for designing work stations which provide graphic capabilities in educational settings. The central element of this recommendation is the "pictorial conversation", a highly interactive exchange that includes pictures as the central elements.…

  8. The Terrain of Margaritifer Chaos

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The jumbled and broken terrain in the picture on the left is known as chaotic terrain. Chaotic terrain was first observed in Mariner 6 and 7 images of Mars more than 30 years ago, and is thought to result from collapse after material--perhaps water or ice--was removed from the subsurface by events such as the formation of giant flood channels. The region shown here is named 'Margaritifer Chaos'. The left picture is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) red wide angle camera context frame that covers an area 115 km (71 miles) across. The small white box is centered at 10.3°S, 21.4°W and indicates the location of the high-resolution view on the right. The high resolution view (right) covers a small portion of the Margaritifer Chaos at 1.8 meters (6 feet) per pixel. The area shown is 3 km (1.9 miles) across. Uplands are lumpy with small bright outcrops of bedrock. Lowlands or valleys in the chaotic terrain have floors covered by light-toned windblown drifts. This image is typical of the very highest-resolution views of the equatorial latitudes of Mars. Both pictures are illuminated from the left/upper left, north is toward the top.

  9. Spatial distribution analysis of the OMI aerosol layer height: a pixel-by-pixel comparison to CALIOP observations

    NASA Astrophysics Data System (ADS)

    Chimot, Julien; Pepijn Veefkind, J.; Vlemmix, Tim; Levelt, Pieternel F.

    2018-04-01

    A global picture of atmospheric aerosol vertical distribution with a high temporal resolution is of key importance not only for climate, cloud formation, and air quality research studies but also for correcting scattered radiation induced by aerosols in absorbing trace gas retrievals from passive satellite sensors. Aerosol layer height (ALH) was retrieved from the OMI 477 nm O2-O2 band and its spatial pattern evaluated over selected cloud-free scenes. Such retrievals benefit from a synergy with MODIS data to provide complementary information on aerosols and cloudy pixels. We used a previously trained and developed neural network approach. Comparison with CALIOP aerosol level 2 products over urban and industrial pollution in eastern China shows consistent spatial patterns with an uncertainty in the range of 462-648 m. In addition, we show that it is possible to determine the height of thick aerosol layers released by intensive biomass burning events in South America and Russia from OMI visible measurements. A Saharan dust outbreak over sea is finally discussed. Complementary detailed analyses show that the assumed aerosol properties in the forward modelling are the key factors affecting the accuracy of the results, together with potential cloud residuals in the observation pixels. Furthermore, we demonstrate that the physical meaning of the retrieved ALH scalar corresponds to the weighted average of the vertical aerosol extinction profile. These encouraging findings strongly suggest the potential of the OMI ALH product, and more generally the use of the 477 nm O2-O2 band from present and future similar satellite sensors, for climate studies as well as for future aerosol correction in air quality trace gas retrievals.
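
    The statement that the retrieved ALH scalar corresponds to the weighted average of the vertical aerosol extinction profile amounts to the following small computation (the profile values below are purely illustrative):

      import numpy as np

      def extinction_weighted_height(z, extinction):
          """Aerosol layer height as the extinction-weighted mean altitude,
          the quantity the abstract identifies with the retrieved ALH scalar."""
          return np.sum(z * extinction) / np.sum(extinction)

      z = np.linspace(0.0, 10.0, 101)                  # altitude grid [km]
      ext = np.exp(-0.5 * ((z - 3.0) / 0.8) ** 2)      # illustrative Gaussian aerosol layer
      print(extinction_weighted_height(z, ext))        # ~3 km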

  10. Effect of image resolution manipulation in rearfoot angle measurements obtained with photogrammetry

    PubMed Central

    Sacco, I.C.N.; Picon, A.P.; Ribeiro, A.P.; Sartor, C.D.; Camargo-Junior, F.; Macedo, D.O.; Mori, E.T.T.; Monte, F.; Yamate, G.Y.; Neves, J.G.; Kondo, V.E.; Aliberti, S.

    2012-01-01

    The aim of this study was to investigate the influence of image resolution manipulation on the photogrammetric measurement of the rearfoot static angle. The study design was that of a reliability study. We evaluated 19 healthy young adults (11 females and 8 males). The photographs were taken at 1536 pixels in the greatest dimension, resized into four different resolutions (1200, 768, 600, 384 pixels) and analyzed by three equally trained examiners on a 96-pixels per inch (ppi) screen. An experienced physiotherapist marked the anatomic landmarks of rearfoot static angles on two occasions within a 1-week interval. Three different examiners had marked angles on digital pictures. The systematic error and the smallest detectable difference were calculated from the angle values between the image resolutions and times of evaluation. Different resolutions were compared by analysis of variance. Inter- and intra-examiner reliability was calculated by intra-class correlation coefficients (ICC). The rearfoot static angles obtained by the examiners in each resolution were not different (P > 0.05); however, the higher the image resolution the better the inter-examiner reliability. The intra-examiner reliability (within a 1-week interval) was considered to be unacceptable for all image resolutions (ICC range: 0.08-0.52). The whole body image of an adult with a minimum size of 768 pixels analyzed on a 96-ppi screen can provide very good inter-examiner reliability for photogrammetric measurements of rearfoot static angles (ICC range: 0.85-0.92), although the intra-examiner reliability within each resolution was not acceptable. Therefore, this method is not a proper tool for follow-up evaluations of patients within a therapeutic protocol. PMID:22911379

  11. Recent History of Large-Scale Ecosystem Disturbances in North America Derived from the AVHRR Satellite Record

    NASA Technical Reports Server (NTRS)

    Potter, Christopher; Tan, Pang-Ning; Kumar, Vipin; Kicharik, Chris; Klooster, Steven; Genovese, Vanessa

    2004-01-01

    Ecosystem structure and function are strongly impacted by disturbance events, many of which in North America are associated with seasonal temperature extremes, wildfires, and tropical storms. This study was conducted to evaluate patterns in a 19-year record of global satellite observations of vegetation phenology from the Advanced Very High Resolution Radiometer (AVHRR) as a means to characterize major ecosystem disturbance events and regimes. The fraction of photosynthetically active radiation absorbed (FPAR) by vegetation canopies worldwide has been computed at a monthly time interval from 1982 to 2000 and gridded at a spatial resolution of 8-km globally. Potential disturbance events were identified in the FPAR time series by locating anomalously low values (FPAR-LO) that lasted longer than 12 consecutive months at any 8-km pixel. We can find verifiable evidence of numerous disturbance types across North America, including major regional patterns of cold and heat waves, forest fires, tropical storms, and large-scale forest logging. Summed over 19 years, areas potentially influenced by major ecosystem disturbances (one FPAR-LO event over the period 1982-2000) total to more than 766,000 km2. The periods of highest detection frequency were 1987-1989, 1995-1997, and 1999. Sub-continental regions of Alaska and Central Canada had the highest proportion (greater than 90%) of FPAR-LO pixels detected in forests, tundra shrublands, and wetland areas. The Great Lakes region showed the highest proportion (39%) of FPAR-LO pixels detected in cropland areas, whereas the western United States showed the highest proportion (16%) of FPAR-LO pixels detected in grassland areas. Based on this analysis, an historical picture is emerging of periodic droughts and heat waves, possibly coupled with herbivorous insect outbreaks, as among the most important causes of ecosystem disturbance in North America.
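
    A minimal sketch of the FPAR-LO criterion described above: flag a pixel when its monthly FPAR stays anomalously low for more than 12 consecutive months. The fixed threshold below is a placeholder; the study derives anomalies from each pixel's own record:

      import numpy as np

      def fpar_lo_event(fpar, threshold, min_months=12):
          """Return True if FPAR stays below `threshold` for more than
          `min_months` consecutive monthly samples (the FPAR-LO criterion)."""
          run = 0
          for value in fpar:
              run = run + 1 if value < threshold else 0
              if run > min_months:
                  return True
          return False

      series = np.concatenate([np.full(24, 0.6), np.full(14, 0.2), np.full(10, 0.6)])
      print(fpar_lo_event(series, threshold=0.3))   # True: 14 consecutive low months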

  12. Effect of image resolution manipulation in rearfoot angle measurements obtained with photogrammetry.

    PubMed

    Sacco, I C N; Picon, A P; Ribeiro, A P; Sartor, C D; Camargo-Junior, F; Macedo, D O; Mori, E T T; Monte, F; Yamate, G Y; Neves, J G; Kondo, V E; Aliberti, S

    2012-09-01

    The aim of this study was to investigate the influence of image resolution manipulation on the photogrammetric measurement of the rearfoot static angle. The study design was that of a reliability study. We evaluated 19 healthy young adults (11 females and 8 males). The photographs were taken at 1536 pixels in the greatest dimension, resized into four different resolutions (1200, 768, 600, 384 pixels) and analyzed by three equally trained examiners on a 96-pixels per inch (ppi) screen. An experienced physiotherapist marked the anatomic landmarks of rearfoot static angles on two occasions within a 1-week interval. Three different examiners had marked angles on digital pictures. The systematic error and the smallest detectable difference were calculated from the angle values between the image resolutions and times of evaluation. Different resolutions were compared by analysis of variance. Inter- and intra-examiner reliability was calculated by intra-class correlation coefficients (ICC). The rearfoot static angles obtained by the examiners in each resolution were not different (P > 0.05); however, the higher the image resolution the better the inter-examiner reliability. The intra-examiner reliability (within a 1-week interval) was considered to be unacceptable for all image resolutions (ICC range: 0.08-0.52). The whole body image of an adult with a minimum size of 768 pixels analyzed on a 96-ppi screen can provide very good inter-examiner reliability for photogrammetric measurements of rearfoot static angles (ICC range: 0.85-0.92), although the intra-examiner reliability within each resolution was not acceptable. Therefore, this method is not a proper tool for follow-up evaluations of patients within a therapeutic protocol.

  13. A Regional View of the Libya Montes

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The Libya Montes are a ring of mountains uplifted by the giant impact that created the Isidis basin to the north. During 1999, this region became one of the top two that were being considered for the now-canceled Mars Surveyor 2001 Lander. The Isidis basin is very, very ancient. Thus, the mountains that form its rims would contain some of the oldest rocks available at the Martian surface, and a landing in this region might potentially provide information about conditions on early Mars. In May 1999, the wide angle cameras of the Mars Global Surveyor Mars Orbiter Camera system were used in what was called the 'Geodesy Campaign' to obtain nearly global maps of the planet in color and in stereo at resolutions of 240 m/pixel (787 ft/pixel) for the red camera and 480 m/pixel (1575 ft/pixel) for the blue. Shown here are color and stereo views constructed from mosaics of the Geodesy Campaign images for the Libya Montes region of Mars. After they formed by giant impact, the Libya Mountains and valleys were subsequently modified and eroded by other processes, including wind, impact cratering, and flow of liquid water to make the many small valleys that can be seen running northward in the scene. The pictures shown here cover nearly 122,000 square kilometers (47,000 square miles) between latitudes 0.1°N and 4.0°N, longitudes 271.5°W and 279.9°W. The mosaics are about 518 km (322 mi) wide by 235 km (146 mi) high. Red-blue '3-D' glasses are needed to view the stereo image.

  14. The Identification of Variation in Students' Understandings of Disciplinary Concepts: The Application of the SOLO Taxonomy within Introductory Accounting

    ERIC Educational Resources Information Center

    Lucas, Ursula; Mladenovic, Rosina

    2009-01-01

    Insights into students' understandings of disciplinary concepts are fundamental to effective curriculum development. This paper argues that a rounded picture of students' understandings is required to support such development. It is argued that one element of this picture may be provided through the use of the Structure of Observed Learning…

  15. Music as the Representative of the World Picture, the Phenomenon of Culture

    ERIC Educational Resources Information Center

    Kossanova, Aigul Sh.; Yermanov, Zhanat R.; Bekenova, Aizhan S.; Julmukhamedova, Aizhan A.; Takezhanova, Roza Ph.; Zhussupova, Saule S.

    2016-01-01

    The purpose of this article is to study music as a representative of the world picture of nomadic culture. With its systemic organization and rich expressive means, music reflects the diversity of the world in its complex, subtle and profound manifestations, serving as an artistic value and a key world-modeling element. Music can satisfy the aesthetic…

  16. Exploring Second Graders' Understanding of the Text-Illustration Relationship in Picture Storybooks and Informational Picture Books

    ERIC Educational Resources Information Center

    Thomas, Lisa Carol

    2010-01-01

    Our society is increasingly bombarded with visual imagery; therefore, it is important for educators to be knowledgeable about the elements of art and to use our knowledge to help students deepen their reading understanding. Arizpe & Styles (2003) noted that students must be prepared to work with imagery in the future at high levels of…

  17. Understanding of the Geomorphological Elements in Discrimination of Typical Mediterranean Land Cover Types

    NASA Astrophysics Data System (ADS)

    Elhag, Mohamed; Boteva, Silvena

    2017-12-01

    Quantification of geomorphometric features is the central concern of this study. The quantification is based on a statistical approach, namely multivariate analysis of local topographic features. The implemented algorithm uses a Digital Elevation Model (DEM) to categorize and extract the geomorphometric features embedded in the topographic dataset. The morphological settings were evaluated for the central pixel of a predefined 3x3 convolution kernel, with the surrounding pixels assessed under the eight-direction pour point model (D8) of azimuth viewpoints. An unsupervised classification, the Iterative Self-Organizing Data Analysis Technique (ISODATA), was applied to the ASTER GDEM within the boundary of the designated study area to distinguish 10 morphometric classes. The morphometric classes show variation in their spatial distribution across the study area. The adopted methodology successfully captures the spatial distribution of the geomorphometric features under investigation, and the results support superimposing the delineated geomorphometric elements on remote sensing imagery for further analysis. A robust relationship between the different Land Cover types and the geomorphological elements was established in the context of the study area, and the dominance and relative association of each Land Cover type with its corresponding geomorphological elements were demonstrated.
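
    A minimal sketch of the D8 (eight-direction pour point) step applied to the central pixel of a 3x3 DEM window, routing flow toward the steepest-descent neighbour; the cell size is a placeholder:

      import numpy as np

      # Offsets of the eight neighbours around the centre of a 3x3 window.
      D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

      def d8_direction(window, cellsize=30.0):
          """Return the index (0-7) of the steepest-descent neighbour of the
          centre pixel of a 3x3 DEM window, or None if the centre is a pit/flat."""
          c = window[1, 1]
          best, best_slope = None, 0.0
          for k, (di, dj) in enumerate(D8):
              dist = cellsize * (np.sqrt(2.0) if di and dj else 1.0)
              slope = (c - window[1 + di, 1 + dj]) / dist    # positive = downhill
              if slope > best_slope:
                  best, best_slope = k, slope
          return best

      print(d8_direction(np.array([[10., 9., 8.], [10., 9., 7.], [10., 9., 8.]])))  # -> 4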

  18. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas is challenging because urban water bodies are mostly small and spectral confusion between water and complex urban surface features is widespread. A water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) to develop an automatic technique for extracting land-water mixed pixels with a water index; (2) to derive the most representative water and land endmembers from neighboring water pixels and an adaptively, iteratively selected optimal neighboring land pixel, respectively; and (3) to apply a linear unmixing model to estimate the subpixel water fraction. Specifically, to automatically extract land-water pixels, locally weighted scatter-plot smoothing is first applied to the original histogram of the WI image. The Otsu threshold is then used as a starting point for selecting land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the mixed land-water pixels to estimate the water fraction at the subpixel level. Under the assumption that, because of spatial dependence, the endmember signature of a target pixel should be more similar to that of adjacent pixels, the water and land endmembers are determined from neighboring pure water or pure land pixels within a given distance. To obtain the most representative endmembers for the SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels: the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is taken as the average of the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas and evaluated using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel level and subpixel level were chosen. The results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and that WISMA achieved the best performance in water mapping under a comprehensive analysis of different accuracy measures (RMSE and SE).
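
    A hedged sketch of the final unmixing step for one mixed pixel: model its spectrum as a linear combination of a water and a land endmember and solve for the water fraction by least squares (the endmember selection from neighbouring pure pixels, which is the heart of the method, is omitted here):

      import numpy as np

      def water_fraction(pixel, water_em, land_em):
          """Water fraction of a mixed pixel from a two-endmember linear model:
          pixel ~ f * water_em + (1 - f) * land_em, solved by least squares."""
          d = water_em - land_em
          f = np.dot(pixel - land_em, d) / np.dot(d, d)
          return float(np.clip(f, 0.0, 1.0))

      water = np.array([0.02, 0.03, 0.01, 0.01])      # illustrative water endmember
      land  = np.array([0.10, 0.12, 0.20, 0.30])      # illustrative land endmember
      mixed = 0.4 * water + 0.6 * land                # synthetic 40% water pixel
      print(water_fraction(mixed, water, land))       # -> 0.4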

  19. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Zhang, Z.; Ackerman, S. A.; Maddux, B. C.

    2012-12-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges (defined by immediate adjacency to "clear" MOD/MYD35 pixels) as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  20. Compensatable muon collider calorimeter with manageable backgrounds

    DOEpatents

    Raja, Rajendran

    2015-02-17

    A method and system for reducing background noise in a particle collider, comprises identifying an interaction point among a plurality of particles within a particle collider associated with a detector element, defining a trigger start time for each of the pixels as the time taken for light to travel from the interaction point to the pixel and a trigger stop time as a selected time after the trigger start time, and collecting only detections that occur between the start trigger time and the stop trigger time in order to thereafter compensate the result from the particle collider to reduce unwanted background detection.
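
    A minimal sketch of the per-pixel timing gate the abstract describes: the trigger start is the straight-line light travel time from the interaction point to the pixel, and only hits inside [start, start + gate] are kept; the units and gate length below are placeholders:

      import numpy as np

      C = 299.792458   # speed of light in mm/ns

      def keep_hit(pixel_pos, ip_pos, hit_time, gate_ns=1.0):
          """Keep a hit only if it arrives within the per-pixel timing window
          [t_start, t_start + gate], with t_start the light travel time from
          the interaction point to the pixel."""
          t_start = np.linalg.norm(np.asarray(pixel_pos) - np.asarray(ip_pos)) / C
          return t_start <= hit_time <= t_start + gate_ns

      print(keep_hit(pixel_pos=(1500.0, 0.0, 0.0), ip_pos=(0.0, 0.0, 0.0),
                     hit_time=5.2))    # pixel 1.5 m away -> window starts at ~5.0 ns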

  1. Spatial Light Modulators and Applications: Summaries of Papers Presented at the Spatial Light Modulators and Applications Topical Meeting Held on March 15-17, 1993 in Palm Springs, California

    DTIC Science & Technology

    1993-03-17

    modulator: Number of Elements: 16 x 16; Pixel Size: 1 mm x 1 mm; Area Fill Factor: > 90%; Reflectance: > 90%; Phase Shift: 900; Frame Rate: > 1 kHz; Operational Spectral... electro-optic constants. By using reflected light from the second interface, a factor of two increase in phase shift is obtained for an applied voltage vs... wavelengths in general require thinner PLZT wafers. One of the objectives of the SLM design was to maximize pixel area fill factor and thereby the

  2. Synthesis of a fiber-optic magnetostrictive sensor (FOMS) pixel for RF magnetic field imaging

    NASA Astrophysics Data System (ADS)

    Rengarajan, Suraj

    The principal objective of this dissertation was to synthesize a sensor element with properties specifically optimized for integration into arrays capable of imaging RF magnetic fields. The dissertation problem was motivated by applications in nondestructive eddy current testing, smart skins, etc., requiring sensor elements that non-invasively detect millimeter-scale variations over several square meters, in low level magnetic fields varying at frequencies in the 100 kHz-1 GHz range. The poor spatial and temporal resolution of FOMS elements available prior to this dissertation research precluded their use in non-invasive large area mapping applications. Prior research had been focused on large, discrete devices for detecting extremely low level magnetic fields varying at a few kHz. These devices are incompatible with array integration and imaging applications. The dissertation research sought to overcome the limitations of current technology by utilizing three new approaches: synthesizing magnetostrictive thin films and optimizing their properties for sensor applications, integrating small sensor elements into an array compatible fiber optic interferometer, and devising an RF mixing approach to measure high frequency magnetic fields using the integrated sensor element. Multilayer thin films were used to optimize the magnetic properties of the magnetostrictive elements. Alternating soft (Ni80Fe20) and hard (Co50Fe50) magnetic alloy layers were selected for the multilayer, and the layer thicknesses were varied to obtain films with a combination of large magnetization, high frequency permeability and large magnetostrictivity. X-ray data and measurement of the variations in the magnetization, resistivity and magnetostriction with layer thicknesses indicated that an interfacial layer was responsible for enhancing the sensing performance of the multilayers. A FOMS pixel was patterned directly onto the sensing arm of a fiber-optic interferometer by sputtering a multilayer film with favorable sensor properties. After calibrating the interferometer response with a piezo, the mechanical and magnetic responses of the FOMS element were evaluated for various test fields. High frequency magnetic fields were detected using a local oscillator field to downconvert the RF signal fields to the lower mechanical resonant frequency of the element. A field sensitivity of 0.3 Oe/cm sensor element length was demonstrated at 1 MHz. A coherent magnetization rotation model was developed to predict the magnetostrictive response of the element, and identify approaches for optimizing its performance. This model predicts that an optimized element could resolve ~1 mm variations in fields varying at frequencies >10 MHz with a sensitivity of ~10^-3 Oe/mm. The results demonstrate the potential utility of integrating this device as a FOMS pixel in RF magnetic field imaging arrays.

  3. Television News Without Pictures?

    ERIC Educational Resources Information Center

    Graber, Doris A.

    1987-01-01

    Describes "gestalt" coding procedures that concentrate on the meanings conveyed by audio-visual messages rather than on coding individual pictorial elements shown in a news story. Discusses the totality of meaning that results from the interaction of verbal and visual story elements, external settings, and the decoding proclivities of…

  4. LUNA, an underground nuclear astrophysics laboratory: recent results and future perspectives

    NASA Astrophysics Data System (ADS)

    Corvisiero, P.

    2005-05-01

    It is known that the chemical elements and their isotopes were created by nuclear fusion reactions in the hot interiors of remote and long-vanished stars over many billions of years. The present picture is that all elements from carbon to uranium have been produced entirely within stars during their fiery lifetimes and explosive deaths. The detailed understanding of the origin of the chemical elements and their isotopes combines astrophysics and nuclear physics, and forms what is called nuclear astrophysics. In turn, nuclear reactions are at the heart of nuclear astrophysics: they influence sensitively the nucleosynthesis of the elements in the earliest stages of the universe and in all the objects formed thereafter, and control the associated energy generation, neutrino luminosity, and evolution of stars. A good knowledge of the rates of these fusion reactions is essential to understanding this broad picture. Some of the most important experimental techniques to measure the corresponding cross sections, based both on direct and indirect methods, will be described in this paper.

  5. Fast Readout Architectures for Large Arrays of Digital Pixels: Examples and Applications

    PubMed Central

    Gabrielli, A.

    2014-01-01

    Modern pixel detectors, particularly those designed and constructed for applications and experiments in high-energy physics, are commonly built implementing general readout architectures that are not specifically optimized in terms of speed. High-energy physics experiments use two-dimensional matrices of sensitive elements located on a silicon die. Sensors are read out via other integrated circuits bump bonded over the sensor dies. The speed of the readout electronics can significantly increase the overall performance of the system, so novel forms of readout architectures are studied and described here. These circuits have been investigated in terms of speed and are particularly suited for large monolithic, low-pitch pixel detectors. The idea is to have a small, simple structure that may be expanded to fit large matrices without affecting the layout complexity of the chip, while maintaining a reasonably high readout speed. The solutions might be applied to devices for applications not only in physics but also to general-purpose pixel detectors whenever online fast data sparsification is required. The paper also presents simulations of the efficiencies of the systems as a proof of concept for the proposed ideas. PMID:24778588

  6. Comparison View of Mars Cloud Cover

    NASA Technical Reports Server (NTRS)

    1997-01-01

    These color and black and white pictures of Mars were taken by NASA's Hubble Space Telescope just two weeks after Earth made its closest approach to the Red Planet during the 1997 opposition. When the Hubble pictures were taken Mars was at a distance of 62 million miles (100 million kilometers) and the resolution at the center of the disk is 13.5 miles/pixel (22 kilometers/pixel). Both images were made with the Wide Field and Planetary Camera 2. The color composite (left image) is constructed from three images taken in red (673 nanometers), green (502 nm) and blue (410 nm) light. The right image, in blue light only, brings out details in the cloud structure and is remarkably similar to weather satellite pictures taken of Earth. A planetary-scale wave curls around the north pole, similar in behavior to high latitude cold fronts which descend over North America and Europe during springtime.

    The picture was taken when Mars was near aphelion, its farthest point from the Sun. The faint sunlight results in cold atmospheric conditions which stimulate the formation of water ice clouds. The clouds themselves further reduce atmospheric temperatures. Atmospheric heating, resulting when sunlight is absorbed by the dust, is reduced when ice forms around the dust particles and causes the dust to gravitationally settle to the ground.

    These images of Mars are centered at approximately 94 degrees longitude and 23 degrees N latitude (oriented with north up). The four largest Tharsis Montes (massive extinct volcanoes) are visible as dark spots extending through the clouds. The vast canyon system, Valles Marineris, stretches across the eastern (lower right) half of the image; the Pathfinder landing site is near the eastern edge of the image. It is early summer in the northern hemisphere, and the North polar cap has retreated to about 80 degrees N latitude; the 'residual' summer cap, which is composed of water ice, is about one-third the size of the 'seasonal' winter cap, which consists mostly of carbon-dioxide frost (dry ice) condensed on the surface. The polar cap is surrounded by a 'sand sea' made up of dark sand dunes. A distinct belt of water-ice clouds extends over much of this hemisphere.

    This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/

  7. COMPARISON VIEW OF MARS CLOUD COVER

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These color and black and white pictures of Mars were taken by NASA's Hubble Space Telescope just two weeks after Earth made its closest approach to the Red Planet during the 1997 opposition. When the Hubble pictures were taken Mars was at a distance of 62 million miles (100 million kilometers) and the resolution at the center of the disk is 13.5 miles/pixel (22 kilometers/pixel). Both images were made with the Wide Field and Planetary Camera 2. The color composite (left image) is constructed from three images taken in red (673 nanometers), green (502 nm) and blue (410 nm) light. The right image, in blue light only, brings out details in the cloud structure and is remarkably similar to weather satellite pictures taken of Earth. A planetary-scale wave curls around the north pole, similar in behavior to high latitude cold fronts which descend over North America and Europe during springtime. The picture was taken when Mars was near aphelion, its farthest point from the Sun. The faint sunlight results in cold atmospheric conditions which stimulate the formation of water ice clouds. The clouds themselves further reduce atmospheric temperatures. Atmospheric heating, resulting when sunlight is absorbed by the dust, is reduced when ice forms around the dust particles and causes the dust to gravitationally settle to the ground. These images of Mars are centered at approximately 94 degrees longitude and 23 degrees N latitude (oriented with north up). The four largest Tharsis Montes (massive extinct volcanoes) are visible as dark spots extending through the clouds. The vast canyon system, Valles Marineris, stretches across the eastern (lower right) half of the image; the Pathfinder landing site is near the eastern edge of the image. It is early summer in the northern hemisphere, and the North polar cap has retreated to about 80 degrees N latitude; the 'residual' summer cap, which is composed of water ice, is about one-third the size of the 'seasonal' winter cap, which consists mostly of carbon-dioxide frost (dry ice) condensed on the surface. The polar cap is surrounded by a 'sand sea' made up of dark sand dunes. A distinct belt of water-ice clouds extends over much of this hemisphere. Credit: Phil James (Univ. Toledo), Todd Clancy (Space Science Inst., Boulder, CO), Steve Lee (Univ. Colorado), and NASA Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from oposite.stsci.edu in /pubinfo.

  8. The NUC and blind pixel eliminating in the DTDI application

    NASA Astrophysics Data System (ADS)

    Su, Xiao Feng; Chen, Fan Sheng; Pan, Sheng Da; Gong, Xue Yi; Dong, Yu Cui

    2013-12-01

    As infrared CMOS digital TDI (time delay and integration) has a simple structure, excellent performance and flexible operation, it has been used in more and more applications. Because of limitations in the production process, the plane array of the infrared detector has a large non-uniformity (NU) and a certain blind pixel rate. Both raise the noise and prevent the TDI from working well. In this paper, the elements most important to system performance are analyzed: the NU of the optical system, the NU of the plane array, and the blind pixels in the plane array. A reasonable algorithm that accounts for background removal and the linear response model of the infrared detector is used to perform the non-uniformity correction (NUC) when the infrared detector array is used as a digital TDI. In order to eliminate the impact of the blind pixels, a surplus pixel method is introduced; through this method, the SNR (signal-to-noise ratio) can be improved while the spatial and temporal resolution remain unchanged. Finally, we use a MWIR (medium-wave infrared) detector to perform the experiment, and the result proves the effectiveness of the method.
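
    As a rough illustration of the linear-response correction idea described above (not the authors' algorithm; the calibration frames and the blind-pixel criterion here are assumptions of the example), a two-point NUC with blind-pixel replacement might look like the following sketch:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def two_point_nuc(frame, dark, uniform):
        """Sketch of a per-pixel linear (two-point) non-uniformity correction
        with blind-pixel replacement.  `dark` and `uniform` are calibration
        frames taken against a cold and a uniform warm source, respectively."""
        span = uniform.astype(float) - dark.astype(float)
        mean_span = span.mean()

        # Blind/dead pixels: response far outside the typical per-pixel span
        blind = (span < 0.5 * mean_span) | (span > 1.5 * mean_span)

        # Per-pixel gain so every normal pixel matches the mean response
        gain = np.where(blind, 1.0, mean_span / np.where(span == 0, 1.0, span))
        corrected = gain * (frame.astype(float) - dark)

        # Replace blind pixels by the median of their 3x3 neighbourhood
        corrected[blind] = median_filter(corrected, size=3)[blind]
        return corrected
    ```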

  9. Sub-pixel mapping of hyperspectral imagery using super-resolution

    NASA Astrophysics Data System (ADS)

    Sharma, Shreya; Sharma, Shakti; Buddhiraju, Krishna M.

    2016-04-01

    With the development of remote sensing technologies, it has become possible to obtain an overview of landscape elements, which helps in studying changes on the earth's surface due to climatic, geological, geomorphological and human activities. Remote sensing measures the electromagnetic radiation from the earth's surface and matches the spectral similarity between the observed signature and the known standard signatures of the various targets. However, a problem arises when image classification techniques assume pixels to be pure. Hyperspectral images have high spectral resolution but poor spatial resolution; the spectra obtained are therefore often contaminated by the presence of mixed pixels, which causes misclassification. To exploit this high spectral information, the spatial resolution has to be enhanced. Many factors make spatial resolution one of the most expensive and hardest properties to improve in imaging systems. To solve this problem, hyperspectral images are post-processed to retrieve more information from the already acquired data. The class of algorithms that enhances the spatial resolution of the images by dividing them into sub-pixels is known as super-resolution, and considerable research has been done in this domain. In this paper, we propose a new method for super-resolution based on ant colony optimization and review the popular methods of sub-pixel mapping of hyperspectral images along with a comparative analysis.
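
    For orientation, sub-pixel mapping takes the per-pixel class abundance fractions produced by spectral unmixing and arranges them spatially within each coarse pixel. A toy spatial-attraction allocation sketch follows (this is not the ant colony optimization method of the paper; the `fractions` input and the attraction rule are assumptions of the example):

    ```python
    import numpy as np

    def subpixel_map(fractions, scale=4):
        """Toy sub-pixel mapping: place class sub-pixels inside each coarse
        pixel, preferring locations attracted to class-rich neighbours.

        fractions : 2-D array of per-pixel abundance of one class (0..1)
        scale     : each coarse pixel is split into scale x scale sub-pixels
        """
        rows, cols = fractions.shape
        fine = np.zeros((rows * scale, cols * scale), dtype=int)
        padded = np.pad(fractions, 1, mode="edge")

        for i in range(rows):
            for j in range(cols):
                n_sub = int(round(fractions[i, j] * scale * scale))
                if n_sub == 0:
                    continue
                # Attraction of each candidate sub-pixel toward the eight
                # neighbouring coarse pixels, weighted by their abundance
                attract = np.zeros((scale, scale))
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        if di == 0 and dj == 0:
                            continue
                        w = padded[i + 1 + di, j + 1 + dj]
                        for si in range(scale):
                            for sj in range(scale):
                                dy = di - ((si + 0.5) / scale - 0.5)
                                dx = dj - ((sj + 0.5) / scale - 0.5)
                                attract[si, sj] += w / np.hypot(dx, dy)
                # Switch on the n_sub most attracted sub-pixel locations
                order = np.argsort(attract, axis=None)[::-1][:n_sub]
                si, sj = np.unravel_index(order, attract.shape)
                fine[i * scale + si, j * scale + sj] = 1
        return fine
    ```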

  10. [Construction of platform on the three-dimensional finite element model of the dentulous mandibular body of a normal person].

    PubMed

    Gong, Lu-Lu; Zhu, Jing; Ding, Zu-Quan; Li, Guo-Qiang; Wang, Li-Ming; Yan, Bo-Yong

    2008-04-01

    To develop a method for constructing a three-dimensional finite element model of the dentulous mandibular body of a normal person. A series of pictures at 0.1 mm intervals was acquired by CT scanning. After extracting the coordinates of key points from these pictures, we used a C program to process the useful data and constructed a platform for the three-dimensional finite element model of the dentulous mandibular body with the Ansys finite element analysis software. The experimental results showed that the platform of the three-dimensional finite element model of the dentulous mandibular body was accurate and applicable. The exact three-dimensional shape of the model was well constructed, and each part of this model, such as a single tooth, can be deleted, which allows various tooth-loss clinical cases to be emulated. The three-dimensional finite element model is constructed with life-like shapes of the dental cusps, and each part of the model can be easily removed. In conclusion, this experiment provides a good platform for biomechanical analysis of various tooth-loss clinical cases.

  11. Reconstruction of 2D PET data with Monte Carlo generated system matrix for generalized natural pixels

    NASA Astrophysics Data System (ADS)

    Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.

    2006-06-01

    In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block row needs to be stored. Data were generated using a fast Monte Carlo simulator based on ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed that the proposed method exhibited increased performance as compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
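
    For reference, the MLEM algorithms compared above share the standard multiplicative update. A minimal dense-matrix sketch is shown below; the system matrix `A` and measured data `y` are generic assumed inputs, not the Monte Carlo generated matrix of the paper:

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """Minimal MLEM sketch for a generic (dense) system matrix A.

        A : (n_detectors, n_basis) system matrix, e.g. generalized natural pixels
        y : (n_detectors,) measured projection data
        Returns the estimated basis-function coefficients.
        """
        x = np.ones(A.shape[1])              # non-negative initial estimate
        sens = A.sum(axis=0) + eps           # per-basis sensitivity (A^T 1)
        for _ in range(n_iter):
            proj = A @ x + eps               # forward projection
            x *= (A.T @ (y / proj)) / sens   # multiplicative MLEM update
        return x
    ```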

  12. Mitigation of image artifacts in LWIR microgrid polarimeter images

    NASA Astrophysics Data System (ADS)

    Ratliff, Bradley M.; Tyo, J. Scott; Boger, James K.; Black, Wiley T.; Bowers, David M.; Kumar, Rakesh

    2007-09-01

    Microgrid polarimeters, also known as division of focal plane (DoFP) polarimeters, are composed of an integrated array of micropolarizing elements that immediately precedes the FPA. The result of the DoFP device is that neighboring pixels sense different polarization states. The measurements made at each pixel can be combined to estimate the Stokes vector at every reconstruction point in a scene. DoFP devices have the advantage that they are mechanically rugged and inherently optically aligned. However, they suffer from the severe disadvantage that the neighboring pixels that make up the Stokes vector estimates have different instantaneous fields of view (IFOV). This IFOV error leads to spatial differencing that causes false polarization signatures, especially in regions of the image where the scene changes rapidly in space. Furthermore, when the polarimeter is operating in the LWIR, the FPA has inherent response problems such as nonuniformity and dead pixels that make the false polarization problem that much worse. In this paper, we present methods that use spatial information from the scene to mitigate two of the biggest problems that confront DoFP devices. The first is a polarimetric dead pixel replacement (DPR) scheme, and the second is a reconstruction method that chooses the most appropriate polarimetric interpolation scheme for each particular pixel in the image based on the scene properties. We have found that these two methods can greatly improve both the visual appearance of polarization products as well as the accuracy of the polarization estimates, and can be implemented with minimal computational cost.
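
    For background, the Stokes vector estimate mentioned above is typically formed from the four neighbouring analyzer orientations of one microgrid super-pixel. A minimal sketch follows, assuming a 0°/45°/90°/135° layout (the specific layout, interpolation schemes and dead-pixel handling of the paper are not reproduced here):

    ```python
    import numpy as np

    def stokes_from_superpixel(i0, i45, i90, i135):
        """Estimate linear Stokes parameters from one 2x2 microgrid super-pixel.

        i0, i45, i90, i135 : intensities behind 0, 45, 90 and 135 degree
        micropolarizers.  Note that the four samples come from slightly
        different instantaneous fields of view, which is the IFOV error
        discussed above.
        """
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # horizontal vs. vertical
        s2 = i45 - i135                      # +45 vs. -45 degrees
        dolp = np.hypot(s1, s2) / s0         # degree of linear polarization
        return s0, s1, s2, dolp
    ```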

  13. Compressive hyperspectral sensor for LWIR gas detection

    NASA Astrophysics Data System (ADS)

    Russell, Thomas A.; McMackin, Lenore; Bridge, Bob; Baraniuk, Richard

    2012-06-01

    Focal plane arrays with associated electronics and cooling are a substantial portion of the cost, complexity, size, weight, and power requirements of Long-Wave IR (LWIR) imagers. Hyperspectral LWIR imagers add a significant data volume burden as they collect a high-resolution spectrum at each pixel. We report here on a LWIR Hyperspectral Sensor that applies Compressive Sensing (CS) in order to achieve benefits in these areas. The sensor applies single-pixel detection technology demonstrated by Rice University. The single-pixel approach uses a Digital Micro-mirror Device (DMD) to reflect and multiplex the light from a random assortment of pixels onto the detector. This is repeated for a number of measurements much less than the total number of scene pixels. We have extended this architecture to hyperspectral LWIR sensing by inserting a Fabry-Perot spectrometer in the optical path. This compressive hyperspectral imager collects all three dimensions on a single detection element, greatly reducing the size, weight and power requirements of the system relative to traditional approaches, while also reducing data volume. The CS architecture also supports innovative adaptive approaches to sensing, as the DMD device allows control over the selection of spatial scene pixels to be multiplexed on the detector. We are applying this advantage to the detection of plume gases by adaptively locating and concentrating target energy. A key challenge in this system is the diffraction loss produced by the DMD in the LWIR. We report the results of testing DMD operation in the LWIR, as well as system spatial and spectral performance.
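
    The single-pixel measurement model can be illustrated with a generic compressive-sensing sketch (not the sensor's actual processing chain): each measurement is the scene projected onto one random DMD mask, and a sparse scene can then be recovered from far fewer measurements than pixels, for example with an iterative soft-thresholding solver. The mask generation and the sparsity assumption below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def single_pixel_measure(scene, n_meas):
        """Simulate single-pixel measurements: each row of `phi` is one random
        +/-1 DMD mask, and each measurement is the masked scene summed on the
        single detector element."""
        n_pix = scene.size
        phi = rng.choice([-1.0, 1.0], size=(n_meas, n_pix))
        return phi, phi @ scene.ravel()

    def ista_reconstruct(phi, y, lam=0.1, n_iter=500):
        """Minimal ISTA (iterative soft-thresholding) sketch for recovering a
        scene that is sparse (in the pixel basis, for simplicity) from
        compressive measurements y = phi @ x."""
        step = 1.0 / np.linalg.norm(phi, 2) ** 2      # 1 / Lipschitz constant
        x = np.zeros(phi.shape[1])
        for _ in range(n_iter):
            grad = phi.T @ (phi @ x - y)
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
        return x
    ```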

  14. Journal of Chemical Education: Software.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1989

    1989-01-01

    Discusses a visual database of information about chemical elements. Uses a single sided 12-inch, 30-minute, CAV-type videodisk. Contains a picture of almost every element in its stable form at room temperature and normal atmospheric pressure. Can be used with the video controller from "KC? Discoverer." (MVL)

  15. Electrically stimulated contractions of Vorticella convallaria

    NASA Astrophysics Data System (ADS)

    Kantha, Deependra; van Winkle, David

    2009-03-01

    The contraction of Vorticella convallaria was triggered by applying a voltage pulse in its host culturing medium. The 50 V, 1 ms wide pulse was applied across platinum wires separated by 0.7 cm on a microscope slide. The contractions were recorded as cines (image sequences) by a Phantom V5 camera (Vision Research) on a bright field microscope with a 20X objective, with an image size of 256 x 128 pixels at 7352 pictures per second. The starting time of the cines was synchronized with the start of the electrical pulse. We recorded five contractions of each of 12 organisms. The cines were analyzed to obtain the initiation time, defined as the difference in time between the leading edge of the electrical pulse and the first frame showing zooid movement. From multiple contractions of the same organism, we found that the initiation time is reproducible. In comparing different organisms, we found an average initiation time of 1.73 ms with a standard deviation of 0.63 ms. This research is supported by the state of Florida (MARTECH) and Research Corporation.

  16. Parallel processing for digital picture comparison

    NASA Technical Reports Server (NTRS)

    Cheng, H. D.; Kou, L. T.

    1987-01-01

    In picture processing an important problem is to identify two digital pictures of the same scene taken under different lighting conditions. This kind of problem can be found in remote sensing, satellite signal processing and related areas. The identification can be done by transforming the gray levels so that the gray level histograms of the two pictures are closely matched. The transformation problem can be solved by using the packing method. Researchers propose a VLSI architecture consisting of m x n processing elements with extensive parallel and pipelining computation capabilities to speed up the transformation with time complexity O(max(m,n)), where m and n are the numbers of gray levels of the input picture and the reference picture, respectively. Using a uniprocessor and a dynamic programming algorithm, the time complexity would be O(m^3 x n). The algorithm partition problem, an important issue in VLSI design, is discussed. Verification of the proposed architecture is also given.
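
    The gray-level transformation described above amounts to standard histogram matching; a minimal NumPy sketch (illustrative only, not the packing method or the VLSI architecture of the paper) is:

    ```python
    import numpy as np

    def match_histograms(input_img, reference_img, n_levels=256):
        """Map the gray levels of input_img so its histogram approximates that
        of reference_img (both assumed integer-valued in [0, n_levels))."""
        # Normalised cumulative histograms of both pictures
        in_hist, _ = np.histogram(input_img, bins=n_levels, range=(0, n_levels))
        ref_hist, _ = np.histogram(reference_img, bins=n_levels, range=(0, n_levels))
        in_cdf = np.cumsum(in_hist) / input_img.size
        ref_cdf = np.cumsum(ref_hist) / reference_img.size

        # For each input gray level, pick the reference level with the closest CDF
        lut = np.searchsorted(ref_cdf, in_cdf).clip(0, n_levels - 1)
        return lut[input_img]
    ```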

  17. Getting the Picture and Changing the Picture: Visual Methodologies and Educational Research in South Africa

    ERIC Educational Resources Information Center

    Mitchell, Claudia

    2008-01-01

    At the risk of seeming to make exaggerated claims for visual methodologies, what I set out to do is lay bare some of the key elements of working with the visual as a set of methodologies and practices. In particular, I address educational research in South Africa at a time when questions of the social responsibility of the academic researcher…

  18. Model of the lines of sight for an off-axis optical instrument Pleiades

    NASA Astrophysics Data System (ADS)

    Sauvage, Dominique; Gaudin-Delrieu, Catherine; Tournier, Thierry

    2017-11-01

    Future Earth observation missions aim at delivering images with high resolution and a large field of view. These images have to be processed to obtain very accurate localisation. To that end, the individual line of sight of each photosensitive element must be evaluated according to the location of the pixels in the focal plane. With an off-axis Korsch telescope (like PLEIADES), however, the classical model has to be adapted. This is possible by using optical ground measurements made after integration of the instrument. Processing these results leads to several parameters, which are functions of the offsets of the focal plane and the real focal length. This study, proposed for the PLEIADES mission, leads to a more elaborate model that provides the relation between the lines of sight and the locations of the pixels with very good accuracy, close to the pixel size.
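
    The relation between a pixel's focal-plane location and its line of sight can be illustrated with a simple pinhole-style model that includes focal-plane offsets and the calibrated focal length. This is only a schematic sketch under those assumptions; the actual PLEIADES model is not given in the abstract.

    ```python
    import numpy as np

    def line_of_sight(col, row, focal_len, dx0=0.0, dy0=0.0, pixel_pitch=1.0):
        """Toy line-of-sight model: unit viewing direction in the instrument
        frame for the detector element at (col, row).

        focal_len   : calibrated (real) focal length, same units as pixel_pitch
        dx0, dy0    : offsets of the focal plane w.r.t. the optical axis
        pixel_pitch : detector element size
        """
        x = col * pixel_pitch + dx0
        y = row * pixel_pitch + dy0
        v = np.array([x, y, focal_len])
        return v / np.linalg.norm(v)      # unit vector of the viewing direction
    ```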

  19. Microcomputer control of infrared detector arrays used in direct imaging and in Fabry-Perot spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossano, G.S.

    1989-02-01

    A microcomputer based data acquisition system has been developed for astronomical observing with two-dimensional infrared detector arrays operating at high pixel rates. The system is based on a 16-bit 8086/8087 microcomputer operating at 10 MHz. Data rates of up to 560,000 pixels/sec from arrays of up to 4096 elements are supported using the microcomputer system alone. A hardware co-adder the authors are developing permits data accumulation at rates of up to 1.67 million pixels/sec in both staring and chopped data acquisition modes. The system has been used for direct imaging and for data acquisition in a Fabry-Perot Spectrometer developed by NRL. The hardware is operated using interactive software which supports the several available modes of data acquisition, and permits data display and reduction during observing sessions.

  20. Ganymede Groove Lanes

    NASA Technical Reports Server (NTRS)

    1997-01-01

    An ancient dark terrain surface is cut by orthogonal sets of fractures on Jupiter's moon Ganymede. Subdued pits visible on unbroken blocks are the remnants of impact craters which have degraded with time. Across the top of the image, a line of these subdued pits may have been a chain of craters which are now cut apart by the northwest to southeast trending fractures. North is to the top. Younger craters appear as bright circles. The fractures in this image range from less than 100 meters (328 feet) to over a kilometer (0.62 miles) in width. They display bright walls where cleaner ice may be exposed, and deposits of dark material fill their floors. This 27 by 22 kilometer (17 by 14 mile) image of northern Marius Regio was obtained on September 6, 1996 by NASA's Galileo spacecraft at a resolution of 85 meters (278 feet) per picture element (pixel).

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  1. Singular Stokes-polarimetry as new technique for metrology and inspection of polarized speckle fields

    NASA Astrophysics Data System (ADS)

    Soskin, Marat S.; Denisenko, Vladimir G.; Egorov, Roman I.

    2004-08-01

    Polarimetry is an effective technique for characterizing polarized light fields. It was shown recently that the most complete "fingerprint" of light fields of arbitrary complexity is the network of polarization singularities: C points with circular polarization and L lines with variable azimuth. A new singular Stokes-polarimetry (SSP) was elaborated for such measurements. It allows the azimuth, eccentricity and handedness of the elliptical vibrations to be determined in each pixel of the receiving CCD camera, over millions of pixels. It is based on precise measurement of the full set of Stokes parameters with the help of high-quality analyzers and quarter-wave plates with λ/500 precision and 4" adjustment. The matrices of obtained data are processed on a PC by special programs to find the positions of polarization singularities and other needed topological features. The developed SSP technique was validated by measurements of the topology of polarized speckle fields produced by multimode "photonic-crystal" fibers, double-side rubbed polymer films, and biomedical samples. Each singularity is localized with a precision of +/- 1 pixel, compared with the ~500 pixel dimensions of a typical speckle. It was confirmed that the network of topological features appearing in a polarized light field after its interaction with the specimen under inspection is an exact individual "passport" for its characterization. Therefore, SSP can be used for smart materials characterization. The presented data show that the SSP technique is promising for local analysis of the properties and defects of thin films, liquid crystal cells, optical elements, biological samples, etc. It is able to discover heterogeneities and defects which essentially define the merits of specimens under inspection and cannot be detected by usual polarimetry methods. The detected extra-high sensitivity of the polarization singularity positions and network to any change of sample position or deformation opens new possibilities for sensing deformations and displacements of inspected elements in the sub-micron range.
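
    The azimuth, eccentricity and handedness mentioned above follow from the Stokes parameters through the standard polarization-ellipse relations. A per-pixel sketch of these generic formulas (not the authors' software) is:

    ```python
    import numpy as np

    def ellipse_params(s0, s1, s2, s3):
        """Per-pixel polarization-ellipse parameters from Stokes parameters.

        Returns azimuth (rad), eccentricity and handedness (+1 right, -1 left).
        Accepts scalars or arrays thanks to NumPy broadcasting.
        """
        azimuth = 0.5 * np.arctan2(s2, s1)                      # ellipse orientation
        chi = 0.5 * np.arcsin(np.clip(s3 / s0, -1.0, 1.0))      # ellipticity angle
        b_over_a = np.abs(np.tan(chi))                          # minor/major axis ratio
        eccentricity = np.sqrt(1.0 - b_over_a ** 2)
        handedness = np.sign(s3)                                 # sign of circular part
        return azimuth, eccentricity, handedness
    ```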

  2. Pixel-by-Pixel SED Fitting of Intermediate Redshift Galaxies

    NASA Astrophysics Data System (ADS)

    Cohen, Seth H.; Kim, Hwihyun; Petty, Sara M.; Farrah, Duncan

    2015-01-01

    We select intermediate redshift galaxies from the Hubble Space Telescope CANDELS and GOODS surveys to study their stellar populations on sub-kilo-parsec scales by fitting SED models on a pixel-by-pixel basis. Galaxies are chosen to have measured spectroscopic redshifts (z<1.5), to be bright (H_AB<21 mag), to be relatively face-on (b/a > 0.6), and have a minimum of ten individual resolution elements across the face of the galaxy, as defined by the broadest PSF (F160W-band) in the data. The sample contains ~200 galaxies with BViz(Y)JH band HST photometry. The main goal of the study is to better understand the effects of population blending when using a pixel-by-pixel SED fitting (pSED) approach. We outline our pSED fitting method which gives maps of stellar mass, age, star-formation rate, etc. Several examples of individual pSED-fit maps are presented in detail, as well as some preliminary results on the full sample. The pSED method is necessarily biased by the brightest population in a given pixel outshining the rest of the stars, and, therefore, we intend to study this apparent population blending in a set of artificially redshifted images of nearby galaxies, for which we have star-by-star measurements of their stellar populations. This local sample will be used to better interpret the measurements for the higher redshift galaxies.Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This archival research is associated with program #13241.

  3. Resolution-enhanced Mapping Spectrometer

    NASA Technical Reports Server (NTRS)

    Kumer, J. B.; Aubrun, J. N.; Rosenberg, W. J.; Roche, A. E.

    1993-01-01

    A familiar mapping spectrometer implementation utilizes two-dimensional detector arrays with spectral dispersion along one direction and spatial sampling along the other. Spectral images are formed by spatially scanning across the scene (i.e., push-broom scanning). For imaging grating and prism spectrometers, the slit is perpendicular to the spatial scan direction. For spectrometers utilizing linearly variable focal-plane-mounted filters, the spatial scan direction is perpendicular to the direction of spectral variation. These spectrometers share the common limitation that the number of spectral resolution elements is given by the number of pixels along the spectral (or dispersive) direction. Resolution enhancement by first passing the light input to the spectrometer through a scanned etalon or Michelson is discussed. Thus, while a detector element is scanned through a spatial resolution element of the scene, it is also temporally sampled. The analysis for all the pixels in the dispersive direction is addressed. Several specific examples are discussed. The alternate use of a Michelson for the same enhancement purpose is also discussed. Hardware systems, including actuators, sensors, and electronics, were developed such that low-resolution etalons with the performance required for implementation would weigh less than one pound, making them suitable for weight-constrained deep space missions.

  4. VAMP: A computer program for calculating volume, area, and mass properties of aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Norton, P. J.; Glatt, C. R.

    1974-01-01

    A computerized procedure developed for analyzing aerospace vehicles evaluates the properties of elemental surface areas with specified thickness by accumulating and combining them with arbitrarily specified mass elements to form a complete evaluation. Picture-like images of the geometric description can be generated.

  5. Imaging regional renal function parameters using radionuclide tracers

    NASA Astrophysics Data System (ADS)

    Qiao, Yi

    A compartmental model is given for evaluating kidney function accurately and noninvasively. This model is cast into a parallel multi-compartment structure, and each pixel region (picture element) of the kidneys is considered as a single kidney compartment. The loss of radionuclide tracers from the blood to the kidney and from the kidney to the bladder is modelled in great detail. Both the uptake function and the excretion function of the kidneys can be evaluated pixel by pixel, and regional diagnostic information on renal function is obtained. Gamma camera image data are required by this model, and a screening-test-based renal function measurement is provided. The regional blood background is subtracted from the kidney region of interest (ROI), and the kidney regional rate constants are estimated analytically using the Kuhn-Tucker multiplier method in convex programming by considering the input/output behavior of the kidney compartments. The detailed physiological model of the peripheral compartments of the system, which is not available for most radionuclide tracers, is not required in the determination of the kidney regional rate constants and the regional blood background factors within the kidney ROI. Moreover, the statistical significance of measurements is considered to assure the improved statistical properties of the estimated kidney rate constants. The relations between various renal function parameters and the kidney rate constants are established. Multiple renal function measurements can be found from the renal compartmental model. The blood radioactivity curve and the regional (or total) radiorenogram determining the regional (or total) summed behavior of the kidneys are obtained analytically, with consideration of the statistical significance of measurements, using convex programming methods for a single peripheral compartment system. In addition, a new technique for the determination of 'initial conditions' in both the blood compartment and the kidney compartment is presented. The blood curve and the radiorenogram are analyzed in great detail and a physiological analysis from the radiorenogram is given. Applications of Kuhn-Tucker multiplier methods are illustrated for the renal compartmental model in the field of nuclear medicine. Conventional kinetic data analysis methods, the maximum likelihood method, and the weighted integration method are investigated and used for comparisons. Moreover, the effect of the blood background subtraction is shown using gamma camera images in man. Several functional images are calculated, and the functional imaging technique is applied for evaluating renal function in man quantitatively and visually and compared with comments from a physician.
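
    Schematically, treating one pixel region as a single compartment driven by the blood curve leads to an uptake/excretion model of the form dC/dt = k_in * blood(t) - k_out * C(t). The sketch below fits the two rate constants for one background-subtracted pixel time-activity curve; it is a generic illustration with assumed inputs, and non-negative bounds in `curve_fit` merely stand in for the constrained (Kuhn-Tucker) estimation described in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def kidney_curve(t, blood, k_in, k_out):
        """Single-compartment solution for one pixel region:
        dC/dt = k_in * blood(t) - k_out * C(t), C(0) = 0,
        evaluated by convolving the blood input with exp(-k_out * t)."""
        dt = t[1] - t[0]
        kernel = np.exp(-k_out * t)
        return k_in * np.convolve(blood, kernel)[: len(t)] * dt

    def fit_pixel(t, blood, pixel_tac):
        """Estimate per-pixel uptake/excretion rate constants from a
        background-subtracted time-activity curve."""
        f = lambda t_, k_in, k_out: kidney_curve(t_, blood, k_in, k_out)
        (k_in, k_out), _ = curve_fit(f, t, pixel_tac, p0=[0.1, 0.01],
                                     bounds=(0, np.inf))
        return k_in, k_out
    ```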

  6. Introduction to the GEOBIA 2010 special issue: From pixels to geographic objects in remote sensing image analysis

    NASA Astrophysics Data System (ADS)

    Addink, Elisabeth A.; Van Coillie, Frieke M. B.; De Jong, Steven M.

    2012-04-01

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received considerable attention over the past 15 years for analyzing and interpreting remote sensing imagery. In contrast to traditional image analysis, GEOBIA works more like the human eye-brain combination does. The latter uses the object's color (spectral information), size, texture, shape and occurrence relative to other image objects to interpret and analyze what we see. GEOBIA starts by segmenting the image, grouping pixels together into objects, and next uses a wide range of object properties to classify the objects or to extract object properties from the image. Significant advances and improvements in image analysis and interpretation have been made thanks to GEOBIA. In June 2010 the third conference on GEOBIA took place at Ghent University after successful previous meetings in Calgary (2008) and Salzburg (2006). This special issue presents a selection of the 2010 conference papers that were worked out as full research papers for JAG. The papers cover GEOBIA applications as well as innovative methods and techniques. The topics range from vegetation mapping, forest parameter estimation, tree crown identification, urban mapping, land cover change, feature selection methods and the effects of image compression on segmentation. From the original 94 conference papers, 26 full research manuscripts were submitted; nine papers were selected and are presented in this special issue. Selection was done on the basis of quality and topic of the studies. The next GEOBIA conference will take place in Rio de Janeiro from 7 to 9 May 2012, where we hope to welcome even more scientists working in the field of GEOBIA.

  7. Miocene Basaltic Lava Flows and Dikes of the Intervening Area Between Picture Gorge and Steens Basalt of the CRBG, Eastern Oregon

    NASA Astrophysics Data System (ADS)

    Cahoon, E. B.; Streck, M. J.

    2016-12-01

    Mid-Miocene basaltic lavas and dikes are exposed in the area between the southern extent of the Picture Gorge Basalt (PGB) and the northern extent of Steens Basalt in a wide corridor of the Malheur National Forest, eastern Oregon. An approximate mid-Miocene age of sampled basaltic units is indicated by stratigraphic relationships to the 16 Ma Dinner Creek Tuff. Lavas provide an opportunity to extend and/or revise distribution areas of either CRBG unit and explore the petrologic transition between them. The PGB and the Steens Basalt largely represent geochemically distinct tholeiitic units of the CRBG; although each unit displays internal complexity. Lavas of PGB are relatively primitive (MgO 5-9 wt.%) while Steens Basalt ranges in MgO from >9 to 3 wt.% but both units are commonly coarsely porphyritic. Conversely, Steens Basalt compositions are on average more enriched in highly incompatible elements (e.g. Rb, Th) and relatively enriched in the lesser incompatible elements (e.g. Y, Yb) compared to the Picture Gorge basalts. These compositional signatures produce inclined and flat patterns on mantle-normalized incompatible trace element plots but with similar troughs and spikes, respectively. New compositional data from our study area indicate basaltic lavas can be assigned as PGB lava flows and dikes, and also to a compositional group chemically distinct between Steens Basalt and PGB. Distribution of lava flows with PGB composition extend this CRBG unit significantly south/southeast closing the exposure gap between PGB and Steens Basalt. We await data that match Steens Basalt compositions but basaltic lavas with petrographic features akin to Steens Basalt have been identified in the study area. Lavas of the transitional unit share characteristics with Upper Steens and Picture Gorge basalt types, but identify a new seemingly unique composition. This composition is slightly more depleted in the lesser incompatible elements (i.e. steeper pattern) on mantle normalized incompatible element diagrams, relatively enriched in Sr, and overall reflects more HFSE depletion than Upper Steens Basalt. Similar compositional patterns have also been observed among lavas of the Strawberry Volcanics located immediately east of our study area.

  8. Development of remote sensing technology in New Zealand, part 1. Mapping land use and environmental studies in New Zealand, part 2. Indigenous forest assessment, part 3. Seismotectonic, structural, volcanologic and geomorphic study of New Zealand, part 4

    NASA Technical Reports Server (NTRS)

    Probine, M. C.; Suggate, R. P.; Stirling, I. F.; Mcgreevy, M. G. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. As part of the tape reformatting process, a simple coded picture output program was developed. This represents each pixel's radiance level by one of a 47-character set on a non-overprinting line printer. It has not only aided in locating areas for the reformatting process, but has also formed the foundation for a supervised clustering package. This in turn has led to a simplistic but effective thematic mapping package.

  9. Night Sky Weather Monitoring System Using Fish-Eye CCD

    NASA Astrophysics Data System (ADS)

    Tomida, Takayuki; Saito, Yasunori; Nakamura, Ryo; Yamazaki, Katsuya

    Telescope Array (TA) is an international joint experiment observing ultra-high-energy cosmic rays. TA employs the fluorescence detection technique to observe cosmic rays. In this technique, the existence of cloud significantly affects the quality of the data, so cloud monitoring provides important information. We are developing two new methods for evaluating night-sky weather with pictures taken by a charge-coupled device (CCD) camera. One evaluates the amount of cloud from pixel brightness; the other counts the number of stars with a contour detection technique. The results of these methods show a clear correlation, and we conclude that both analyses are reasonable methods for weather monitoring. We also discuss the reliability of the star counting method.
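
    A minimal star-counting sketch in the spirit of the contour-detection method could look like the following (illustrative only; the thresholds, blob-size limits and camera pipeline are assumptions of the example, and OpenCV >= 4 is assumed):

    ```python
    import cv2

    def count_stars(image_path, thresh=200, min_area=2, max_area=100):
        """Count star-like blobs in a night-sky frame by thresholding and
        contour detection (OpenCV)."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        blurred = cv2.GaussianBlur(img, (3, 3), 0)           # suppress hot pixels
        _, binary = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Keep only blobs whose area is plausible for a star image
        stars = [c for c in contours
                 if min_area <= cv2.contourArea(c) <= max_area]
        return len(stars)
    ```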

  10. Hubble Watches the Red Planet as Mars Global Surveyor Begins Aerobraking

    NASA Technical Reports Server (NTRS)

    1997-01-01

    [RIGHT] This NASA Hubble Space Telescope picture of Mars was taken on Sept. 12, one day after the arrival of the Mars Global Surveyor (MGS) spacecraft and only five hours before the beginning of autumn in the Martian northern hemisphere. (Mars is tilted on its axis like Earth, so it has similar seasonal changes, including an autumnal equinox when the Sun crosses Mars' equator from the northern to the southern hemisphere).

    This Hubble picture was taken in support of the MGS mission. Hubble is monitoring the Martian weather conditions during the early phases of MGS aerobraking; in particular, the detection of large dust storms is an important input into the atmospheric models used by the MGS mission to plan aerobraking operations.

    Though a dusty haze fills the giant Hellas impact basin south of the dark fin-shaped feature Syrtis Major, the dust appears to be localized within Hellas. Unless the region covered expands significantly, the dust will not be of concern for MGS aerobraking.

    Other early signs of seasonal transitions on Mars are apparent in the Hubble picture. The northern polar ice cap is blanketed under a polar hood of clouds that typically start forming in late northern summer. As fall progresses, sunlight will dwindle in the north polar region and the seasonal polar cap of frozen carbon dioxide will start condensing onto the surface under these clouds.

    Hubble observations will continue until October 13, as MGS carefully uses the drag of the Martian atmosphere to circularize its orbit about the Red Planet. After mid-October, Mars will be too close to the Sun, in angular separation, for Hubble to safely view.

    The image is a composite of three separately filtered colored images taken with the Wide Field Planetary Camera 2 (WFPC2). Resolution is 35 miles (57 kilometers) per pixel (picture element). The Pathfinder landing site near Ares Valles is about 2200 miles (3600 kilometers) west of the center of this image, so was not visible during this observation. Mars was 158 million miles (255 million kilometers) from Earth at the time.

    [LEFT]

    An image of this region of Mars, taken in June 1997, is shown for comparison. The Hellas basin is filled with bright clouds and/or surface frost. More water ice clouds are visible across the planet than in the Sept. image, reflecting the effects of the changing season. Mars appears larger because it was 44 million miles (77 million kilometers) closer to Earth than in the September image.

    This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/

  11. Precision Timing with shower maximum detectors based on pixelated micro-channel plates

    NASA Astrophysics Data System (ADS)

    Bornheim, A.; Apresyan, A.; Ronzhin, A.; Xie, S.; Spiropulu, M.; Trevor, J.; Pena, C.; Presutti, F.; Los, S.

    2017-11-01

    Future calorimeters and shower maximum detectors at high luminosity colliders need to be highly radiation resistant and very fast. One exciting option for such a detector is a calorimeter composed of a secondary emitter as the active element. In this report we outline the study and development of a secondary emission calorimeter prototype using micro-channel plates (MCP) as the active element, which directly amplify the electromagnetic shower signal. We demonstrate the feasibility of using a bare MCP within an inexpensive and robust housing without the need for any photo cathode, which is a key requirement for high radiation tolerance. Test beam measurements of the prototype were performed with 120 GeV primary protons and secondary beams at the Fermilab Test Beam Facility, demonstrating basic calorimetric measurements and precision timing capabilities. Using multiple pixel readout on the MCP, we demonstrate a transverse spatial resolution of 0.8 mm, and time resolution better than 40 ps for electromagnetic showers.

  12. Microcomputer-based classification of environmental data in municipal areas

    NASA Astrophysics Data System (ADS)

    Thiergärtner, H.

    1995-10-01

    Multivariate data-processing methods used in mineral resource identification can be used to classify urban regions. Using elements of expert systems and geographical information systems, as well as known classification and prognosis systems, it is possible to outline a single model that consists of resistant and temporary parts of a knowledge base, including graphical input and output treatment, and of resistant and temporary elements of a bank of methods and algorithms. Whereas decision rules created by experts will be stored in expert systems directly, powerful classification rules in the form of resistant but latent (implicit) decision algorithms may be implemented in the suggested model. The latent functions will be transformed into temporary explicit decision rules by learning processes depending on the actual task(s), parameter set(s), pixel selection(s), and expert control(s). This takes place both in supervised and nonsupervised classification of multivariately described pixel sets representing municipal subareas. The model is outlined briefly and illustrated by results obtained in a target area covering a part of the city of Berlin (Germany).

  13. Surface topography of 1€ coin measured by stereo-PIXE

    NASA Astrophysics Data System (ADS)

    Gholami-Hatam, E.; Lamehi-Rachti, M.; Vavpetič, P.; Grlj, N.; Pelicon, P.

    2013-07-01

    We demonstrate the stereo-PIXE method by measuring the surface topography of the relief details on a 1€ coin. Two X-ray elemental maps were recorded simultaneously by two X-ray detectors positioned on the left and right sides of the proton microbeam. The asymmetry of the yields in the pixels of the two X-ray maps arises from the different photon attenuation along the exit paths of the characteristic X-rays from the point of emission through the sample into the X-ray detectors. In order to calibrate the inclination angle with respect to the X-ray asymmetry, a flat inclined surface model was first applied to a sample for which the matrix composition and the depth elemental concentration profile are known. After that, the yield asymmetry in each image pixel was converted into the corresponding local inclination angle using the calculated dependence of the asymmetry on the surface inclination. Finally, the quantitative topography profile was obtained by integrating the local inclination angle over the lateral displacement of the probing beam.
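
    Schematically (a toy version, not the calibrated procedure of the paper), the per-pixel yield asymmetry of the two detectors is converted to a local inclination angle via a calibration function and then integrated along the scan direction to recover a height profile. The calibration function `angle_of` below is an assumed input.

    ```python
    import numpy as np

    def topography_from_asymmetry(left_map, right_map, angle_of, step):
        """Toy stereo-PIXE topography sketch.

        left_map, right_map : X-ray yield maps from the left/right detectors
        angle_of            : calibration function mapping asymmetry to local
                              surface inclination (radians); assumed known
        step                : lateral beam step size
        """
        asym = (left_map - right_map) / (left_map + right_map)  # per-pixel asymmetry
        slope = np.tan(angle_of(asym))                          # local dz/dx
        height = np.cumsum(slope, axis=1) * step                # integrate along the scan
        return height - height[:, :1]                           # reference to first column
    ```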

  14. Use of High-Resolution Continuum Source Flame Atomic Absorption Spectrometry (HR-CS FAAS) for Sequential Multi-Element Determination of Metals in Seawater and Wastewater Samples

    NASA Astrophysics Data System (ADS)

    Peña-Vázquez, E.; Barciela-Alonso, M. C.; Pita-Calvo, C.; Domínguez-González, R.; Bermejo-Barrera, P.

    2015-09-01

    The objective of this work is to develop a method for the determination of metals in saline matrices using high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). Module SFS 6 for sample injection was used in the manual mode, and flame operating conditions were selected. The main absorption lines were used for all the elements, and the number of selected analytical pixels was 5 (CP±2) for Cd, Cu, Fe, Ni, Pb and Zn, and 3 (CP±1) for Mn. Samples were acidified (0.5% (v/v) nitric acid), and the standard addition method was used for the sequential determination of the analytes in diluted samples (1:2). The method showed good precision (RSD(%) < 4%, except for Pb (6.5%)) and good recoveries. Accuracy was checked after the analysis of an SPS-WW2 wastewater reference material diluted with synthetic seawater (dilution 1:2), showing good agreement between certified and experimental results.

  15. Precision Timing with shower maximum detectors based on pixelated micro-channel plates

    DOE PAGES

    Bornheim, A.; Apresyan, A.; Ronzhin, A.; ...

    2017-11-27

    Future calorimeters and shower maximum detectors at high luminosity colliders need to be highly radiation resistant and very fast. One exciting option for such a detector is a calorimeter composed of a secondary emitter as the active element. Here, we outline the study and development of a secondary emission calorimeter prototype using micro-channel plates (MCP) as the active element, which directly amplify the electromagnetic shower signal. We also demonstrate the feasibility of using a bare MCP within an inexpensive and robust housing without the need for any photo cathode, which is a key requirement for high radiation tolerance. Test beam measurements of the prototype were performed with 120 GeV primary protons and secondary beams at the Fermilab Test Beam Facility, demonstrating basic calorimetric measurements and precision timing capabilities. Using multiple pixel readout on the MCP, we demonstrate a transverse spatial resolution of 0.8 mm, and time resolution better than 40 ps for electromagnetic showers.

  16. Image recording requirements for earth observation applications in the next decade

    NASA Technical Reports Server (NTRS)

    Peavey, B.; Sos, J. Y.

    1975-01-01

    Future requirements for satellite-borne image recording systems are examined from the standpoints of system performance, system operation, product type, and product quality. Emphasis is on total system design while keeping in mind that the image recorder or scanner is the most crucial element which will affect the end product quality more than any other element within the system. Consideration of total system design and implementation for sustained operational usage must encompass the requirements for flexibility of input data and recording speed, pixel density, aspect ratio, and format size. To produce this type of system requires solution of challenging problems in interfacing the data source with the recorder, maintaining synchronization between the data source and the recorder, and maintaining a consistent level of quality. Film products of better quality than is currently achieved in a routine manner are needed. A 0.1 pixel geometric accuracy and 0.0001 d.u. radiometric accuracy on standard (240 mm) size format should be accepted as a goal to be reached in the near future.

  17. Precision Timing with shower maximum detectors based on pixelated micro-channel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornheim, A.; Apresyan, A.; Ronzhin, A.

    Future calorimeters and shower maximum detectors at high luminosity colliders need to be highly radiation resistant and very fast. One exciting option for such a detector is a calorimeter composed of a secondary emitter as the active element. Here, we outline the study and development of a secondary emission calorimeter prototype using micro-channel plates (MCP) as the active element, which directly amplify the electromagnetic shower signal. We also demonstrate the feasibility of using a bare MCP within an inexpensive and robust housing without the need for any photo cathode, which is a key requirement for high radiation tolerance. Test beam measurements of the prototype were performed with 120 GeV primary protons and secondary beams at the Fermilab Test Beam Facility, demonstrating basic calorimetric measurements and precision timing capabilities. Using multiple pixel readout on the MCP, we demonstrate a transverse spatial resolution of 0.8 mm, and time resolution better than 40 ps for electromagnetic showers.

  18. Optimization of Focusing by Strip and Pixel Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, G J; White, D A; Thompson, C A

    Professor Kevin Webb and students at Purdue University have demonstrated the design of conducting strip and pixel arrays for focusing electromagnetic waves [1, 2]. Their key point was to design structures to focus waves in the near field using full wave modeling and optimization methods for design. Their designs included arrays of conducting strips optimized with a downhill search algorithm and arrays of conducting and dielectric pixels optimized with the iterative direct binary search method. They used a finite element code for modeling. This report documents our attempts to duplicate and verify their results. We have modeled 2D conducting strips and both conducting and dielectric pixel arrays with moment method and FDTD codes to compare with Webb's results. New designs for strip arrays were developed with optimization by the downhill simplex method with simulated annealing. Strip arrays were optimized to focus an incident plane wave at a point or at two separated points and to switch between focusing points with a change in frequency. We also tried putting a line current source at the focus point for the plane wave to see how it would work as a directive antenna. We have not tried optimizing the conducting or dielectric pixel arrays, but modeled the structures designed by Webb with the moment method and FDTD to compare with the Purdue results.
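
    The optimization loop described above can be sketched generically: a global optimizer adjusts the strip positions so that the field intensity at the desired focus is maximized. In the sketch below, `field_at_focus` is an assumed full-wave forward solver (moment method or FDTD), and SciPy's dual_annealing is used only as a stand-in for the downhill simplex with simulated annealing search mentioned in the report.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    def focus_objective(strip_positions, field_at_focus):
        """Negative intensity at the focal point for a given strip layout;
        `field_at_focus` returns the complex field at the focus."""
        return -abs(field_at_focus(strip_positions)) ** 2

    def optimize_strips(field_at_focus, n_strips=10, aperture=5.0):
        """Sketch of annealed global optimization of strip positions."""
        bounds = [(-aperture / 2, aperture / 2)] * n_strips
        result = dual_annealing(focus_objective, bounds,
                                args=(field_at_focus,), maxiter=200)
        return result.x, -result.fun
    ```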

  19. High-resolution CASSINI-VIMS mosaics of Titan and the icy Saturnian satellites

    USGS Publications Warehouse

    Jaumann, R.; Stephan, K.; Brown, R.H.; Buratti, B.J.; Clark, R.N.; McCord, T.B.; Coradini, A.; Capaccioni, F.; Filacchione, G.; Cerroni, P.; Baines, K.H.; Bellucci, G.; Bibring, J.-P.; Combes, M.; Cruikshank, D.P.; Drossart, P.; Formisano, V.; Langevin, Y.; Matson, D.L.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, Christophe; Soderbloom, L.A.; Griffith, C.; Matz, K.-D.; Roatsch, Th.; Scholten, F.; Porco, C.C.

    2006-01-01

    The Visual Infrared Mapping Spectrometer (VIMS) onboard the CASSINI spacecraft obtained new spectral data of the icy satellites of Saturn after its arrival at Saturn in June 2004. VIMS operates in a spectral range from 0.35 to 5.2 ??m, generating image cubes in which each pixel represents a spectrum consisting of 352 contiguous wavebands. As an imaging spectrometer VIMS combines the characteristics of both a spectrometer and an imaging instrument. This makes it possible to analyze the spectrum of each pixel separately and to map the spectral characteristics spatially, which is important to study the relationships between spectral information and geological and geomorphologic surface features. The spatial analysis of the spectral data requires the determination of the exact geographic position of each pixel on the specific surface and that all 352 spectral elements of each pixel show the same region of the target. We developed a method to reproject each pixel geometrically and to convert the spectral data into map projected image cubes. This method can also be applied to mosaic different VIMS observations. Based on these mosaics, maps of the spectral properties for each Saturnian satellite can be derived and attributed to geographic positions as well as to geological and geomorphologic surface features. These map-projected mosaics are the basis for all further investigations. ?? 2006 Elsevier Ltd. All rights reserved.

  20. Matrix light and pixel light: optical system architecture and requirements to the light source

    NASA Astrophysics Data System (ADS)

    Spinger, Benno; Timinger, Andreas L.

    2015-09-01

    Modern automotive headlamps enable improved functionality for more driving comfort and safety. Matrix or pixel light headlamps are not restricted to either pure low beam functionality or pure high beam. Light in the direction of oncoming traffic is selectively switched off, potential hazards can be marked via an isolated beam and the illumination on the road can even follow a bend. The optical architectures that enable these advanced functionalities are diverse. Electromechanical shutters and lens units moved by electric motors were the first ways to realize these systems. Switching multiple LED light sources is a more elegant and mechanically robust solution. While many basic functionalities can already be realized with a limited number of LEDs, an increasing number of pixels will lead to more driving comfort and better visibility. The required optical system needs not only to generate a desired beam distribution with a high angular dynamic, but also needs to guarantee minimal stray light and cross talk between the different pixels. The direct projection of the LED array via a lens is a simple but not very efficient optical system. We discuss different optical elements for pre-collimating the light with minimal cross talk and improved contrast between neighboring pixels. Depending on the selected optical system, we derive the basic light source requirements: luminance, surface area, contrast, flux and color homogeneity.

  1. Fast 3D Net Expeditions: Tools for Effective Scientific Collaboration on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Watson, Val; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D (three dimensional), high resolution, dynamic, interactive viewing of scientific data. The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self-controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG (Motion Picture Expert Group) movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: (1) The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. (2) The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). (3) A rich variety of guided expeditions through the data can be included easily. (4) A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. (5) The scenes can be viewed in 3D using stereo vision. (6) The network bandwidth for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.) This talk will illustrate the use of these new technologies and present a proposal for using these technologies to improve science education.

  2. High resolution 1280×1024, 15 μm pitch compact InSb IR detector with on-chip ADC

    NASA Astrophysics Data System (ADS)

    Nesher, O.; Pivnik, I.; Ilan, E.; Calalhorra, Z.; Koifman, A.; Vaserman, I.; Oiknine Schlesinger, J.; Gazit, R.; Hirsh, I.

    2009-05-01

    Over the last decade, SCD has developed and manufactured high quality InSb Focal Plane Arrays (FPAs), which are currently used in many applications worldwide. SCD's production line includes many different types of InSb FPA with formats of 320x256, 480x384 and 640x512 elements and with pitch sizes in the range of 15 to 30 μm. All these FPAs are available in various packaging configurations, including fully integrated Detector-Dewar-Cooler Assemblies (DDCA) with either closed-cycle Stirling or open-loop Joule-Thomson coolers. With an increasing need for higher resolution, SCD has recently developed a new large format 2-D InSb detector with 1280x1024 elements and a pixel size of 15 μm. The InSb 15 μm pixel technology has already been proven at SCD with the "Pelican" detector (640x512 elements), which was introduced at the Orlando conference in 2006. A new signal processor was developed at SCD for use in this mega-pixel detector. This Readout Integrated Circuit (ROIC) is designed for, and manufactured with, 0.18 μm CMOS technology. The migration from 0.5 to 0.18 μm CMOS technology supports SCD's roadmap for the reduction of pixel size and power consumption and is in line with the increasing demand for improved performance and on-chip functionality. Consequently, the new ROIC maintains the same level of performance and functionality with a 15 μm pitch, as exists in our 20 μm-pitch ROICs based on 0.5 μm CMOS technology. Similar to Sebastian (SCD ROIC with A/D on chip), this signal processor also includes A/D converters on the chip and demonstrates the same level of performance, but with reduced power consumption. The pixel readout rate has been increased up to 160 MHz in order to support a high frame rate, resulting in 120 Hz operation with a window of 1024×1024 elements at ~130 mW. These A/D converters on chip eliminate the need for 16 A/D channels on board (in the case of an analog ROIC), which would operate at 10 MHz and consume about 8 Watts. A Dewar has been designed with a stiffened detector support to withstand harsh environmental conditions with a minimal contribution to the heat load of the detector. The combination of the 0.18 μm-based low power CMOS technology for the ROIC and the stiffening of the detector support within the Dewar has enabled the use of the Ricor K508 cryo-cooler (0.5 W). This has created a high-resolution detector in a very compact package. In this paper we present the basic concept of the new detector. We will describe its construction and will present electrical and radiometric characterization results.

  3. SNR improvement for hyperspectral application using frame and pixel binning

    NASA Astrophysics Data System (ADS)

    Rehman, Sami Ur; Kumar, Ankush; Banerjee, Arup

    2016-05-01

    Hyperspectral imaging spectrometer systems are increasingly being used in the field of remote sensing for a variety of civilian and military applications. The ability of such instruments to discriminate finer spectral features, along with improved spatial and radiometric performance, has made them a powerful tool in the field of remote sensing. Design and development of spaceborne hyperspectral imaging spectrometers poses a lot of technological challenges in terms of optics, dispersion element, detectors, electronics and mechanical systems. The main factors that define the type of detectors are the spectral region, SNR, dynamic range, pixel size, number of pixels, frame rate, operating temperature etc. Detectors with higher quantum efficiency and higher well depth are the preferred choice for such applications. CCD based Si detectors serve the requirement of high well depth for VNIR band spectrometers but suffer from smear. Smear can be controlled by using CMOS detectors. Si CMOS detectors with large format arrays are available. These detectors generally have smaller pitch and low well depth. The binning technique can be used with available CMOS detectors to meet the large swath, higher resolution and high SNR requirements. The availability of a larger satellite dwell time can be used to bin multiple frames to increase the signal collection even with lower well depth detectors and ultimately increase the SNR. Lab measurements reveal that the SNR improvement by frame binning is greater than that by pixel binning. The effect of pixel binning as compared to frame binning will be discussed, and the degradation of SNR relative to the theoretical value for pixel binning will be analyzed.
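
    As an illustration of the frame-binning argument above, the sketch below co-adds N frames of the same ground sample under a simple noise model (Poisson shot noise plus Gaussian read noise added once per frame read); the electron counts, noise figures and function name are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    def snr_frame_binning(signal_e_per_frame, read_noise_e, dark_e_per_frame, n_frames):
        # Co-add n_frames reads of the same ground sample.
        # Model (an assumption, not the paper's): Poisson shot noise on signal
        # and dark charge, plus Gaussian read noise added once per frame read.
        signal = n_frames * signal_e_per_frame
        variance = n_frames * (signal_e_per_frame + dark_e_per_frame + read_noise_e ** 2)
        return signal / np.sqrt(variance)

    # Example: a shallow-well CMOS pixel collecting 20 ke- per frame,
    # 30 e- read noise, 100 e- dark charge per frame.
    for n in (1, 4, 16):
        print(n, round(snr_frame_binning(20_000, 30, 100, n), 1))
    ```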

  4. The Transition-Edge-Sensor Array for the Micro-X Sounding Rocket

    NASA Technical Reports Server (NTRS)

    Eckart, M. E.; Adams, J. S.; Bailey, C. N.; Bandler, S. R.; Busch, Sarah Elizabeth; Chervenak J. A.; Finkbeiner, F. M.; Kelley, R. L.; Kilbourne, C. A.; Porst, J. P.; hide

    2012-01-01

    The Micro-X sounding rocket program will fly a 128-element array of transition-edge-sensor microcalorimeters to enable high-resolution X-ray imaging spectroscopy of the Puppis-A supernova remnant. To match the angular resolution of the optics while maximizing the field-of-view and retaining a high energy resolution (< 4 eV at 1 keV), we have designed the pixels using 600 x 600 sq. micron Au/Bi absorbers, which overhang 140 x 140 sq. micron Mo/Au sensors. The data-rate capabilities of the rocket telemetry system require the pulse decay to be approximately 2 ms to allow a significant portion of the data to be telemetered during flight. Here we report experimental results from the flight array, including measurements of energy resolution, uniformity, and absorber thermalization. In addition, we present studies of test devices that have a variety of absorber contact geometries, as well as a variety of membrane-perforation schemes designed to slow the pulse decay time to match the telemetry requirements. Finally, we describe the reduction in pixel-to-pixel crosstalk afforded by an angle-evaporated Cu backside heatsinking layer, which provides Cu coverage on the four sidewalls of the silicon wells beneath each pixel.

  5. Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment

    NASA Astrophysics Data System (ADS)

    Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.

    2007-05-01

    A pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax). It describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelets decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it on the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization where the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
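
    The chi-square fit described above amounts to a weighted linear least-squares problem once the fast simulator has tabulated the pixel response to each spectral bin. The sketch below shows that step under assumed array shapes; the function and variable names are illustrative and not part of the SNAP software.

    ```python
    import numpy as np

    def extract_spectrum(response, data, sigma):
        # response : (n_pixels, n_bins) pixel response to unit flux in each spectral bin,
        #            as tabulated by a fast simulator (shapes are assumptions).
        # data     : (n_pixels,) observed pixel contents (both arms concatenated).
        # sigma    : (n_pixels,) per-pixel noise estimates.
        # Returns the bin contents s minimising chi2 = sum(((data - response @ s) / sigma)**2),
        # i.e. a weighted linear least-squares fit.
        w = 1.0 / sigma
        A = response * w[:, None]
        b = data * w
        s, *_ = np.linalg.lstsq(A, b, rcond=None)
        return s
    ```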

  6. Bioinspired architecture approach for a one-billion transistor smart CMOS camera chip

    NASA Astrophysics Data System (ADS)

    Fey, Dietmar; Komann, Marcus

    2007-05-01

    In the paper we present a massively parallel VLSI architecture for future smart CMOS camera chips with up to one billion transistors. Traditional parallel architectures based on MIMD or SIMD approaches and oriented toward central structures will fail to exploit efficiently the potential offered by future micro- or nanoelectronic devices. They require too long and too many global interconnects for the distribution of code or the access to common memory. On the other hand, nature developed self-organising and emergent principles to successfully manage complex structures built from many interacting simple elements. We therefore developed a new emergent computing paradigm, denoted Marching Pixels, based on a mixture of bio-inspired computing models such as cellular automata and artificial ants. In the paper we present different Marching Pixels algorithms and the corresponding VLSI array architecture. A detailed synthesis result for a 0.18 μm CMOS process shows that a 256×256 pixel image is processed in less than 10 ms assuming a moderate 100 MHz clock rate for the processor array. Future higher integration densities and 3D chip stacking technology will allow the integration and processing of megapixel images within the same time, since our architecture is fully scalable.

  7. Coded-Aperture X- or gamma-ray telescope with least-squares image reconstruction. III. Data acquisition and analysis enhancements

    NASA Astrophysics Data System (ADS)

    Kohman, T. P.

    1995-05-01

    The design of a cosmic X- or gamma-ray telescope with least-squares image reconstruction and its simulated operation have been described (Rev. Sci. Instrum. 60, 3396 and 3410 (1989)). Use of an auxiliary open aperture ("limiter") ahead of the coded aperture limits the object field to fewer pixels than detector elements, permitting least-squares reconstruction with improved accuracy in the imaged field; it also yields a uniformly sensitive ("flat") central field. The design has been enhanced to provide for mask-antimask operation. This cancels and eliminates uncertainties in the detector background, and the simulated results have virtually the same statistical accuracy (pixel-by-pixel output-input RMSD) as with a single mask alone. The simulations have been made more realistic by incorporating instrumental blurring of sources. A second-stage least-squares procedure has been developed to determine the precise positions and total fluxes of point sources responsible for clusters of above-background pixels in the field resulting from the first-stage reconstruction. Another program converts source positions in the image plane to celestial coordinates and vice versa, the image being a gnomonic projection of a region of the sky.

  8. Low energy cross sections and underground laboratories

    NASA Astrophysics Data System (ADS)

    Corvisiero, P.; LUNA Collaboration

    2005-04-01

    It is known that the chemical elements and their isotopes were created by nuclear fusion reactions in the hot interiors of remote and long-vanished stars over many billions of years [C. Rolfs, W.S. Rodney, Cauldrons in the cosmos, University of Chicago Press, Chicago (1988)]. The present picture is that all elements from carbon to uranium have been produced entirely within stars during their fiery lifetimes and explosive deaths. The detailed understanding of the origin of the chemical elements and their isotopes combines astrophysics and nuclear physics, and forms what is called nuclear astrophysics. In turn, nuclear reactions are at the heart of nuclear astrophysics: they influence sensitively the nucleosynthesis of the elements in the earliest stages of the universe and in all the objects formed thereafter, and control the associated energy generation, neutrino luminosity, and evolution of stars. A good knowledge of the rates of these fusion reactions is essential to understanding this broad picture. Some of the most important experimental techniques to measure the corresponding cross sections, based both on direct and indirect methods, will be described in this paper.

  9. Hyperspectral Imaging at the Micro- and Nanoscale using Energy-dispersive Spectroscopy (EDS) with Silicon Drift Detector (SDD) and EBSD Analysis

    NASA Astrophysics Data System (ADS)

    Salge, T.; Goran, D.

    2010-12-01

    SDD systems have become state of the art technology in the field of EDS. The main characteristic of the SDDs is their extremely high pulse load capacity of up to 750,000 counts per second at good energy resolution (<123 eV Mn-Kα, <46 eV C-Kα at 100,000 counts per second). These properties, in conjunction with the electron backscatter diffraction (EBSD) technique and modern data processing, allow not only high speed mapping but also hyperspectral analysis. Here, a database is created that contains an EDS spectrum and/or EBSD pattern for each pixel of the SEM image, setting the stage for innovative analysis options: The Maximum Pixel Spectrum function [1] synthesizes a spectrum out of the EDS database, consisting of the highest count level found in each spectrum channel. Here, (trace) elements which occur in only one pixel can be detected qualitatively. Areas of similar EDS composition can be made visible with Autophase, a spectroscopic phase detection system. In cases where the crystallographic phase assessment by EBSD is problematic due to pattern similarity, the EDS signal can be used as additional information for phase separation. This paper presents geoscience applications with the QUANTAX system with EDS SDD and EBSD detector using the options described above: (1) Drill core analysis of a Chicxulub impact ejecta sequence from the K/Pg boundary at ODP leg 207 [2] using fast, high resolution element maps. (2) Detection of monazite in granite by the Maximum Pixel Spectrum function. (3) Distribution of elements with overlapping peaks by deconvolution, using the example of rare earth elements in zoned monazite. (4) Spectroscopic phase analysis of a sulfate-carbonate-dominated impact matrix at borehole UNAM-7 from the Chicxulub impact crater [3]. (5) EBSD studies with examples of iron meteorites and impact-induced, recrystallized carbonate melts [4]. In addition, continuing technological advances require the elemental analysis of increasingly smaller structures in many fields, including geosciences. It will be demonstrated that using low accelerating voltages, the element distribution of structures at the nanoscale in bulk samples can be displayed in a short time due to optimized signal processing and solid angle. Peaks composed of contributions from several overlapping elements, e.g. N-K (392 eV) and Ti-Ll (395 eV), can be deconvolved [6] using an improved atomic database with 250 additional L, M and N lines below 4 keV. Improved light element quantification allows the standardless quantification of features at the nanoscale such as rutile grains 200-500 nm in size. References: [1] Bright D. S. & Newbury D. E. (2004) Journal of Microscopy 216:186-193. [2] Schulte P. et al. (2010) Science 327: 1214-1218. [3] Salge T. (2007) PhD thesis: 130p. http://edoc.huberlin.de/docviews/abstract.php?lang=ger&id=27753. [4] Deutsch A. et al. MAPS 45: A45. [6] Tunckan O. (2010) Joining ceramics using capacitor discharge technique and determination of metal ceramic interface reactions, PhD thesis, Anadolu University, Eskisehir, Turkey. Acknowledgements: We thank P. Schulte, A. Deutsch, ODP, L. Hecht, A. Kearsley, J. Urrutria-Fucugauchi, O. Tunckan and S. Turan for generously providing the samples.

  10. Dione Before the Rings

    NASA Image and Video Library

    2015-11-23

    Saturn's rings are so expansive that they often sneak into Cassini's pictures of other bodies. Here, they appear with the planet in a picture taken during a close flyby of Dione. The flyby of Dione (698 miles or 1123 kilometers across) during which this image was taken was the last close encounter with this moon during Cassini's mission. The main goal of the flyby was to use the spacecraft as a probe to measure Dione's gravity field. However, scientists also managed to take some very close images of the surface. All of the data will be helpful to understand the interior structure and geological history of this distant, icy world. This view is centered on terrain at 7 degrees south latitude, 122 degrees west longitude. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Aug. 17, 2015. The view was obtained at a distance of approximately 48,000 miles (77,000 kilometers) from Dione and at a Sun-Dione-spacecraft, or phase angle of 35 degrees. Image scale is 1,520 feet (464 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18344

  11. Autumn Frost, North Polar Sand Dunes

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Autumn in the martian northern hemisphere began around August 1, 1999. Almost as soon as northern fall began, the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) started documenting the arrival of autumn frost--a precursor to the cold winter that will arrive in late December 1999. The first features to become covered by frost were the sand dunes that surround the north polar ice cap. The dunes seen here would normally appear very dark--almost black--except when covered by frost. Why the dunes begin to frost sooner than the surrounding surfaces is a mystery: perhaps the dunes contain water vapor that emerges from the sand during the day and condenses again at night. This picture shows dunes near 74.7°N, 61.4°W at a resolution of about 7.3 meters (24 feet) per pixel. The area covered is about 3 km (1.9 mi) across and is illuminated from the upper right. The picture appears to be somewhat fuzzy and grainy because the dunes here are seen through the thin haze of the gathering north polar winter hood (i.e., clouds).

  12. Mesoscale Waves in Jupiter's Atmosphere

    NASA Technical Reports Server (NTRS)

    1997-01-01

    These two images of Jupiter's atmosphere were taken with the violet filter of the Solid State Imaging (CCD) system aboard NASA's Galileo spacecraft. The images were obtained on June 26, 1996; the lower image was taken approximately one rotation (9 hours) later than the upper image.

    Mesoscale waves can be seen in the center of the upper image. They appear as a series of about 15 nearly vertical stripes; the wave crests are aligned north-south. The wave packet is about 300 kilometers in length and is aligned east-west. In the lower image there is no indication of the waves, though the clouds appear to have been disturbed. Such waves were seen also in images obtained by NASA's Voyager spacecraft in 1979, though lower spatial and time resolution made tracking of features such as these nearly impossible.

    Mesoscale waves occur when the wind shear is strong in an atmospheric layer that is sandwiched vertically between zones of stable stratification. The orientation of the wave crests is perpendicular to the shear. Thus, a wave observation gives information about how the wind direction changes with height in the atmosphere.

    North is at the top of these images which are centered at approximately 15 South latitude and 307 West longitude. In the upper image, each picture element (pixel) subtends a square of about 36 kilometers on a side, and the spacecraft was at a range of more than 1.7 million kilometers from Jupiter. In the lower image, each pixel subtends a square of about 30 kilometers on a side, and the spacecraft was at a range of more than 1.4 million kilometers from Jupiter.

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  13. East Candor Chasma

    NASA Image and Video Library

    1998-06-08

    During its examination of Mars, NASA's Viking 1 spacecraft returned images of Valles Marineris, a huge canyon system 5,000 km long, up to 240 km wide, and 6.5 km deep, whose connected chasma or valleys may have formed from a combination of erosional collapse and structural activity. The view shows east Candor Chasma, one of the connected valleys of Valles Marineris; north toward top of frame; for scale, the impact crater in upper right corner is 15 km (9 miles) wide. The image, centered at latitude 7.5 degrees S., longitude 67.5 degrees, is a composite of Viking 1 Orbiter high-resolution (about 80 m/pixel or picture element) images in black and white and low-resolution (about 250 m/pixel) images in color. The Viking 1 craft landed on Mars in July of 1976. East Candor Chasma occupies the eastern part of the large west-northwest-trending trough of Candor Chasma. This section is about 150 km wide. East Candor Chasma is bordered on the north and south by walled cliffs, most likely faults. The walls may have been dissected by landslides forming reentrants; one area on the north wall shows what appears to be landslide debris. Both walls show spur-and-gully morphology and smooth sections. In the lower part of the image northwest-trending, linear depressions on the plateau are younger graben or fault valleys that cut the south wall. Material central to the chasma shows layering in places and has been locally eroded by the wind to form flutes and ridges. These interior layered deposits have curvilinear reentrants carved into them, and in one locale a lobe flows away from the top of the interior deposit. The lobe may be mass-wasting deposits due to collapse of older interior deposits (Lucchitta, 1996, LPSC XXVII abs., p. 779- 780); this controversial idea requires that the older layered deposits were saturated with ice, perhaps from former lakes, and that young volcanism and/or tectonism melted the ice and made the material flow. http://photojournal.jpl.nasa.gov/catalog/PIA00424

  14. Color Breakup In Sequentially-Scanned LC Displays

    NASA Technical Reports Server (NTRS)

    Arend, L.; Lubin, J.; Gille, J.; Larimer, J.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    In sequentially-scanned liquid-crystal displays the chromatic components of color pixels are distributed in time. For such displays eye, head, display, and image-object movements can cause the individual color elements to be visible. We analyze conditions (scan designs, types of eye movement) likely to produce color breakup.

  15. LCD Projectors: An Evaluation of Features and Utilization for Educators.

    ERIC Educational Resources Information Center

    Fawson, Curtis E.

    1990-01-01

    Describes liquid crystal display (LCD) projectors and discusses their use in educational settings. Highlights include rear screen projection; LCD projectors currently available and the number of pixel elements in each; and examples of instructional applications, including portable setups, and use with videocassette recorders (VCRs), computers, and…

  16. Quantitative landslide risk assessment and mapping on the basis of recent occurrences

    NASA Astrophysics Data System (ADS)

    Remondo, Juan; Bonachea, Jaime; Cendrero, Antonio

    A quantitative procedure for mapping landslide risk is developed from considerations of hazard, vulnerability and valuation of exposed elements. The approach, based on former work by the authors, is applied in the Bajo Deba area (northern Spain), where a detailed study of landslide occurrence and damage in the recent past (last 50 years) was carried out. Analyses and mapping are implemented in a Geographic Information System (GIS). The method is based on a susceptibility model developed previously from statistical relationships between past landslides and terrain parameters related to instability. Extrapolations based on past landslide behaviour were used to calculate failure frequency for the next 50 years. A detailed inventory of direct damage due to landslides during the study period was carried out and the main elements at risk in the area identified and mapped. Past direct (monetary) losses per type of element were estimated and expressed as an average 'specific loss' for events of a given magnitude (corresponding to a specified scenario). Vulnerability was assessed by comparing losses with the actual value of the elements affected and expressed as a fraction of that value (0-1). From hazard, vulnerability and monetary value, risk was computed for each element considered. Direct risk maps (€/pixel/year) were obtained and indirect losses from the disruption of economic activities due to landslides assessed. The final result is a risk map and table combining all losses per pixel for a 50-year period. Total monetary value at risk for the Bajo Deba area in the next 50 years is about 2.4 × 10⁶ Euros.
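
    The core combination of hazard, vulnerability and monetary value into a per-pixel direct risk can be sketched on raster arrays as below; the array values, units and function name are illustrative assumptions, not the authors' GIS implementation.

    ```python
    import numpy as np

    def direct_risk(hazard, vulnerability, value):
        # hazard        : annual landslide frequency (events/year) per pixel
        # vulnerability : expected fraction of value lost per event (0-1)
        # value         : monetary value of exposed elements per pixel (EUR)
        # Returns direct risk in EUR/pixel/year.
        return hazard * vulnerability * value

    # Toy 3x3 rasters (values are illustrative, not from the study area).
    hazard = np.array([[0.01, 0.02, 0.00],
                       [0.05, 0.01, 0.00],
                       [0.00, 0.03, 0.01]])
    vulnerability = np.full((3, 3), 0.4)
    value = np.full((3, 3), 12_000.0)

    risk_per_year = direct_risk(hazard, vulnerability, value)   # EUR/pixel/year
    risk_50_years = 50 * risk_per_year                          # EUR/pixel over 50 years
    ```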

  17. Exploring semantic and phonological picture-word priming in adults who stutter using event-related potentials

    PubMed Central

    Maxfield, Nathan D.; Pizon-Moore, Angela A.; Frisch, Stefan A.; Constantine, Joseph L.

    2011-01-01

    Objective Our aim was to investigate how semantic and phonological information is processed in adults who stutter (AWS) preparing to name pictures, following-up a report that event-related potentials (ERPs) in AWS evidenced atypical semantic picture-word priming (Maxfield et al., 2010). Methods Fourteen AWS and 14 typically-fluent adults (TFA) participated. Pictures, named at a delay, were followed by probe words. Design elements not used in Maxfield et al. (2010) let us evaluate both phonological and semantic picture-word priming. Results TFA evidenced typical priming effects in probe-elicited ERPs. AWS evidenced diminished Semantic priming, and reverse Phonological N400 priming. Conclusions Results point to atypical processing of semantic and phonological information in AWS. Discussion considers whether AWS ERP effects reflect unstable activation of target label semantic and phonological representations, strategic inhibition of target label phonological neighbors, and/or phonological label-probe competition. Significance Results raise questions about how mechanisms that regulate activation spreading operate in AWS. PMID:22055837

  18. Old Night Vision Meets New

    NASA Image and Video Library

    2017-12-08

    NASA image acquired November 11-12, 2012. On November 12, 2012, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured the top nighttime image of city, village, and highway lights near Delhi, India. For comparison, the lower image shows the same area one night earlier, as observed by the Operational Line Scan (OLS) system on a Defense Meteorological Satellite Program (DMSP) spacecraft. Since the 1960s, the U.S. Air Force has operated DMSP in order to observe clouds and other weather variables in key wavelengths of infrared and visible light. Since 1972, the DMSP satellites have included the Operational Linescan System (OLS), which gives weather forecasters some ability to see in the dark. It has been a highly successful sensor, but it is dependent on older technology with lower resolution than most scientists would like. And for many years, DMSP data were classified. Through improved optics and “smart” sensing technology, the VIIRS “day-night band” is ten to fifteen times better than the OLS system at resolving the relatively dim lights of human settlements and reflected moonlight. Each VIIRS pixel spans roughly 740 meters (0.46 miles) across, compared to the 3-kilometer footprint (1.86 miles) of DMSP. Beyond the resolution, the new sensor can detect dimmer light sources. And since the VIIRS measurements are fully calibrated (unlike DMSP), scientists now have the precision required to make quantitative measurements of clouds and other features. “In contrast to the Operational Line Scan system, the imagery from the new day-night band is almost like a nearsighted person putting on glasses for the first time and looking at the Earth anew,” says Steve Miller, an atmospheric scientist at Colorado State University. “VIIRS has allowed us to bring this coarse, blurry view of night lights into clearer focus. Now we can see things in such great detail and at such high precision that we’re really talking about a new kind of measurement.” Unlike a film camera that captures a photograph in one exposure, VIIRS produces an image by repeatedly scanning a scene and resolving it as millions of individual picture elements, or pixels. The day-night band goes a step further, determining on-the-fly whether to use its low, medium, or high-gain mode. If a pixel is very bright, a low-gain mode on the sensor prevents the pixel from over-saturating. If the pixel is dark, the signal will be amplified. “On a hand-held camera, there’s a nighttime setting where the shutter will stay open much longer than it would under daylight imaging conditions,” says Chris Elvidge, who leads the Earth Observation Group at NOAA’s National Geophysical Data Center. “The day-night band is similar. It increases the exposure time—the amount of time that it’s collecting photons for pixels.” NASA Earth Observatory image by Jesse Allen and Robert Simmon, using Suomi NPP VIIRS and DMSP OLS data provided courtesy of Chris Elvidge (NOAA National Geophysical Data Center). Suomi NPP is the result of a partnership between NASA, NOAA, and the Department of Defense. Caption by Mike Carlowicz. Instrument: Suomi NPP - VIIRS Credit: NASA Earth Observatory

  19. Documentary Elements in Early Films.

    ERIC Educational Resources Information Center

    Sanderson, Richard A.

    Focusing on documentary elements, this study examines the film content and film techniques of 681 motion pictures produced in the United States prior to 1904. Analysis of films by type, subject matter, and trends in subject matter shows that one-third of the early films are documentary in type and three-fourths of the films use subject matter of a…

  20. Sub-surface defect detection by using active thermography and advanced image edge detection

    NASA Astrophysics Data System (ADS)

    Tse, Peter W.; Wang, Gaochao

    2017-05-01

    Active or pulsed thermography is a popular non-destructive testing (NDT) tool for inspecting the integrity and anomaly of industrial equipment. One of the recent research trends in using active thermography is to automate the process of detecting hidden defects. As of today, human effort is still being used to adjust the temperature intensity of the thermo-camera in order to visually observe the difference in cooling rates caused by a normal target as compared to that caused by a sub-surface crack existing inside the target. To avoid the tedious human-visual inspection and minimize human induced error, this paper reports the design of an automatic method that is capable of detecting subsurface defects. The method used the technique of active thermography, edge detection in machine vision and a smart algorithm. An infrared thermo-camera was used to capture a series of temporal pictures after slightly heating up the inspected target by flash lamps. Then the Canny edge detector was employed to automatically extract the defect related images from the captured pictures. The captured temporal pictures were preprocessed with the Canny edge detector and then a smart algorithm was used to reconstruct the whole sequence of image signals. During the process, noise and irrelevant backgrounds existing in the pictures were removed. Consequently, the contrast of the edges of defective areas was enhanced. The designed automatic method was verified by real pipe specimens that contain sub-surface cracks. After applying this smart method, the edges of cracks can be revealed visually without the need of using manual adjustment on the settings of the thermo-camera. With the help of this automatic method, the tedious process of manually adjusting the colour contrast and the pixel intensity in order to reveal defects can be avoided.
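
    A minimal sketch of the Canny edge-extraction step on a sequence of thermal frames is given below, using OpenCV; the thresholds, normalisation and accumulation rule are assumptions for illustration and do not reproduce the authors' full smart algorithm.

    ```python
    import cv2
    import numpy as np

    def defect_edges(frames, low=30, high=90):
        # frames: iterable of 2-D thermal images (any numeric dtype).
        # Thresholds and the OR-accumulation rule are illustrative assumptions.
        acc = None
        for frame in frames:
            # Stretch each frame to 8 bits so cooling-rate contrast is usable by Canny.
            img = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            img = cv2.GaussianBlur(img, (5, 5), 0)      # suppress sensor noise
            edges = cv2.Canny(img, low, high)           # edge map of this frame
            acc = edges if acc is None else cv2.bitwise_or(acc, edges)
        return acc

    # Usage: edges = defect_edges(sequence_of_thermal_frames)
    ```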

  1. Preliminary optical design of the stereo channel of the imaging system simbiosys for the BepiColombo ESA mission

    NASA Astrophysics Data System (ADS)

    Da Deppo, Vania; Naletto, Giampiero; Cremonese, Gabriele; Debei, Stefano; Flamini, Enrico

    2017-11-01

    The paper describes the optical design and performance budget of a novel catadioptric instrument chosen as baseline for the Stereo Channel (STC) of the imaging system SIMBIOSYS for the BepiColombo ESA mission to Mercury. The main scientific objective is the 3D global mapping of the entire surface of Mercury with a scale factor of 50 m per pixel at periherm in four different spectral bands. The system consists of two twin cameras looking at +/-20° from nadir and sharing some components, such as the relay element in front of the detector and the detector itself. The field of view of each channel is 4° x 4° with a scale factor of 23''/pixel. The system guarantees good optical performance with Ensquared Energy of the order of 80% in one pixel. For the straylight suppression, an intermediate field stop is foreseen, which gives the possibility to design an efficient baffling system.

  2. Continuous phase and amplitude holographic elements

    NASA Technical Reports Server (NTRS)

    Maker, Paul D. (Inventor); Muller, Richard E. (Inventor)

    1995-01-01

    A method for producing a phase hologram using e-beam lithography provides n-ary levels of phase and amplitude by first producing an amplitude hologram on a transparent substrate by e-beam exposure of a resist over a film of metal by exposing n ≤ m × m spots of an array of spots for each pixel, where the spots are randomly selected in proportion to the amplitude assigned to each pixel, and then after developing and etching the metal film producing a phase hologram by e-beam lithography using a low contrast resist, such as PMMA, and n-ary levels of low doses less than approximately 200 μC/cm² and preferably in the range of 20-200 μC/cm², and aggressive development using pure acetone for an empirically determined time (about 6 s) controlled to within 1/10 s to produce partial development of each pixel in proportion to the n-ary level of dose assigned to it.
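
    The random spot selection described above can be sketched as follows: for each hologram pixel, roughly n ≤ m × m spots of its exposure sub-array are switched on in proportion to the pixel's amplitude. The sub-array size m, the rounding rule and the function name are assumptions for illustration.

    ```python
    import numpy as np

    def amplitude_to_spot_mask(amplitudes, m=4, rng=None):
        # amplitudes : 2-D array of hologram pixel amplitudes normalised to [0, 1].
        # For each pixel, n <= m*m spots of its m x m exposure sub-array are
        # switched on at random, with n proportional to the amplitude.
        rng = np.random.default_rng() if rng is None else rng
        H, W = amplitudes.shape
        mask = np.zeros((H * m, W * m), dtype=bool)
        for i in range(H):
            for j in range(W):
                n = int(round(float(amplitudes[i, j]) * m * m))
                sub = np.zeros(m * m, dtype=bool)
                sub[rng.choice(m * m, size=n, replace=False)] = True
                mask[i * m:(i + 1) * m, j * m:(j + 1) * m] = sub.reshape(m, m)
        return mask

    # Usage: mask = amplitude_to_spot_mask(np.random.random((8, 8)), m=4)
    ```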

  3. Large CMOS imager using hadamard transform based multiplexing

    NASA Technical Reports Server (NTRS)

    Karasik, Boris S.; Wadsworth, Mark V.

    2005-01-01

    We have developed a concept design for a large (10k x 10k) CMOS imaging array whose elements are grouped in small subarrays with N pixels in each. The subarrays are code-division multiplexed using Hadamard Transform (HT) based encoding. The Hadamard code improves the signal-to-noise ratio (SNR) referenced to the read-out amplifier by a factor of √N. This way of grouping pixels reduces the number of hybridization bumps by N. A single chip layout has been designed and the architecture of the imager has been developed to accommodate the HT based multiplexing into the existing CMOS technology. The imager architecture allows for a trade-off between the speed and the sensitivity. The envisioned imager would operate at a speed >100 fps with the pixel noise < 20 e-. The power dissipation would be 100 pW/pixel. The combination of the large format, high speed, high sensitivity and low power dissipation can be very attractive for space reconnaissance applications.
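
    A small numerical sketch of the Hadamard-encoded readout and its √N noise advantage is shown below; it models the multiplexing purely arithmetically (toy signals, one amplifier-noise sample per read) and is not the chip's analog implementation.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    N = 8                                   # pixels per sub-array (must be a power of 2)
    H = hadamard(N)                         # +/-1 Hadamard code matrix, H @ H.T = N * I

    pixels = np.random.poisson(1000, N).astype(float)    # toy pixel signals (electrons)
    read_noise = 20.0                                    # amplifier noise per read (electrons)

    # Each of the N reads measures a +/-1 weighted sum of the group's pixels and
    # picks up one sample of amplifier noise.
    measurements = H @ pixels + np.random.normal(0.0, read_noise, N)

    # Decoding with H.T / N recovers the pixels; the amplifier noise on each
    # recovered pixel is averaged over N reads, i.e. reduced by sqrt(N) relative
    # to a single read of that pixel through the same amplifier.
    decoded = (H.T @ measurements) / N
    ```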

  4. Figure-ground segregation: A fully nonlocal approach.

    PubMed

    Dimiccoli, Mariella

    2016-09-01

    We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Optical performances of the FM JEM-X masks

    NASA Astrophysics Data System (ADS)

    Reglero, V.; Rodrigo, J.; Velasco, T.; Gasent, J. L.; Chato, R.; Alamo, J.; Suso, J.; Blay, P.; Martínez, S.; Doñate, M.; Reina, M.; Sabau, D.; Ruiz-Urien, I.; Santos, I.; Zarauz, J.; Vázquez, J.

    2001-09-01

    The JEM-X Signal Multiplexing Systems are large HURA codes "written" in a pure tungsten plate 0.5 mm thick. 24,247 hexagonal pixels (25% open) are spread over a total area of 535 mm diameter. The tungsten plate is embedded in a mechanical structure formed by a Ti ring, a pretensioning system (Cu-Be) and an exoskeleton structure that provides the required stiffness. The JEM-X masks differ from the SPI and IBIS masks in the absence of a code support structure covering the mask assembly. Open pixels are fully transparent to X-rays. The scope of this paper is to report the optical performances of the FM JEM-X masks defined by uncertainties in the pixel location (centroid) and size coming from the manufacturing and assembly processes. Stability of the code elements under thermoelastic deformations is also discussed. As a general statement, JEM-X mask optical properties are nearly one order of magnitude better than specified in 1994 during the ESA instrument selection.

  6. Stereo pair design for cameras with a fovea

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.

    1992-01-01

    We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
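
    For a standard parallel-axis stereo geometry, the way pixel size enters the depth error can be illustrated as below; this is only a worst-case quantisation bound under assumed geometry, not the error distribution derived in the paper, and the parameter names are illustrative.

    ```python
    def depth_error_bound(Z, f, B, dv, r=1.0):
        # Z  : depth of the point
        # f  : focal length, B : baseline, dv : pixel pitch of the coarser camera
        #      (all in the same length units); the other camera has pitch r*dv, 0 < r <= 1.
        # Worst-case disparity quantisation: half a pixel in each image.
        delta_d = 0.5 * dv * (1.0 + r)
        # From Z = f*B/d, a disparity error delta_d maps to a depth error ~ Z^2*delta_d/(f*B).
        return Z ** 2 * delta_d / (f * B)

    # Example: 10 mm focal length, 100 mm baseline, 10 um pixels, r = 0.5, point at 2 m.
    print(depth_error_bound(2000.0, 10.0, 100.0, 0.01, r=0.5))   # depth error in mm
    ```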

  7. Overview of the ATLAS Insertable B-Layer (IBL) Project

    NASA Astrophysics Data System (ADS)

    Kagan, M. A.

    2014-06-01

    The first upgrade for the Pixel Detector will be a new pixel layer which is currently under construction and will be installed during the first shutdown of the LHC machine, in 2013-14. The new detector, called the Insertable B-layer (IBL), will be installed between the existing Pixel Detector and a new, smaller radius beam-pipe. Two different silicon sensor technologies, planar n-in-n and 3D, will be used, connected with the new generation 130nm IBM CMOS FE-I4 readout chip via solder bump-bonds. A production quality control test bench was set up in the ATLAS inner detector assembly clean room to verify and rate the performance of the detector elements before integration around the beam-pipe. An overview of the IBL project, of the module design, the qualification for these sensor technologies, the integration quality control setups and recent results in the construction of this full scale new concept detector is discussed.

  8. Color moiré simulations in contact-type 3-D displays.

    PubMed

    Lee, B-R; Son, J-Y; Chernyshov, O O; Lee, H; Jeong, I-K

    2015-06-01

    A new method of color moiré fringe simulation in contact-type 3-D displays is introduced. The method allows simulating color moirés appearing in the displays, which cannot be approximated by a conventional cosine approximation of a line grating. The color moirés are mainly introduced by the line width of the boundary lines between the elemental optics in, and the plate thickness of, the viewing zone forming optics. This is because the lines hide some parts of the pixels under the viewing zone forming optics, and the plate thickness induces a virtual contraction of the pixels. The simulated color moiré fringes closely match those appearing in the displays.

  9. Modified Stereographic Projections of Point Groups and Diagrams of Their Irreducible Representations

    NASA Astrophysics Data System (ADS)

    Kettle, Sidney F. A.

    1999-05-01

    Modified versions of the stereographic projections of the point groups of classical crystallography are presented. They show the consequences of symmetry operations rather than emphasizing the existence of symmetry elements. These projections may be used to give pictures of the irreducible representations of point groups and several examples are given. Such pictures add physical reality to the irreducible representations and facilitate simple lecture demonstration of many important aspects and applications of group theory in chemistry.

  10. Development of arrays of position-sensitive microcalorimeters for Constellation-X

    NASA Astrophysics Data System (ADS)

    Smith, Stephen J.; Bandler, Simon R.; Brekosky, Regis P.; Brown, Ari-D.; Chervenak, James A.; Eckart, Megan E.; Figueroa-Feliciano, Enectali; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Porter, F. Scott; Sadleir, John E.

    2008-07-01

    We are developing arrays of position-sensitive transition-edge sensor (PoST) X-ray detectors for future astronomy missions such as NASA's Constellation-X. The PoST consists of multiple absorbers thermally coupled to one or more transition-edge sensors (TESs). Each absorber element has a different thermal coupling to the TES. This results in a distribution of different pulse shapes and enables position discrimination between the absorber elements. PoST's are motivated by the desire to achieve the largest possible focal plane area with the fewest number of readout channels and are ideally suited to increasing the Constellation-X focal plane area without compromising spatial sampling. Optimizing the performance of PoST's requires careful design of key parameters such as the thermal conductances between the absorbers, TES and the heat sink, as well as the absorber heat capacities. Our new generation of PoST's utilizes technology successfully developed on high resolution (~ 2.5 eV) single-pixel arrays of Mo/Au TESs, also under development for Constellation-X. This includes noise mitigation features on the TES and low resistivity electroplated absorbers. We report on the first experimental results from new one-channel, four-pixel PoST's or 'Hydras', consisting of composite Au/Bi absorbers. We have achieved full-width-at-half-maximum energy resolution of between 5-6 eV on all four Hydra pixels with an exponential decay time constant of 620 μs. Straightforward position discrimination by means of rise time is also demonstrated.

  11. Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.

    PubMed

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan

    2004-06-05

    Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and in the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. It can be seen from the results that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than the DCT and DWT domains. The results show that for spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient.
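
    A minimal spatial-domain sketch of the interleaving idea is shown below: the payload bits simply replace pixel least-significant bits, so each modified pixel changes by at most 1 part in 256. The encryption and DPCM compression steps are omitted, and the function names are illustrative assumptions.

    ```python
    import numpy as np

    def interleave_lsb(image, payload_bits):
        # image        : 2-D uint8 image; payload_bits : 1-D array of 0/1 values.
        # Each payload bit replaces the least-significant bit of one pixel, so the
        # pixel brightness changes by at most 1 part in 256.
        flat = image.astype(np.uint8).flatten()
        if payload_bits.size > flat.size:
            raise ValueError("payload larger than image capacity")
        n = payload_bits.size
        flat[:n] = (flat[:n] & 0xFE) | payload_bits.astype(np.uint8)
        return flat.reshape(image.shape)

    def extract_lsb(image, n_bits):
        # Recover the first n_bits hidden by interleave_lsb.
        return image.flatten()[:n_bits] & 1
    ```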

  12. Adaptive removal of background and white space from document images using seam categorization

    NASA Astrophysics Data System (ADS)

    Fillion, Claude; Fan, Zhigang; Monga, Vishal

    2011-03-01

    Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images are different from pictorial images in structure. They typically contain objects (text letters, pictures and graphics) separated by uniform background, which include both white paper space and other uniform color background. Pixels in uniform background regions are excellent candidates for deletion if resizing is required, as they introduce less change in document content and style, compared with deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document structural information and image quality.
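
    For reference, the naive white-space removal that the proposed adaptive method improves upon can be sketched as below: rows whose pixels are nearly uniform are simply dropped. The tolerance and function name are assumptions for illustration, not the paper's seam-categorisation method.

    ```python
    import numpy as np

    def remove_uniform_rows(gray, tol=4):
        # gray : 2-D uint8 grayscale document image.
        # Drop rows whose intensity range is within tol, i.e. rows that contain
        # only (near-)uniform background or white paper space.
        row_range = gray.max(axis=1).astype(int) - gray.min(axis=1).astype(int)
        return gray[row_range > tol]
    ```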

  13. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  14. Sidelobe-modulated optical vortices for free-space communication.

    PubMed

    Jia, P; Yang, Y; Min, C J; Fang, H; Yuan, X-C

    2013-02-15

    We propose and experimentally demonstrate a new method for free-space optical (FSO) communication, where the transmitter encodes data into a composite computer-generated hologram and the receiver decodes through a retrieved array of sidelobe-modulated optical vortices (SMOVs). By employing the SMOV generation and detection technique, the usual stringent alignment and phase-matching requirement for the detection of optical vortices is relaxed. In transmitting a gray-scale picture with 180×180 pixels, a bit error rate as low as 3.01×10⁻³ has been achieved. Due to the orbital angular momentum multiplexing and spatial paralleling, this FSO communication method possesses the ability to greatly increase the capacity of data transmission.

  15. The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, C. G.; School of Physics, University of Melbourne, Parkville VIC; CODES Centre of Excellence, University of Tasmania, Hobart TAS

    2010-04-06

    Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.

  16. The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, C.G.; Siddons, D.P.; Kirkham, R.

    2010-05-25

    Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.

  17. High-resolution photon spectroscopy with a microwave-multiplexed 4-pixel transition edge sensor array

    NASA Astrophysics Data System (ADS)

    Guss, Paul; Rabin, Michael; Croce, Mark; Hoteling, Nathan; Schwellenbach, David; Kruschwitz, Craig; Mocko, Veronika; Mukhopadhyay, Sanjoy

    2017-09-01

    We demonstrate very high-resolution photon spectroscopy with a microwave-multiplexed 4-pixel transition edge sensor (TES) array. The readout circuit consists of superconducting microwave resonators coupled to radio frequency superconducting-quantum-interference devices (RF-SQUIDs) and transduces changes in input current to changes in phase of a microwave signal. We used flux-ramp modulation to linearize the response and avoid low-frequency noise. The result is very high-resolution photon spectroscopy with a microwave-multiplexed 4-pixel transition edge sensor array. We performed and validated a small-scale demonstration and test of all the components of our concept system, which encompassed microcalorimetry, microwave multiplexing, RF-SQUIDs, and software-defined radio (SDR). We shall display data we acquired in the first simultaneous combination of all key innovations in a 4-pixel demonstration, including microcalorimetry, microwave multiplexing, RF-SQUIDs, and SDR. We present the energy spectrum of a gadolinium-153 (153Gd) source we measured using our 4-pixel TES array and the RF-SQUID multiplexer. For each pixel, one can observe the two 97.4 and 103.2 keV photopeaks. We measured the 153Gd photon source with an achieved energy resolution of 70 eV, full width half maximum (FWHM), at 100 keV, and an equivalent readout system noise of 90 pA/√Hz at the TES. This demonstration establishes a path for the readout of cryogenic x-ray and gamma ray sensor arrays with more elements and higher spectral resolving power. We believe this project has improved capabilities and substantively advanced the science useful for missions such as nuclear forensics, emergency response, and treaty verification through the explored TES developments.

  18. The fundamentals of average local variance--Part II: Sampling simple regular patterns with optical imagery.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    In this investigation, the characteristics of the average local variance (ALV) function are investigated through the acquisition of images at different spatial resolutions of constructed scenes of regular patterns of black and white squares. It is shown that the ALV plot consistently peaks at a spatial resolution at which the pixels have a size corresponding to half the distance between scene objects and that, under very specific conditions, it also peaks at a spatial resolution at which the pixel size corresponds to the whole distance between scene objects. It is argued that the peak at object distance, when present, is an expression of the Nyquist sample rate. The presence of this peak is, hence, shown to be a function of the matching between the phase of the scene pattern and the phase of the sample grid, i.e., the image. When these phases match, a clear and distinct peak is produced on the ALV plot. The fact that the peak at half the distance consistently occurs in the ALV plot is linked to the circumstance that the sampling interval (distance between pixels) and the extent of the sampling unit (size of pixels) are equal. Hence, at twice the Nyquist sampling rate, each fundamental period of the pattern is covered by four pixels; therefore, at least one pixel is always completely embedded within one pattern element, regardless of the phase between scene and sample grid. If the objects in the scene are scattered with a distance larger than their extent, the peak will be related to the object size by a factor larger than 1/2. This is suggested to be the explanation for the results presented by others in which the ALV plot is related to scene-object size by a factor of 1/2-3/4.
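
    The behaviour described above can be reproduced qualitatively with a small simulation: a checkerboard scene is block-averaged to mimic acquisition at coarser pixel sizes, and the ALV (taken here as the mean of local 3 x 3 variances) is computed at each resolution. This is a schematic sketch, not the authors' processing chain; the expected peak lies near a pixel size of half the distance between scene objects.

    ```python
    import numpy as np

    def average_local_variance(img, win=3):
        """Mean of the local variance computed in win x win moving windows."""
        h, w = img.shape
        vals = [img[i:i + win, j:j + win].var()
                for i in range(h - win + 1) for j in range(w - win + 1)]
        return float(np.mean(vals))

    def coarsen(img, factor):
        """Simulate coarser pixels by block-averaging factor x factor cells."""
        h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
        blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))

    # Synthetic scene: black/white squares of 8 cells, i.e. object spacing of 16 cells.
    yy, xx = np.mgrid[0:256, 0:256]
    scene = (((xx // 8) + (yy // 8)) % 2).astype(float)

    for factor in (1, 2, 4, 8, 16, 32):
        print(f"pixel size = {factor:2d} cells  ALV = "
              f"{average_local_variance(coarsen(scene, factor)):.4f}")
    # The ALV is expected to peak near a pixel size of 8 cells, i.e. half the
    # 16-cell distance between scene objects.
    ```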

  19. A Phenology-based Method For Identifying the Planting Fraction of Winter Wheat Using Moderate-resolution Satellite Data

    NASA Astrophysics Data System (ADS)

    Dong, J.; Liu, W.; Han, W.; Lei, T.; Xia, J.; Yuan, W.

    2017-12-01

    Winter wheat is a staple food crop for most of the world's population, and the area and spatial distribution of winter wheat are key elements in estimating crop production and ensuring food security. However, winter wheat planting areas contain substantial spatial heterogeneity with mixed pixels for coarse- and moderate-resolution satellite data, leading to significant errors in crop acreage estimation. This study has developed a phenology-based approach using moderate-resolution satellite data to estimate sub-pixel planting fractions of winter wheat. Based on unmanned aerial vehicle (UAV) observations, the unique characteristics of winter wheat with high vegetation index values at the heading stage (May) and low values at the harvest stage (June) were investigated. The differences in vegetation index between heading and harvest stages increased with the planting fraction of winter wheat, and therefore the planting fractions were estimated by comparing the NDVI differences of a given pixel with those of predetermined pure winter wheat and non-winter wheat pixels. This approach was evaluated using aerial images and agricultural statistical data in an intensive agricultural region, Shandong Province in North China. The method explained 60% and 85% of the spatial variation in county- and municipal-level statistical data, respectively. More importantly, the predetermined pure winter wheat and non-winter wheat pixels can be automatically identified using MODIS data according to their NDVI differences, which strengthens the potential to use this method at regional and global scales without any field observations as references.
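
    A minimal sketch of the sub-pixel unmixing step implied above, assuming a simple linear scaling of the heading-minus-harvest NDVI difference between a predetermined non-wheat pixel (fraction 0) and a pure winter-wheat pixel (fraction 1); the values and function names are illustrative, not those of the study.

    ```python
    import numpy as np

    def wheat_fraction(dndvi_pixel, dndvi_pure_wheat, dndvi_non_wheat):
        """
        Sub-pixel planting fraction of winter wheat from the difference in NDVI
        between the heading stage (May) and the harvest stage (June), scaled
        linearly between a predetermined non-wheat pixel (fraction 0) and a pure
        winter-wheat pixel (fraction 1) and clipped to [0, 1].
        """
        frac = (dndvi_pixel - dndvi_non_wheat) / (dndvi_pure_wheat - dndvi_non_wheat)
        return np.clip(frac, 0.0, 1.0)

    # Illustrative NDVI(heading) - NDVI(harvest) values, not data from the study.
    dndvi_map = np.array([[0.05, 0.20, 0.35],
                          [0.40, 0.10, 0.28]])
    print(wheat_fraction(dndvi_map, dndvi_pure_wheat=0.40, dndvi_non_wheat=0.05))
    ```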

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ting Yuan-Sen; Conroy, Charlie; Cargile, Phillip

    Understanding the evolution of the Milky Way calls for the precise abundance determination of many elements in many stars. A common perception is that deriving more than a few elemental abundances ([Fe/H], [α/Fe], perhaps [C/H], [N/H]) requires medium-to-high spectral resolution, R ≳ 10,000, mostly to overcome the effects of line blending. In a recent work, we presented an efficient and practical way to model the full stellar spectrum, even when fitting a large number of stellar labels simultaneously. In this paper, we quantify to what precision the abundances of many different elements can be recovered, as a function of spectroscopic resolution and wavelength range. In the limit of perfect spectral models and spectral normalization, we show that the precision of elemental abundances is nearly independent of resolution, for a fixed exposure time and number of detector pixels; low-resolution spectra simply afford much higher S/N per pixel and generally larger wavelength range in a single setting. We also show that estimates of most stellar labels are not strongly correlated with one another once R ≳ 1000. Modest errors in the line-spread function, as well as small radial velocity errors, do not affect these conclusions, and data-driven models indicate that spectral (continuum) normalization can be achieved well enough in practice. These results, to be confirmed with an analysis of observed low-resolution data, open up new possibilities for the design of large spectroscopic stellar surveys and for the reanalysis of archival low-resolution data sets.

  1. Miniature infrared hyperspectral imaging sensor for airborne applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-05-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on a miniature unmanned aerial vehicle. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics are made up of an area array of diffractive optical elements, where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our opto-mechanical design approach, which results in an infrared hyperspectral imaging system that is small enough for a payload on a mini-UAV or commercial quadcopter. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane array and the number of lenslets in the array determine the spatial resolution. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.
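
    The spatial-resolution bookkeeping quoted above follows directly from the lenslet and focal-plane formats; a small sketch of that arithmetic (illustrative only):

    ```python
    def lenslet_layout(fpa_pixels, lenslets_per_side):
        """N x N lenslets on a P x P focal plane array: N*N spectral images per
        frame, each sampled at (P // N) x (P // N) pixels."""
        return lenslets_per_side ** 2, fpa_pixels // lenslets_per_side

    for fpa, n in ((512, 2), (1024, 4)):
        bands, res = lenslet_layout(fpa, n)
        print(f"{n} x {n} lenslets on a {fpa} x {fpa} FPA -> "
              f"{bands} spectral images of {res} x {res} pixels per frame")
    # 2 x 2 on 512 x 512 -> 4 images of 256 x 256;
    # 4 x 4 on 1024 x 1024 -> 16 images of 256 x 256.
    ```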

  2. Infrared hyperspectral imaging miniaturized for UAV applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-02-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on a miniature unmanned aerial vehicle. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics are made up of an area array of diffractive optical elements, where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our opto-mechanical design approach, which results in an infrared hyperspectral imaging system that is small enough for a payload on a mini-UAV or commercial quadcopter. An example is also given of how this technology can easily be used to quantify a hydrocarbon gas leak's volume and mass flow rates. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane array and the number of lenslets in the array determine the spatial resolution. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.

  3. A hyperspectral image projector for hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.

    2007-04-01

    We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP*) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily-programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.
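
    To make the notion of spectrally mixed pixels concrete, the following minimal numpy sketch builds a scene in which each spatial pixel is a weighted combination of a few endmember basis spectra, which is the kind of target HIP is designed to project. The spectra, abundances, and grid size here are arbitrary placeholders, not calibrated HIP data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_bands = 64                      # spectral samples per basis spectrum
    endmembers = np.stack([
        np.linspace(0.2, 0.8, n_bands),                          # a "vegetation"-like ramp
        np.exp(-0.5 * ((np.arange(n_bands) - 20) / 5.0) ** 2),   # a peaked "mineral"-like spectrum
        np.full(n_bands, 0.3),                                   # a flat "water"-like spectrum
    ])                                                            # shape (3, n_bands)

    # Abundance maps: one mixing weight per endmember at each spatial pixel.
    # (The HIP spatial format quoted above is 1024 x 768; a small grid is used here.)
    height, width = 96, 128
    abundances = rng.random((endmembers.shape[0], height, width))
    abundances /= abundances.sum(axis=0, keepdims=True)           # weights sum to 1 per pixel

    # Spectrally mixed scene: every spatial pixel receives its own programmable spectrum.
    scene = np.einsum('eb,ehw->hwb', endmembers, abundances)      # shape (height, width, n_bands)
    print(scene.shape)
    ```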

  4. Methods for Monitoring the Detection of Multi-Temporal Land Use Change Through the Classification of Urban Areas

    NASA Astrophysics Data System (ADS)

    Alhaddad, B. I.; Burns, M. C.; Roca, J.

    2011-08-01

    Urban areas are complicated due to the mix of man-made and natural features. A higher level of structural information plays an important role in land cover/use classification of urban areas. Additional spatial indicators have to be extracted through structural analysis in order to understand and identify spatial patterns or the spatial organization of features, especially for man-made features. It is very difficult to extract such spatial patterns using classification approaches alone, and clusters of urban patterns that are integral parts of other uses may be difficult to identify. Considerable public resources have been directed towards developing a standardized classification system and providing as much compatibility as possible to ensure the widespread use of categorized data obtained from remote sensor sources. In this paper, different methods are applied to understand change in land use by monitoring change in urban areas, since it is hard to apply such methods directly to classification results containing large numbers of elements, dust and scratches (Roca and Alhaddad, 2005). This paper focuses on a methodology based on the relations between urban elements and on how to join these elements into zones or clusters with common behaviours such as form, pattern and size. The main objective is to convert the urban class category into various structural densities depending on the conjunction of pixels and the shortest distance between them; Delaunay triangulation has been widely used in spatial analysis and spatial modelling. To identify these different zones, a spatial density-based clustering technique was adopted. In highly urbanized zones the spatial density of pixels is high, while in sparse areas the density of points is much lower. Once the groups of pixels are identified, calculating the boundaries of the areas containing each group of pixels defines new regions that indicate their different contents, such as high- or low-density urban areas. Multi-temporal datasets from 1986, 1995 and 2004 were used to derive urban region centroids as references in this study, allowing us to follow urban movement, increase and decrease over time. A kernel density function is used to calculate urban magnitude, and a Voronoi algorithm is proposed for deriving explicit boundaries between object units. To test the approach, we selected a site in a suburban area of the Barcelona Municipality, Spain.
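
    A minimal density-based clustering sketch in the spirit of the approach described above: pixels in high-density neighbourhoods are grouped into zones, while isolated pixels remain unlabelled. The parameters and implementation are illustrative stand-ins, not the authors' code.

    ```python
    import numpy as np
    from collections import deque

    def density_clusters(points, radius=2.0, min_neighbors=4):
        """Group 2-D pixel coordinates into zones wherever the local density is
        high: a point is 'dense' if at least min_neighbors other points lie within
        radius; dense points connected through their neighbours form one zone and
        sparse, unreached points keep the label -1."""
        pts = np.asarray(points, dtype=float)
        n = len(pts)
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        neighbors = [np.flatnonzero(d2[i] <= radius ** 2) for i in range(n)]
        dense = np.array([len(nb) - 1 >= min_neighbors for nb in neighbors])
        labels = -np.ones(n, dtype=int)
        zone = 0
        for seed in range(n):
            if not dense[seed] or labels[seed] != -1:
                continue
            labels[seed] = zone
            queue = deque([seed])
            while queue:
                i = queue.popleft()
                for j in neighbors[i]:
                    if labels[j] == -1:
                        labels[j] = zone
                        if dense[j]:
                            queue.append(j)
            zone += 1
        return labels

    # Two compact groups of "urban" pixel coordinates plus scattered noise.
    rng = np.random.default_rng(1)
    pixels = np.vstack([rng.normal((10, 10), 1.0, (30, 2)),
                        rng.normal((30, 25), 1.0, (30, 2)),
                        rng.uniform(0, 40, (10, 2))])
    print(np.unique(density_clusters(pixels)))   # e.g. [-1  0  1]
    ```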

  5. An investigation of signal performance enhancements achieved through innovative pixel design across several generations of indirect detection, active matrix, flat-panel arrays

    PubMed Central

    Antonuk, Larry E.; Zhao, Qihua; El-Mohri, Youcef; Du, Hong; Wang, Yi; Street, Robert A.; Ho, Jackson; Weisfield, Richard; Yao, William

    2009-01-01

    Active matrix flat-panel imager (AMFPI) technology is being employed for an increasing variety of imaging applications. An important element in the adoption of this technology has been significant ongoing improvements in optical signal collection achieved through innovations in indirect detection array pixel design. Such improvements have a particularly beneficial effect on performance in applications involving low exposures and∕or high spatial frequencies, where detective quantum efficiency is strongly reduced due to the relatively high level of additive electronic noise compared to signal levels of AMFPI devices. In this article, an examination of various signal properties, as determined through measurements and calculations related to novel array designs, is reported in the context of the evolution of AMFPI pixel design. For these studies, dark, optical, and radiation signal measurements were performed on prototype imagers incorporating a variety of increasingly sophisticated array designs, with pixel pitches ranging from 75 to 127 μm. For each design, detailed measurements of fundamental pixel-level properties conducted under radiographic and fluoroscopic operating conditions are reported and the results are compared. A series of 127 μm pitch arrays employing discrete photodiodes culminated in a novel design providing an optical fill factor of ∼80% (thereby assuring improved x-ray sensitivity), and demonstrating low dark current, very low charge trapping and charge release, and a large range of linear signal response. In two of the designs having 75 and 90 μm pitches, a novel continuous photodiode structure was found to provide fill factors that approach the theoretical maximum of 100%. Both sets of novel designs achieved large fill factors by employing architectures in which some, or all of the photodiode structure was elevated above the plane of the pixel addressing transistor. Generally, enhancement of the fill factor in either discrete or continuous photodiode arrays was observed to result in no degradation in MTF due to charge sharing between pixels. While the continuous designs exhibited relatively high levels of charge trapping and release, as well as shorter ranges of linearity, it is possible that these behaviors can be addressed through further refinements to pixel design. Both the continuous and the most recent discrete photodiode designs accommodate more sophisticated pixel circuitry than is present on conventional AMFPIs – such as a pixel clamp circuit, which is demonstrated to limit signal saturation under conditions corresponding to high exposures. It is anticipated that photodiode structures such as the ones reported in this study will enable the development of even more complex pixel circuitry, such as pixel-level amplifiers, that will lead to further significant improvements in imager performance. PMID:19673228

  6. Mars Global Surveyor Approach Image

    NASA Image and Video Library

    1997-07-04

    This image is the first view of Mars taken by the Mars Global Surveyor Orbiter Camera (MOC). It was acquired the afternoon of July 2, 1997 when the MGS spacecraft was 17.2 million kilometers (10.7 million miles) and 72 days from encounter. At this distance, the MOC's resolution is about 64 km per picture element, and the 6800 km (4200 mile) diameter planet is 105 pixels across. The observation was designed to show the Mars Pathfinder landing site at 19.4 N, 33.1 W approximately 48 hours prior to landing. The image shows the north polar cap of Mars at the top of the image, the dark feature Acidalia Planitia in the center with the brighter Chryse plain immediately beneath it, and the highland areas along the Martian equator including the canyons of the Valles Marineris (which are bright in this image owing to atmospheric dust). The dark features Terra Meridiani and Terra Sabaea can be seen at the 4 o'clock position, and the south polar hood (atmospheric fog and hazes) can be seen at the bottom of the image. Launched on November 7, 1996, Mars Global Surveyor will enter Mars orbit on Thursday, September 11 shortly after 6:00 PM PDT. After Mars Orbit Insertion, the spacecraft will use atmospheric drag to reduce the size of its orbit, achieving a circular orbit only 400 km (248 mi) above the surface in early March 1998, when mapping operations will begin. http://photojournal.jpl.nasa.gov/catalog/PIA00606

  7. Evaluation of position-estimation methods applied to CZT-based photon-counting detectors for dedicated breast CT

    PubMed Central

    Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J.

    2015-01-01

    Abstract. Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100  μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200  μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194  μm, with 2×2 binning during the acquisition giving an effective pixel size of 388  μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors. PMID:26158095
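
    One simple way to picture the quadrant-identification idea described above (not necessarily the authors' ASIC algorithm): compare the charge shared with the left/right columns and top/bottom rows of the 3 x 3 neighbourhood around the pixel with the largest signal. The event values below are illustrative.

    ```python
    import numpy as np

    def quadrant_from_charge(neigh):
        """Assign an interaction to a quadrant of the central pixel from the charge
        shared with its neighbours.  `neigh` is the 3 x 3 array of collected charge
        centred on the pixel with the largest signal: more charge spilling into the
        right (left) column suggests the right (left) half, and likewise for rows."""
        neigh = np.asarray(neigh, dtype=float)
        horiz = "right" if neigh[:, 2].sum() >= neigh[:, 0].sum() else "left"
        vert = "bottom" if neigh[2, :].sum() >= neigh[0, :].sum() else "top"
        return vert, horiz

    # Illustrative event: most charge on the central pixel, a little shared towards
    # the upper-right neighbours.
    event = [[20, 60, 40],
             [10, 800, 90],
             [5, 15, 10]]
    print(quadrant_from_charge(event))   # ('top', 'right')
    ```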

  8. Evaluation of position-estimation methods applied to CZT-based photon-counting detectors for dedicated breast CT.

    PubMed

    Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J

    2015-04-01

    Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100 μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200 μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194 μm, with 2×2 binning during the acquisition giving an effective pixel size of 388 μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors.

  9. The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity

    NASA Astrophysics Data System (ADS)

    Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.

    2009-08-01

    The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. DEA and Onboard Processing. The DEA incorporates the circuit elements required for data processing, compression, and buffering. It also includes all power conversion and regulation capabilities for both the DEA and the camera head. The DEA has an 8 GB non-volatile flash memory plus 128 MB volatile storage. Images can be commanded as full-frame or sub-frame and the camera has autofocus and autoexposure capabilities. MAHLI can also acquire 720p, ~7 Hz high definition video. Onboard processing includes options for Bayer pattern filter interpolation, JPEG-based compression, and focus stack merging (z-stacking). Malin Space Science Systems (MSSS) built and will operate the MAHLI. Alliance Spacesystems, LLC, designed and built the lens mechanical assembly. MAHLI shares common electronics, detector, and software designs with the MSL Mars Descent Imager (MARDI) and the 2 MSL Mast Cameras (Mastcam). Pre-launch images of geologic materials imaged by MAHLI are online at: http://www.msss.com/msl/mahli/prelaunch_images/.
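
    As a quick check of what the quoted detector format and pixel scale imply, a minimal calculation of the full-frame field of view at the closest focus distance, derived only from the numbers given above:

    ```python
    # Field of view implied by the figures quoted above: a 1600 x 1200 pixel
    # detector at the 13.9 micrometre/pixel scale of the 20.4 mm working distance.
    pixels_x, pixels_y = 1600, 1200
    scale_um = 13.9
    print(f"full-frame field of view at closest focus: "
          f"{pixels_x * scale_um / 1000:.1f} mm x {pixels_y * scale_um / 1000:.1f} mm")
    # about 22.2 mm x 16.7 mm
    ```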

  10. Why do we laugh at misfortunes? An electrophysiological exploration of comic situation processing.

    PubMed

    Manfredi, Mirella; Adorni, Roberta; Proverbio, Alice Mado

    2014-08-01

    The goal of the present study was to shed some light on a particular kind of humour, called slapstick, by measuring brain bioelectrical activity during the perception of funny vs. non-funny pictures involving misfortunate circumstances. According to our hypothesis, the element that mostly provides the comic feature in a misfortunate situation is the facial expression of the victims: the observer's reaction will usually be laughter only if the victims show a funny bewildered face and not a painful or angry expression. Several coloured photographs depicting people involved in misfortunate situations were presented to 30 healthy Italian volunteers while their EEG was recorded. Three different situations were considered: people showing a painful or an angry expression (Affective); people showing a bewildered expression and, thus, a comic look (Comic); people engaged in similar misfortunate situations but with no face visible (No Face). Results showed that the mean amplitude of both the posterior N170 and anterior N220 components was much larger in response to comic pictures than to the other stimuli. This early response could be considered the first identification of a comic element and evidence of the compelling and automatic response that usually characterizes people's amused reaction during a misfortune. In addition, we observed a larger P300 amplitude in response to comic than affective pictures, probably reflecting a more conscious processing of the comic element. Finally, no face pictures elicited an anteriorly distributed N400, which might reflect the effort to comprehend the nature of the situation displayed without any affective facial information, and a late positivity, possibly indexing a re-analysis of the unintelligible misfortunate situation (comic or unhappy) depicted in the No Face stimuli. These data support the hypothesis that the facial expression of the victims acts as a specific trigger for the amused feeling that observers usually experience when someone falls down. Overall, the data indicate the existence of a neural circuit that is capable of recognizing and appreciating the comic element of a misfortunate situation in a group of young adults. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. A beachhead on the island of stability

    DOE PAGES

    Oganessian, Yuri Ts.; Rykaczewski, Krzysztof P.

    2015-01-01

    Remember learning the periodic table of elements in high school? Our chemistry teachers explained that the chemical properties of elements come from the electronic shell structure of atoms. Furthermore, our physics teachers enriched that picture of the atomic world by introducing us to isotopes and the Segrè chart of nuclides, which arranges them by proton number Z and neutron number N.

  12. The Formal Elements Art Therapy Scale: A Measurement System for Global Variables in Art

    ERIC Educational Resources Information Center

    Gantt, Linda M.

    2009-01-01

    The Formal Elements Art Therapy Scale (FEATS) is a measurement system for applying numbers to global variables in two-dimensional art (drawing and painting). While it was originally developed for use with the single-picture assessment ("Draw a person picking an apple from a tree" [PPAT]), researchers can also apply many of the 14 scales of the…

  13. Natural pixel decomposition for computational tomographic reconstruction from interferometric projection: algorithms and comparison

    NASA Astrophysics Data System (ADS)

    Cha, Don J.; Cha, Soyoung S.

    1995-09-01

    A computational tomographic technique, termed the variable grid method (VGM), has been developed for improving interferometric reconstruction of flow fields under ill-posed data conditions of restricted scanning and incomplete projection. The technique is based on natural pixel decomposition, that is, division of a field into variable grid elements. The performances of two algorithms, that is, original and revised versions, are compared to investigate the effects of the data redundancy criteria and seed element forming schemes. Tests of the VGMs are conducted through computer simulation of experiments and reconstruction of fields with a limited view angle of 90 degrees. The temperature fields at two horizontal sections of a thermal plume of two interacting isothermal cubes, produced by a finite numerical code, are analyzed as test fields. The computer simulation demonstrates the superiority of the revised VGM to either the conventional fixed grid method or the original VGM. Both the maximum and average reconstruction errors are reduced appreciably. The reconstruction shows substantial improvement in the regions with dense scanning by probing rays. These regions are usually of interest in engineering applications.

  14. Mars Eolian Geology at Airphoto Scales: The Large Wind Streaks of Western Arabia Terra

    NASA Technical Reports Server (NTRS)

    Edgett, Kenneth S.

    2001-01-01

    More than 27,000 pictures at aerial photograph scales (1.5-12 m/pixel) have been acquired by the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) since September 1997. The pictures are valuable for testing hypotheses about geologic history and processes of Mars. Of particular interest are eolian features connected to surface albedo patterns. This work is focused on low-albedo wind streaks, some over 100 km long, in western Arabia Terra. Each streak is widest where it originates at an impact crater (typically 25-150 km diameter). The streaks taper downwind. Within the associated craters there is a lower-albedo surface that, in nearly all observed cases, includes barchan dunes indicative of transport in the same direction as the wind streaks. Upwind of the dunes there is usually an outcrop of layered material that might have served as a source for dune sand. MOC images show that the west Arabia streaks consist of a smooth-surfaced, multiple-meters-thick, mantle (smooth at 1.5 m/pixel) that appears to be superposed on local surfaces. No dunes are present, indicating that down-streak transport of sediment via saltation and traction have not occurred. Two models might explain the observed properties: (1) the streaks consist of dark silt- and clay-sized grains deflated from the adjacent crater interiors and deposited from suspension or (2) they are remnants (protected in the lee of impact crater rims) of a formerly much larger, regional covering of low albedo, smooth-surfaced mantle. The latter hypothesis is based on observation of low albedo mantled surfaces occurring south of west Arabia in Terra Meridiani. For reasons yet unknown, a large fraction of the martian equatorial regions are covered by low albedo, mesa-forming material that lies unconformably atop eroded layered and cratered terrain. Both hypotheses are being explored via continued selective targeting of new MOC images as well as analyses of the new data.

  15. Development of a versatile XRF scanner for the elemental imaging of paintworks

    NASA Astrophysics Data System (ADS)

    Ravaud, E.; Pichon, L.; Laval, E.; Gonzalez, V.; Eveno, M.; Calligaro, T.

    2016-01-01

    Scanning XRF is a powerful elemental imaging technique introduced at the synchrotron that has recently been transposed to laboratory. The growing interest in this technique stems from its ability to collect images reflecting pigment distribution within large areas on artworks by means of their elemental signature. In that sense, scanning XRF appears highly complementary to standard imaging techniques (Visible, UV, IR photography and X-ray radiography). The versatile XRF scanner presented here has been designed and built at the C2RMF in response to specific constraints: transportability, cost-effectiveness and ability to scan large areas within a single working day. The instrument is based on a standard X-ray generator with sub-millimetre collimated beam and a SDD-based spectrometer to collected X-ray spectra. The instrument head is scanned in front of the painting by means of motorised movements to cover an area up to 300 × 300 mm2 with a resolution of 0.5 mm (600 × 600 pixels). The 15-kg head is mounted on a stable photo stand for rapid positioning on paintworks and maintains a free side-access for safety; it can also be attached to a lighter tripod for field measurements. Alignment is achieved with a laser pointer and a micro-camera. With a scanning speed of 5 mm/s and 0.1 s/point, elemental maps are collected in 10 h, i.e. a working day. The X-ray spectra of all pixels are rapidly processed using an open source program to derive elemental maps. To illustrate the capabilities of this instrument, this contribution presents the results obtained on the Belle Ferronnière painted by Leonardo da Vinci (1452-1519) and conserved in the Musée du Louvre, prior to its restoration at the C2RMF.
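
    A back-of-the-envelope check of the acquisition-time figure quoted above, using only the stated scan area, step size and dwell time (stage overhead neglected):

    ```python
    # Scan-time estimate from the parameters quoted above: 300 x 300 mm2 at 0.5 mm
    # steps gives 600 x 600 pixels; at 0.1 s per point the map takes about 10 h.
    area_mm, step_mm, dwell_s = 300.0, 0.5, 0.1
    pixels_per_side = int(area_mm / step_mm)
    total_h = pixels_per_side ** 2 * dwell_s / 3600.0
    print(f"{pixels_per_side} x {pixels_per_side} pixels, ~{total_h:.0f} h of acquisition")
    ```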

  16. Dispersion of the solar magnetic flux in the undisturbed photosphere as derived from SDO/HMI data

    NASA Astrophysics Data System (ADS)

    Abramenko, Valentina I.

    2017-11-01

    To explore the magnetic flux dispersion in the undisturbed solar photosphere, magnetograms acquired by the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) were utilized. Two areas, a coronal hole (CH) area and an area of super-granulation (SG) pattern, were analysed. We explored the displacement and separation spectra and the behaviour of the turbulent diffusion coefficient, K. The displacement and separation spectra are very similar to each other. Small magnetic elements (of size 3-100 squared pixels and a detection threshold of 20 Mx cm-2) in both CH and SG areas disperse in the same way and are more mobile than the large elements (of size 20-400 squared pixels and a detection threshold of 130 Mx cm-2). The regime of super-diffusivity is found for small elements (γ ≈ 1.3 and K growing from ~100 to ~300 km2 s-1). Large elements in the CH area are scanty and show super-diffusion with γ ≈ 1.2 and K = (62-96) km2 s-1 on a rather narrow range of 500-2200 km. Large elements in the SG area demonstrate two ranges of linearity and two diffusivity regimes: sub-diffusivity on scales 900-2500 km with γ = 0.88 and K decreasing from ~130 to ~100 km2 s-1, and super-diffusivity on scales 2500-4800 km with γ ≈ 1.3 and K growing from ~140 to ~200 km2 s-1. A comparison of our results with previously published results shows that there is a tendency toward saturation of the diffusion coefficient on large scales, i.e. the turbulent regime of super-diffusivity is gradually replaced by normal diffusion.
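
    A minimal sketch of how a spectral index γ and a turbulent diffusion coefficient can be extracted from a displacement spectrum ⟨(Δl)²⟩ ∝ τ^γ. The 2-D convention K = (1/4) d⟨(Δl)²⟩/dτ is assumed here and may differ in detail from the paper's definition; the data are synthetic.

    ```python
    import numpy as np

    def diffusion_from_displacements(tau, msd):
        """Fit <(dl)^2> = c * tau**gamma on a log-log scale and return gamma and
        K(tau) = 0.25 * d<(dl)^2>/dtau (a common 2-D convention)."""
        tau = np.asarray(tau, dtype=float)
        msd = np.asarray(msd, dtype=float)
        gamma, log_c = np.polyfit(np.log(tau), np.log(msd), 1)
        K = 0.25 * np.exp(log_c) * gamma * tau ** (gamma - 1.0)
        return gamma, K

    # Synthetic super-diffusive example (units of km^2 and s), gamma = 1.3.
    tau = np.linspace(200.0, 2000.0, 20)
    msd = 50.0 * tau ** 1.3
    gamma, K = diffusion_from_displacements(tau, msd)
    print(f"gamma = {gamma:.2f}, K from {K.min():.0f} to {K.max():.0f} km2/s")
    ```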

  17. Two-dimensional photon detector

    NASA Technical Reports Server (NTRS)

    Timothy, J. G.; Bybee, R. L.

    1976-01-01

    Device incorporates set of cascaded microchannel-array plates in proximity focus with two sets of mutually-orthogonal linear anodes. Technique allows data from N x M picture elements to be recorded with only N + M amplifiers.
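
    A small sketch of the bookkeeping behind the N + M readout described above: a coincident pair of anode signals, one from each orthogonal set, addresses one of N x M picture elements. The event simulation below is illustrative, not the flight electronics.

    ```python
    import numpy as np

    N, M = 16, 16                          # two mutually orthogonal sets of linear anodes
    counts = np.zeros((N, M), dtype=int)   # N x M picture elements, N + M amplifiers

    def record_event(x_amplitudes, y_amplitudes):
        """A photon fires one anode in each set; the coincident pair of peak
        channels addresses a single picture element."""
        i = int(np.argmax(x_amplitudes))
        j = int(np.argmax(y_amplitudes))
        counts[i, j] += 1

    # Illustrative event: signal peaking on x-anode 3 and y-anode 12.
    rng = np.random.default_rng(0)
    x_sig = rng.normal(0.0, 0.1, N)
    y_sig = rng.normal(0.0, 0.1, M)
    x_sig[3], y_sig[12] = 5.0, 5.0
    record_event(x_sig, y_sig)
    print(counts.sum(), counts[3, 12])     # 1 1
    ```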

  18. Nuclear magnetic resonance imaging at microscopic resolution

    NASA Astrophysics Data System (ADS)

    Johnson, G. Allan; Thompson, Morrow B.; Gewalt, Sally L.; Hayes, Cecil E.

    Resolution limits in NMR imaging are imposed by bandwidth considerations, available magnetic gradients for spatial encoding, and signal to noise. This work reports modification of a clinical NMR imaging device with picture elements of 500 × 500 × 5000 μm to yield picture elements of 50 × 50 × 1000 μm. Resolution has been increased by using smaller gradient coils permitting gradient fields >0.4 mT/cm. Significant improvements in signal to noise are achieved with smaller rf coils, close attention to choice of bandwidth, and signal averaging. These improvements permit visualization of anatomical structures in the rat brain with an effective diameter of 1 cm with the same definition as is seen in human imaging. The techniques and instrumentation should open a number of basic sciences such as embryology, plant sciences, and teratology to the potentials of NMR imaging.

  19. A Specially Constructed Metallograph for Use at Elevated Temperatures

    NASA Technical Reports Server (NTRS)

    Jenkins, Joe E; Buchele, Donald R; Long, Roger A

    1951-01-01

    A metallographic microscope was developed with provision for heating a specimen to 1800 F in protective atmospheres, that is, vacuum or gas. A special objective was constructed of reflecting elements with an unusually long working distance (7/16 in.) and a high numerical aperture (0.5). Changes in specimen microstructure were observed and recorded on 35-millimeter motion-picture film. The resulting pictures were projected as motion pictures, and individual frames were cut and enlargements made for close observation. Structural changes upon heating an annealed 0.35-percent carbon steel and a 5-percent tin phosphor bronze specimen were observed and recorded. Newly formed microstructures were revealed by selective vacuum etching and specimen relief resulting from recrystallization and varying grain orientation.

  20. Characterization of an in-vacuum PILATUS 1M detector.

    PubMed

    Wernecke, Jan; Gollwitzer, Christian; Müller, Peter; Krumrey, Michael

    2014-05-01

    A dedicated in-vacuum X-ray detector based on the hybrid pixel PILATUS 1M detector has been installed at the four-crystal monochromator beamline of the PTB at the electron storage ring BESSY II in Berlin, Germany. Owing to its windowless operation, the detector can be used in the entire photon energy range of the beamline from 10 keV down to 1.75 keV for small-angle X-ray scattering (SAXS) experiments and anomalous SAXS at absorption edges of light elements. The radiometric and geometric properties of the detector such as quantum efficiency, pixel pitch and module alignment have been determined with low uncertainties. The first grazing-incidence SAXS results demonstrate the superior resolution in momentum transfer achievable at low photon energies.

  1. X-ray microanalytical surveys of minor element concentrations in unsectioned biological samples

    NASA Astrophysics Data System (ADS)

    Schofield, R. M. S.; Lefevre, H. W.; Overley, J. C.; Macdonald, J. D.

    1988-03-01

    Approximate concentration maps of small unsectioned biological samples are made using the pixel by pixel ratio of PIXE images to areal density images. Areal density images are derived from scanning transmission ion microscopy (STIM) proton energy-loss images. Corrections for X-ray production cross section variations, X-ray attenuation, and depth averaging are approximated or ignored. Estimates of the magnitude of the resulting error are made. Approximate calcium concentrations within the head of a fruit fly are reported. Concentrations in the retinula cell region of the eye average about 1 mg/g dry weight. Concentrations of zinc in the mandible of several ant species average about 40 mg/g. Zinc concentrations in the stomachs of these ants are at least 1 mg/g.
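
    A minimal sketch of the pixel-by-pixel ratio described above: a PIXE elemental image divided by the STIM-derived areal density image, with near-zero densities masked. The corrections noted above (cross-section variation, X-ray attenuation, depth averaging) are omitted, as in the approximation, and the arrays are illustrative.

    ```python
    import numpy as np

    def concentration_map(pixe_elemental, stim_areal_density, min_density=1e-6):
        """
        Approximate elemental concentration as the pixel-by-pixel ratio of a PIXE
        elemental image (areal mass of the element per pixel) to a STIM-derived
        areal density image (total areal mass per pixel), ignoring cross-section,
        attenuation and depth-averaging corrections.  Pixels with negligible areal
        density are masked to avoid division by zero.
        """
        pixe = np.asarray(pixe_elemental, dtype=float)
        rho = np.asarray(stim_areal_density, dtype=float)
        conc = np.full(pixe.shape, np.nan)
        ok = rho > min_density
        conc[ok] = pixe[ok] / rho[ok]
        return conc   # mass fraction (e.g. mg of element per g dry weight) when units match

    # Tiny illustrative images (arbitrary values, arbitrary units).
    pixe = np.array([[0.02, 0.00],
                     [0.15, 0.04]])
    rho = np.array([[10.0, 0.0],
                    [12.0, 8.0]])
    print(concentration_map(pixe, rho))
    ```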

  2. A finite element beam propagation method for simulation of liquid crystal devices.

    PubMed

    Vanbrabant, Pieter J M; Beeckman, Jeroen; Neyts, Kristiaan; James, Richard; Fernandez, F Anibal

    2009-06-22

    An efficient full-vectorial finite element beam propagation method is presented that uses higher order vector elements to calculate the wide angle propagation of an optical field through inhomogeneous, anisotropic optical materials such as liquid crystals. The full dielectric permittivity tensor is considered in solving Maxwell's equations. The wide applicability of the method is illustrated with different examples: the propagation of a laser beam in a uniaxial medium, the tunability of a directional coupler based on liquid crystals and the near-field diffraction of a plane wave in a structure containing micrometer scale variations in the transverse refractive index, similar to the pixels of a spatial light modulator.

  3. Elements of Sexism in a Selected Group of Picture Books Recommended for Kindergarten Use.

    ERIC Educational Resources Information Center

    Easley, Ann

    A list of 100 books recommended for kindergarten use were reviewed and evaluated for elements of sexism and sex role stereotyping. Each book was carefully scanned and notations made on survey sheets. The story was checked to see if it was a boy or girl centered story, had an adult male or adult female character, male or female animal or inanimate…

  4. Reconsolidation from negative emotional pictures: is successful retrieval required?

    PubMed

    Finn, Bridgid; Roediger, Henry L; Rosenzweig, Emily

    2012-10-01

    Finn and Roediger (Psychological science 22:781-786, 2011) found that when a negative emotional picture was presented immediately after a successful retrieval, later test performance was enhanced as compared to when a neutral picture or a blank screen had been shown. This finding implicates the period immediately following retrieval as playing an important role in determining later retention via reconsolidation. In two new experiments, we investigated whether successful retrieval was required to show the enhancing effect of negative emotion on later recall. In both experiments, the participants studied Swahili-English vocabulary pairs, took an intervening cued-recall test, and were given a final cued-recall test on all items. In Experiment 1, we tested a distinctiveness explanation of the effect. The results showed that neither presentation of a negative picture just prior to successful retrieval nor presentation of a positive picture after successful retrieval produced the enhancing effect that was seen when negative pictures were presented after successful retrieval. In Experiment 2, we tested whether the enhancing effect would occur when a negative picture followed an unsuccessful retrieval attempt with feedback, and a larger enhancement effect occurred after errors of commission than after errors of omission. These results indicate that effort in retrieving is critical to the enhancing effect shown with negative pictures; whether the target is produced by the participant or given by an external source following a commission error does not matter. We interpret these results as support for semantic enrichment as a key element in producing the enhancing effect of negative pictures that are presented after a retrieval attempt.

  5. Fabrication of a Kilopixel Array of Superconducting Microcalorimeters with Microstripline Wiring

    NASA Technical Reports Server (NTRS)

    Chervenak, James

    2012-01-01

    A document describes the fabrication of a two-dimensional microcalorimeter array that uses microstrip wiring and integrated heat sinking to enable use of high-performance pixel designs at kilopixel scales (32 X 32). Each pixel is the high-resolution design employed in small-array test devices, which consist of a Mo/Au TES (transition edge sensor) on a silicon nitride membrane and an electroplated Bi/Au absorber. The pixel pitch within the array is 300 microns, where absorbers 290 microns on a side are cantilevered over a silicon support grid with 100-micron-wide beams. The high-density wiring and heat sinking are both carried by the silicon beams to the edge of the array. All pixels are wired out to the array edge. ECR (electron cyclotron resonance) oxide underlayer is deposited underneath the sensor layer. The sensor (TES) layer consists of a superconducting underlayer and a normal metal top layer. If the sensor is deposited at high temperature, the ECR oxide can be vacuum annealed to improve film smoothness and etch characteristics. This process is designed to recover high-resolution, single-pixel x-ray microcalorimeter performance within arrays of arbitrarily large format. The critical current limiting parts of the circuit are designed to have simple interfaces that can be independently verified. The lead-to-TES interface is entirely determined in a single layer that has multiple points of interface to maximize critical current. The lead rails that overlap the TES sensor element contact both the superconducting underlayer and the TES normal metal

  6. An optical watermarking solution for color personal identification pictures

    NASA Astrophysics Data System (ADS)

    Tan, Yi-zhou; Liu, Hai-bo; Huang, Shui-hua; Sheng, Ben-jian; Pan, Zhong-ming

    2009-11-01

    This paper presents a new approach for embedding authentication information into images on printed materials based on an optical projection technique. Our experimental setup consists of two parts: one is a common camera, and the other is an LCD projector, which projects a pattern onto the subject's body (especially the face). The pattern, generated by a computer, acts as an illumination source with a sinusoidal distribution and is also the watermark signal. For a color image, the watermark is embedded into the blue channel. While pictures are taken (256×256, 512×512 and 567×390 pixels, respectively), an invisible mark is embedded directly into the magnitude coefficients of the Discrete Fourier Transform (DFT) at the moment of exposure. Both optical and digital correlation are suitable for detecting this type of watermark. The decoded watermark is a set of concentric circles or sectors in the DFT domain (middle-frequency region), which is robust to photographing, printing and scanning. Counterfeiters may modify or replace the original photograph to make fake passports, drivers' licenses and similar documents. Experiments show that it is difficult to forge certificates in which a watermark has been embedded by our projector-camera combination, based on this analogue watermarking method rather than a classical digital method.
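
    A purely digital sketch of the structure of the mark described above, i.e. a concentric ring added to the mid-frequency DFT magnitude of the blue channel. In the paper the pattern is introduced optically by the projector at exposure time, so this only illustrates the embedding geometry and a simple detection statistic, with illustrative parameters.

    ```python
    import numpy as np

    def embed_ring_watermark(img_rgb, radius_px=64, width=2, strength=0.05):
        """Add a concentric-ring mark to the mid-frequency DFT magnitude of the
        blue channel (digital stand-in for the optically projected pattern)."""
        img = img_rgb.astype(float)
        F = np.fft.fftshift(np.fft.fft2(img[..., 2]))
        h, w = F.shape
        yy, xx = np.mgrid[0:h, 0:w]
        ring = np.abs(np.hypot(yy - h / 2, xx - w / 2) - radius_px) < width
        F[ring] *= (1.0 + strength)                        # boost magnitude on the ring
        marked = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
        img[..., 2] = np.clip(marked, 0, 255)
        return img.astype(np.uint8)

    def ring_response(img_rgb, radius_px=64, width=2):
        """Mean |DFT| on the expected ring relative to the surrounding background."""
        F = np.abs(np.fft.fftshift(np.fft.fft2(img_rgb[..., 2].astype(float))))
        h, w = F.shape
        yy, xx = np.mgrid[0:h, 0:w]
        r = np.hypot(yy - h / 2, xx - w / 2)
        ring = np.abs(r - radius_px) < width
        nearby = (np.abs(r - radius_px) < 5 * width) & ~ring
        return F[ring].mean() / F[nearby].mean()

    rng = np.random.default_rng(0)
    photo = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
    marked = embed_ring_watermark(photo)
    print(f"plain {ring_response(photo):.3f}  marked {ring_response(marked):.3f}")
    ```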

  7. Xevioso Crater on Ceres

    NASA Image and Video Library

    2017-09-26

    Xevioso Crater is the small (5.3 miles, 8.5 kilometers in diameter) crater associated with bright ejecta toward the top of this image, taken by NASA's Dawn spacecraft. It is one of the newly named craters on Ceres. Xevioso is located in the vicinity of Ahuna Mons, the tall, lonely mountain seen toward the bottom of the picture. Given that the small impact that formed Xevioso was able to excavate bright material, scientists suspect the material may be found at shallow depth. Its nature and relationship to other bright regions on Ceres is under analysis. The asymmetrical distribution of this bright ejecta indicates Xevioso formed via an oblique impact. Another view of Xevioso can be found here. Xevioso is named for the Fon god of thunder and fertility from the Kingdom of Dahomey, which was located in a region that is now the west African country of Benin. Dawn acquired this picture on October 15, 2015, from its high altitude mapping orbit at about 915 miles (1,470 kilometers) above the surface. The center coordinates of this image are 3.8 degrees south latitude, 314 degrees east longitude, and its resolution is 450 feet (140 meters) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21907

  8. Correlative Light-Electron Fractography of Interlaminar Fracture in a Carbon-Epoxy Composite.

    PubMed

    Hein, Luis Rogerio de O; Campos, Kamila A de

    2015-12-01

    This work evaluates the use of light microscopes (LMs) as a tool for investigating interlaminar fracture in polymer composites with the aid of correlative fractography. Correlative fractography consists of an association of the extended depth of focus (EDF) method, based on reflected LM, with scanning electron microscopy (SEM) to evaluate interlaminar fractures. The use of these combined techniques is exemplified here for the mode I fracture of a carbon-epoxy plain-weave reinforced composite. The EDF-LM is a digital image-processing method that consists of extracting the in-focus pixels for each x-y coordinate from a stack of Z-ordered digital pictures from an LM, resulting in a fully focused picture and a height (elevation) map for each stack. SEM is the most widely used tool for identifying fracture mechanisms in a qualitative approach, with the combined advantages of a large focus depth and fine lateral resolution. However, LMs with EDF software may bypass the restriction on focus depth and provide enough lateral resolution at low magnification. Finally, correlative fractography can provide a general comprehension of fracture processes, with the benefits of associating different resolution scales and contrast modes.
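
    A minimal sketch of the per-pixel in-focus selection that EDF performs: for each x-y coordinate, the slice of the Z-ordered stack with the highest local sharpness (here, local variance) supplies the pixel of the composite, and the winning slice index forms the height map. This is a simplified stand-in for the commercial EDF software used in the study.

    ```python
    import numpy as np

    def local_sharpness(img, win=5):
        """Local variance in a win x win window as a simple per-pixel focus measure."""
        pad = win // 2
        padded = np.pad(img.astype(float), pad, mode='reflect')
        out = np.empty(img.shape, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = padded[i:i + win, j:j + win].var()
        return out

    def extended_depth_of_focus(stack):
        """stack: Z-ordered array (n_slices, h, w); returns the all-in-focus
        composite and the height map (index of the best-focused slice per pixel)."""
        stack = np.asarray(stack, dtype=float)
        sharpness = np.stack([local_sharpness(s) for s in stack])
        height_map = np.argmax(sharpness, axis=0)
        rows, cols = np.indices(height_map.shape)
        return stack[height_map, rows, cols], height_map

    # Synthetic stack: each of three slices is "in focus" (textured) in a different band of rows.
    stack = np.zeros((3, 30, 30))
    checker = (np.indices((10, 30)).sum(axis=0) % 2).astype(float)
    for z in range(3):
        stack[z, z * 10:(z + 1) * 10, :] = checker
    composite, heights = extended_depth_of_focus(stack)
    print([int(np.median(heights[z * 10:(z + 1) * 10])) for z in range(3)])  # [0, 1, 2]
    ```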

  9. Finite Element Simulations of Kaikoura, NZ Earthquake using DInSAR and High-Resolution DSMs

    NASA Astrophysics Data System (ADS)

    Barba, M.; Willis, M. J.; Tiampo, K. F.; Glasscoe, M. T.; Clark, M. K.; Zekkos, D.; Stahl, T. A.; Massey, C. I.

    2017-12-01

    Three-dimensional displacements from the Kaikoura, NZ, earthquake in November 2016 are imaged here using Differential Interferometric Synthetic Aperture Radar (DInSAR) and high-resolution Digital Surface Model (DSM) differencing and optical pixel tracking. Full-resolution co- and post-seismic interferograms of Sentinel-1A/B images are constructed using the JPL ISCE software. The OSU SETSM software is used to produce repeat 0.5 m posting DSMs from commercial satellite imagery, which are supplemented with UAV derived DSMs over the Kaikoura fault rupture on the eastern South Island, NZ. DInSAR provides long-wavelength motions while DSM differencing and optical pixel tracking provides both horizontal and vertical near fault motions, improving the modeling of shallow rupture dynamics. JPL GeoFEST software is used to perform finite element modeling of the fault segments and slip distributions and, in turn, the associated asperity distribution. The asperity profile is then used to simulate event rupture, the spatial distribution of stress drop, and the associated stress changes. Finite element modeling of slope stability is accomplished using the ultra high-resolution UAV derived DSMs to examine the evolution of post-earthquake topography, landslide dynamics and volumes. Results include new insights into shallow dynamics of fault slip and partitioning, estimates of stress change, and improved understanding of its relationship with the associated seismicity, deformation, and triggered cascading hazards.

  10. Reduced As Components in Highly Oxidized Environments: Evidence from Full Spectral XANES Imaging using the Maia Massively Parallel Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etschmann, B.; Ryan, C; Brugger, J

    2010-01-01

    Synchrotron X-ray fluorescence (SXRF) and X-ray absorption spectroscopy (XAS) have become standard tools to measure element concentration, distribution at micrometer- to nanometer-scale, and speciation (e.g., nature of host phase; oxidation state) in inhomogeneous geomaterials. The new Maia X-ray detector system provides a quantum leap for the method in terms of data acquisition rate. It is now possible to rapidly collect fully quantitative maps of the distribution of major and trace elements at micrometer spatial resolution over areas as large as 1 x 5 cm². Fast data acquisition rates also open the way to X-ray absorption near-edge structure (XANES) imaging, in which spectroscopic information is available at each pixel in the map. These capabilities are critical for studying inhomogeneous Earth materials. Using a 96-element prototype Maia detector, we imaged thin sections of an oxidized pisolitic regolith (2 x 4.5 mm² at 2.5 x 2.5 μm² pixel size) and a metamorphosed, sedimentary exhalative Mn-Fe ore (3.3 x 4 mm² at 1.25 x 5 μm²). In both cases, As K-edge XANES imaging reveals localized occurrence of reduced As in parts of these oxidized samples, which would have been difficult to recognize using traditional approaches.

  11. Performance of CMOS imager as sensing element for a Real-time Active Pixel Dosimeter for Interventional Radiology procedures

    NASA Astrophysics Data System (ADS)

    Magalotti, D.; Bissi, L.; Conti, E.; Paolucci, M.; Placidi, P.; Scorzoni, A.; Servoli, L.

    2014-01-01

    Staff members applying Interventional Radiology procedures are exposed to ionizing radiation, which can induce detrimental effects to the human body, and requires an improvement of radiation protection. This paper is focused on the study of the sensor element for a wireless real-time dosimeter to be worn by the medical staff during the interventional radiology procedures, in the framework of the Real-Time Active PIxel Dosimetry (RAPID) INFN project. We characterize a CMOS imager to be used as detection element for the photons scattered by the patient body. The CMOS imager has been first characterized in laboratory using fluorescence X-ray sources, then a PMMA phantom has been used to diffuse the X-ray photons from an angiography system. Different operating conditions have been used to test the detector response in realistic situations, by varying the X-ray tube parameters (continuous/pulsed mode, tube voltage and current, pulse parameters), the sensor parameters (gain, integration time) and the relative distance between sensor and phantom. The sensor response has been compared with measurements performed using passive dosimeters (TLD) and also with a certified beam, in an accredited calibration centre, in order to obtain an absolute calibration. The results are very encouraging, with dose and dose rate measurement uncertainties below the 10% level even for the most demanding Interventional Radiology protocols.

  12. Evaluation of computational endomicroscopy architectures for minimally-invasive optical biopsy

    NASA Astrophysics Data System (ADS)

    Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.

    2017-02-01

    We are investigating compressive sensing architectures for applications in endomicroscopy, where the narrow diameter probes required for tissue access can limit the achievable spatial resolution. We hypothesize that the compressive sensing framework can be used to overcome the fundamental pixel number limitation in fiber-bundle based endomicroscopy by reconstructing images with more resolvable points than fibers in the bundle. An experimental test platform was assembled to evaluate and compare two candidate architectures, based on introducing a coded amplitude mask at either a conjugate image or Fourier plane within the optical system. The benchtop platform consists of a common illumination and object path followed by separate imaging arms for each compressive architecture. The imaging arms contain a digital micromirror device (DMD) as a reprogrammable mask, with a CCD camera for image acquisition. One arm has the DMD positioned at a conjugate image plane ("IP arm"), while the other arm has the DMD positioned at a Fourier plane ("FP arm"). Lenses were selected and positioned within each arm to achieve an element-to-pixel ratio of 16 (230,400 mask elements mapped onto 14,400 camera pixels). We discuss our mathematical model for each system arm and outline the importance of accounting for system non-idealities. Reconstruction of a 1951 USAF resolution target using optimization-based compressive sensing algorithms produced images with higher spatial resolution than bicubic interpolation for both system arms when system non-idealities are included in the model. Furthermore, images generated with image plane coding appear to exhibit higher spatial resolution, but more noise, than images acquired through Fourier plane coding.

  13. Extended focal-plane array development for the International X-ray Observatory

    NASA Astrophysics Data System (ADS)

    Smith, Stephen J.; Bandler, Simon R.; Beyer, Joern; Chervenak, James A.; Drung, Dietmar; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Scott Porter, F.; Sadleir, John E.

    2009-12-01

    We are developing arrays of transition-edge sensors (TES's) for the International X-ray observatory (IXO). The IXO microcalorimeter array will consist of a central 40×40 core of 300 μm pitch pixels with a resolution of 2.5 eV from 0.3-10 keV. To maximize the science return from the mission, an outer extended array is also required. This 52×52 array (2304 elements surrounding the core) of 600 μm pitch pixels increases the field-of-view from 2' to 5.4' with a resolution of 10 eV. However, significantly increasing the number of readout channels is unfavorable due to the increase in mass and power of the readout chain as well as adding complexity at the focal plane. Consequently, we are developing position-sensitive devices which maintain the same plate scale but at a reduced number of readout channels. One option is to use multiple absorber elements with different thermal conductances to a single TES. Position discrimination is achieved from differences in the pulse rise-time. Another new option is to inductively couple several TES's to a single SQUID. Position discrimination can be achieved by using different combinations of coupling polarity, inductive couplings and heat sink conductances. We present first results demonstrating <9 eV across four 500 μm pixels coupled to a single SQUID. A further possibility is to increase the number of channels to be time-division multiplexed in a single column at some expense in resolution. In this paper we discuss experimental results and trade-offs for these extended array options.

  14. Dynamic Janus Metasurfaces in the Visible Spectral Region.

    PubMed

    Yu, Ping; Li, Jianxiong; Zhang, Shuang; Jin, Zhongwei; Schütz, Gisela; Qiu, Cheng-Wei; Hirscher, Michael; Liu, Na

    2018-06-27

    Janus monolayers have long been a captivating notion for breaking in-plane and out-of-plane structural symmetry. Originating from chemistry and materials science, the concept of Janus functions has recently been extended to ultrathin metasurfaces by arranging meta-atoms asymmetrically with respect to the propagation or polarization direction of the incident light. However, such metasurfaces are intrinsically static, and the information they carry can be straightforwardly decrypted by scanning the incident light directions and polarization states once the devices are fabricated. In this Letter, we present a dynamic Janus metasurface scheme in the visible spectral region. In each super unit cell, three plasmonic pixels are categorized into two sets. One set contains a magnesium nanorod and a gold nanorod that are orthogonally oriented with respect to each other, working as counter pixels. The other set only contains a magnesium nanorod. The effective pixels on the Janus metasurface can be reversibly regulated by hydrogenation/dehydrogenation of the magnesium nanorods. Such dynamic controllability at visible frequencies allows for flat optical elements with novel functionalities including beam steering, bifocal lensing, holographic encryption, and dual optical function switching.

  15. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.
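
    As a rough illustration of the idea, a minimal one-dimensional sketch follows: the state vector holds the phase and the local fringe frequency, and the Kalman update is driven by a re-wrapped innovation so the filter operates directly on the wrapped measurement. This is only a hypothetical simplification; the paper's method is two-dimensional, fits first-order polynomial coefficients within adaptive windows, and selects pixels by line scanning or quality guidance, none of which is reproduced here, and the names and noise parameters below are illustrative.

```python
import numpy as np

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def kalman_unwrap_1d(wrapped, q=1e-4, r=1e-2):
    """Illustrative 1D unwrapping: state = [phase, local frequency].

    The predicted phase is compared with the measurement through a wrapped
    innovation, so the (unknown) number of 2*pi jumps never enters explicitly.
    """
    x = np.array([wrapped[0], 0.0])            # initial state
    P = np.eye(2)                              # state covariance
    F = np.array([[1.0, 1.0], [0.0, 1.0]])     # constant local-frequency model
    Q = q * np.eye(2)                          # process noise (assumed)
    H = np.array([[1.0, 0.0]])                 # only the phase is observed
    unwrapped = [x[0]]
    for z in wrapped[1:]:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        innov = wrap(z - x[0])                 # re-wrapped residual
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                      # Kalman gain
        x = x + (K * innov).ravel()
        P = (np.eye(2) - K @ H) @ P
        unwrapped.append(x[0])
    return np.array(unwrapped)

# usage: a noisy quadratic phase ramp that wraps several times
true_phase = 0.002 * np.arange(500) ** 2 / 50.0
estimate = kalman_unwrap_1d(wrap(true_phase + 0.1 * np.random.randn(500)))
```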

  16. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and a least-mean-square linear approximation is proposed for the basic interpolation within each triangle. Triangular numbers are used to simplify the use of local (barycentric) coordinates for further analysis: each triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is the combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The described method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).
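
    To make the pixel-access idea concrete, the sketch below enumerates the pixels inside a triangular element with plain "for" loops, returns their barycentric (local) coordinates, and interpolates the three vertex values linearly across the triangle. This is a generic rasterization sketch under assumed conventions (integer vertex coordinates, image indexed as [row, column]); it does not reproduce the paper's triangular-number indexing or its least-mean-square fitting.

```python
import numpy as np

def pixels_in_triangle(v0, v1, v2):
    """Yield (x, y, l0, l1, l2): integer pixel coordinates inside the triangle
    together with their barycentric (local) coordinates."""
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    det = (v1[1]-v2[1])*(v0[0]-v2[0]) + (v2[0]-v1[0])*(v0[1]-v2[1])
    for y in range(int(min(ys)), int(max(ys)) + 1):       # plain "for" loops
        for x in range(int(min(xs)), int(max(xs)) + 1):
            l0 = ((v1[1]-v2[1])*(x-v2[0]) + (v2[0]-v1[0])*(y-v2[1])) / det
            l1 = ((v2[1]-v0[1])*(x-v2[0]) + (v0[0]-v2[0])*(y-v2[1])) / det
            l2 = 1.0 - l0 - l1
            if min(l0, l1, l2) >= 0.0:                     # inside (or on an edge)
                yield x, y, l0, l1, l2

def linear_interpolation(image, v0, v1, v2):
    """Interpolate the three vertex values linearly across the triangle
    (the paper fits a least-mean-square plane instead)."""
    f0, f1, f2 = image[v0[1], v0[0]], image[v1[1], v1[0]], image[v2[1], v2[0]]
    return {(x, y): l0 * f0 + l1 * f1 + l2 * f2
            for x, y, l0, l1, l2 in pixels_in_triangle(v0, v1, v2)}
```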

  17. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

    This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on lower-resolution Hazcam for Navcam-to-Hazcam handoff.

  18. Towards simultaneous Talbot bands based optical coherence tomography and scanning laser ophthalmoscopy imaging.

    PubMed

    Marques, Manuel J; Bradu, Adrian; Podoleanu, Adrian Gh

    2014-05-01

    We report a Talbot bands-based optical coherence tomography (OCT) system capable of producing longitudinal B-scan OCT images and en-face scanning laser ophthalmoscopy (SLO) images of the human retina in-vivo. The OCT channel employs a broadband optical source and a spectrometer. A gap is created between the sample and reference beams while on their way towards the spectrometer's dispersive element to create Talbot bands. The spatial separation of the two beams facilitates collection by an SLO channel of optical power originating exclusively from the retina, deprived from any contribution from the reference beam. Three different modes of operation are presented, constrained by the minimum integration time of the camera used in the spectrometer and by the galvo-scanners' scanning rate: (i) a simultaneous acquisition mode over the two channels, useful for small size imaging, that conserves the pixel-to-pixel correspondence between them; (ii) a hybrid sequential mode, where the system switches itself between the two regimes and (iii) a sequential "on-demand" mode, where the system can be used in either OCT or SLO regimes for as long as required. The two sequential modes present varying degrees of trade-off between pixel-to-pixel correspondence and independent full control of parameters within each channel. Images of the optic nerve and fovea regions obtained in the simultaneous (i) and in the hybrid sequential mode (ii) are presented.

  19. Towards simultaneous Talbot bands based optical coherence tomography and scanning laser ophthalmoscopy imaging

    PubMed Central

    Marques, Manuel J.; Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    We report a Talbot bands-based optical coherence tomography (OCT) system capable of producing longitudinal B-scan OCT images and en-face scanning laser ophthalmoscopy (SLO) images of the human retina in-vivo. The OCT channel employs a broadband optical source and a spectrometer. A gap is created between the sample and reference beams while on their way towards the spectrometer’s dispersive element to create Talbot bands. The spatial separation of the two beams facilitates collection by an SLO channel of optical power originating exclusively from the retina, deprived from any contribution from the reference beam. Three different modes of operation are presented, constrained by the minimum integration time of the camera used in the spectrometer and by the galvo-scanners’ scanning rate: (i) a simultaneous acquisition mode over the two channels, useful for small size imaging, that conserves the pixel-to-pixel correspondence between them; (ii) a hybrid sequential mode, where the system switches itself between the two regimes and (iii) a sequential “on-demand” mode, where the system can be used in either OCT or SLO regimes for as long as required. The two sequential modes present varying degrees of trade-off between pixel-to-pixel correspondence and independent full control of parameters within each channel. Images of the optic nerve and fovea regions obtained in the simultaneous (i) and in the hybrid sequential mode (ii) are presented. PMID:24877006

  20. 10000 pixels wide CMOS frame imager for earth observation from a HALE UAV

    NASA Astrophysics Data System (ADS)

    Delauré, B.; Livens, S.; Everaerts, J.; Kleihorst, R.; Schippers, Gert; de Wit, Yannick; Compiet, John; Banachowicz, Bartosz

    2009-09-01

    MEDUSA is a lightweight, high-resolution camera designed to be operated from a solar-powered Unmanned Aerial Vehicle (UAV) flying at stratospheric altitudes. The instrument is a technology demonstrator within the Pegasus program and targets applications such as crisis management and cartography. A special wide-swath CMOS imager has been developed by Cypress Semiconductor Corporation Belgium to meet the specific sensor requirements of MEDUSA. The CMOS sensor has a stitched design comprising a panchromatic and color sensor on the same die. Each sensor consists of 10000*1200 square pixels (5.5μm size, novel 6T architecture) with micro-lenses. The exposure is performed by means of a high efficiency snapshot shutter. The sensor is able to operate at a rate of 30fps in full frame readout. Due to a novel pixel design, the sensor has low dark leakage of the memory elements (PSNL) and low parasitic light sensitivity (PLS). Still it maintains a relatively high QE (Quantum Efficiency) and a FF (fill factor) of over 65%. It features an MTF (Modulation Transfer Function) higher than 60% at Nyquist frequency in both X and Y directions. The measured optical/electrical crosstalk (expressed as MTF) of this 5.5um pixel is state of the art. These properties make it possible to acquire sharp images even in low-light conditions.

  1. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle effectively but also induces blurring of the object of interest. The second-fold process then restores object boundaries and texture with adaptive wavelet fusion. The degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a clear visual quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India.
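
    A minimal sketch of the first fold only (block-based thresholding of wavelet detail coefficients) is given below, using the PyWavelets package. The wavelet, block size and per-block universal threshold are illustrative assumptions, and the second fold (adaptive NDF wavelet fusion) is not shown.

```python
import numpy as np
import pywt

def block_threshold(channel, block=16, mode='soft'):
    """Threshold a wavelet detail sub-band block by block.

    Each non-overlapping block x block tile gets its own threshold (a universal
    threshold derived from a robust noise estimate of the tile itself).
    """
    out = channel.copy()
    h, w = out.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = out[i:i+block, j:j+block]
            sigma = np.median(np.abs(tile)) / 0.6745 + 1e-12   # robust noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(tile.size))
            out[i:i+block, j:j+block] = pywt.threshold(tile, thr, mode=mode)
    return out

def denoise_us_image(img, wavelet='db4', level=2, block=16, mode='soft'):
    """First fold only: block-threshold every detail band and reconstruct."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    new_coeffs = [coeffs[0]]                      # keep the approximation band
    for (cH, cV, cD) in coeffs[1:]:               # threshold each detail band
        new_coeffs.append(tuple(block_threshold(c, block, mode) for c in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)
```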

  2. 1.5 Meter Per Pixel View of Boulders in Ganges Chasma

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The Mars Orbiter Camera (MOC) on board the Mars Global Surveyor (MGS)spacecraft was designed to be able to take pictures that 'bridge the gap' between what could be seen by the Mariner 9 and Viking Orbiters from space and what could be seen by landers from the ground. In other words, MOC was designed to be able to see boulders of sizes similar to and larger than those named 'Yogi' at the Mars Pathfinder site and 'Big Joe' at the Viking 1 landing site. To see such boulders, a resolution of at least 1.5 meters (5 feet) per pixel was required.

    With the start of the MGS Mapping Phase of the mission during the second week of March 1999, the MOC team is pleased to report that 'the gap is bridged.' This image shows a field of boulders on the surface of a landslide deposit in Ganges Chasma. Ganges Chasma is one of the valleys in the Valles Marineris canyon system. The image resolution is 1.5 meters per pixel. The boulders shown here range in size from about 2 meters (7 feet) to about 20 meters (66 feet). The image covers an area 1 kilometer (0.62 miles) across, and illumination is from the upper left.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  3. Megapixel mythology and photospace: estimating photospace for camera phones from large image sets

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror O.; Hertel, Dirk W.

    2008-01-01

    It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly by direct measurement of subjective quality, or by photospace-weighting of objective attributes. The population of a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
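
    The photospace-weighting step can be sketched in a few lines: given a histogram of how often pictures are taken in each (illumination, distance) cell and an objective quality score per cell, the user-experienced quality is the usage-weighted average. The grid and the numbers below are purely hypothetical.

```python
import numpy as np

def photospace_weighted_quality(photospace_counts, quality_map):
    """Expected user-experienced quality: objective quality per photospace cell
    weighted by how often pictures are actually taken in that cell."""
    p = np.asarray(photospace_counts, dtype=float)
    p = p / p.sum()                               # normalise the usage histogram
    return float((p * np.asarray(quality_map, dtype=float)).sum())

# usage with a hypothetical 3 illumination levels x 3 distance bins grid
usage = [[30, 10, 5],        # low light: frequent close-up shots
         [20, 15, 5],
         [ 5,  5, 5]]
quality = [[2.0, 2.5, 3.0],  # objective quality scores per cell (illustrative)
           [3.5, 4.0, 4.0],
           [4.0, 4.5, 4.5]]
print(photospace_weighted_quality(usage, quality))
```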

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skidmore, Cary Bradford; Preston, Daniel N.

    These are a set of slides for educational outreach to children on high explosives science. It gives an introduction to the elements involved in this science: carbon, hydrogen, nitrogen, and oxygen. Combined, these form the molecule HMX. Many pictures are also included to illustrate explosions.

  5. Some dipole shower studies

    NASA Astrophysics Data System (ADS)

    Cabouat, Baptiste; Sjöstrand, Torbjörn

    2018-03-01

    Parton showers have become a standard component in the description of high-energy collisions. Nowadays most final-state ones are of the dipole character, wherein a pair of partons branches into three, with energy and momentum preserved inside this subsystem. For initial-state showers a dipole picture is also possible and commonly used, but the older global-recoil strategy remains a valid alternative, wherein larger groups of partons share the energy-momentum preservation task. In this article we introduce and implement a dipole picture also for initial-state radiation in Pythia, and compare with the existing global-recoil one, and with data. For the case of Deeply Inelastic Scattering we can directly compare with matrix element expressions and show that the dipole picture gives a very good description over the whole phase space, at least for the first branching.

  6. Europa Ridges, Hills and Domes

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This moderate-resolution view of the surface of one of Jupiter's moons, Europa, shows the complex icy crust that has been extensively modified by fracturing and the formation of ridges. The ridge systems superficially resemble highway networks with overpasses, interchanges and junctions. From the relative position of the overlaps, it is possible to determine the age sequence for the ridge sets. For example, while the 8-kilometer-wide (5-mile) ridge set in the lower left corner is younger than most of the terrain seen in this picture, a narrow band cuts across the set toward the bottom of the picture, indicating that the band formed later. In turn, this band is cut by the narrow 2- kilometer-wide (1.2-mile) double ridge running from the lower right to upper left corner of the picture. Also visible are numerous clusters of hills and low domes as large as 9 kilometers (5.5 miles) across, many with associated dark patches of non-ice material. The ridges, hills and domes are considered to be ice-rich material derived from the subsurface. These are some of the youngest features seen on the surface of Europa and could represent geologically young eruptions.

    This area covers about 140 kilometers by 130 kilometers (87 miles by 81 miles) and is centered at 12.3 degrees north latitude, 268 degrees west longitude. Illumination is from the east (right side of picture). The resolution is about 180 meters (200 yards) per pixel, meaning that the smallest feature visible is about a city block in size. The picture was taken by the Solid State Imaging system on board the Galileo spacecraft on February 20, 1997, from a distance of 17,700 kilometers (11,000 miles) during its sixth orbit around Jupiter.

    The Jet Propulsion Laboratory, Pasadena, CA, manages the mission for NASA's Office of Space Science, Washington D.C. This image and other images and data received from Galileo are posted on the World Wide Web Galileo mission home page at http://galileo.jpl.nasa.gov.

  7. Simultaneous storage of medical images in the spatial and frequency domain: A comparative study

    PubMed Central

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan

    2004-01-01

    Background Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results It can be seen from the results that the process does not affect picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by only 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than DCT and DWT domain interleaving. Conclusion The results show that for spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient. PMID:15180899
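
    A minimal sketch of spatial-domain LSB interleaving is shown below; it assumes an 8-bit grayscale image and an already encrypted/compressed payload (the DPCM and encryption stages of the paper are not reproduced), and all names and data are illustrative.

```python
import numpy as np

def embed_lsb(image, payload_bytes):
    """Hide payload bits in the least significant bit of successive pixels.

    Changing only the LSB alters an 8-bit pixel's brightness by at most
    1 part in 256, which is why picture quality is essentially preserved.
    """
    bits = np.unpackbits(np.frombuffer(payload_bytes, dtype=np.uint8))
    flat = image.flatten()                        # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(stego_image, n_bytes):
    bits = (stego_image.flatten()[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

# usage with a hypothetical 8-bit image and an (already encrypted) record
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
record = b"patient-id:12345;ECG:..."              # placeholder payload
stego = embed_lsb(img, record)
assert extract_lsb(stego, len(record)) == record
```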

  8. Miniaturized X-ray telescope for VZLUSAT-1 nanosatellite with Timepix detector

    NASA Astrophysics Data System (ADS)

    Baca, T.; Platkevic, M.; Jakubek, J.; Inneman, A.; Stehlikova, V.; Urban, M.; Nentvich, O.; Blazek, M.; McEntaffer, R.; Daniel, V.

    2016-10-01

    We present the application of a Timepix detector on the VZLUSAT-1 nanosatellite. Timepix is a compact pixel detector (256×256 square pixels, 55×55 μm each) sensitive to hard X-ray radiation. It is suitable for detecting extraterrestrial X-rays due to its low noise characteristics, which enables measuring without special cooling. This project aims to verify the practicality of the detector in conjunction with 1-D Lobster-Eye optics to observe celestial sources between 5 and 20 keV. A modified USB interface (developed by IEAP at CTU in Prague) is used for low-level control of the Timepix. An additional 8-bit Atmel microcontroller is dedicated for commanding the detector and to process the data onboard the satellite. We present software methods for onboard post-processing of captured images, which are suitable for implementation under the constraints of the low-powered embedded hardware. Several measuring modes are prepared for different scenarios including single picture exposure, solar UV-light triggered exposure, and long-term all-sky monitoring. The work has been done within Medipix2 collaboration. The satellite is planned for launch in April 2017 as a part of the QB50 project with an end of life expectancy in 2019.

  9. A 65k pixel, 150k frames-per-second camera with global gating and micro-lenses suitable for fluorescence lifetime imaging

    NASA Astrophysics Data System (ADS)

    Burri, Samuel; Powolny, François; Bruschini, Claudio E.; Michalet, Xavier; Regazzoni, Francesco; Charbon, Edoardo

    2014-05-01

    This paper presents our work on a 65k pixel single-photon avalanche diode (SPAD) based imaging sensor realized in a 0.35μm standard CMOS process. At a resolution of 512 by 128 pixels the sensor is read out in 6.4μs to deliver over 150k monochrome frames per second. The individual pixel has a size of 24μm2 and contains the SPAD with a 12T quenching and gating circuitry along with a memory element. The gating signals are distributed across the chip through a balanced tree to minimize the signal skew between the pixels. The array of pixels is row-addressable and data is sent out of the chip on 128 lines in parallel at a frequency of 80MHz. The system is controlled by an FPGA which generates the gating and readout signals and can be used for arbitrary real-time computation on the frames from the sensor. The communication protocol between the camera and a conventional PC is USB2. The active area of the chip is 5% and can be significantly improved with the application of a micro-lens array. A micro-lens array, for use with collimated light, has been designed and its performance is reviewed in the paper. Among other high-speed phenomena the gating circuitry capable of generating illumination periods shorter than 5ns can be used for Fluorescence Lifetime Imaging (FLIM). In order to measure the lifetime of fluorophores excited by a picosecond laser, the sensor's illumination period is synchronized with the excitation laser pulses. A histogram of the photon arrival times relative to the excitation is then constructed by counting the photons arriving during the sensitive time for several positions of the illumination window. The histogram for each pixel is transferred afterwards to a computer where software routines extract the lifetime at each location with an accuracy better than 100ps. We show results for fluorescence lifetime measurements using different fluorophores with lifetimes ranging from 150ps to 5ns.
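
    The lifetime-extraction step can be sketched as a mono-exponential fit to the per-pixel histogram of photon counts versus gate position. The sketch below uses a simple log-linear fit on the decaying tail and ignores the instrument response and background, so it is only an illustration of the principle, not the camera's actual processing chain; all names and numbers are assumptions.

```python
import numpy as np

def lifetime_from_histogram(gate_times_ns, counts):
    """Estimate a mono-exponential lifetime from a gated photon histogram.

    A weighted linear fit of log(counts) vs. time on the decaying tail gives
    the decay rate; tau = -1 / slope. Real FLIM analysis would also handle the
    instrument response function and background, omitted here.
    """
    t = np.asarray(gate_times_ns, dtype=float)
    c = np.asarray(counts, dtype=float)
    peak = np.argmax(c)                       # fit only the decaying tail
    t, c = t[peak:], c[peak:]
    good = c > 0
    slope, _ = np.polyfit(t[good], np.log(c[good]), 1, w=np.sqrt(c[good]))
    return -1.0 / slope

# usage: simulated 100 ps gate steps, true lifetime 2.5 ns
t = np.arange(0, 10, 0.1)
hist = np.random.poisson(1000 * np.exp(-t / 2.5))
print(lifetime_from_histogram(t, hist))       # ~2.5 ns
```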

  10. Landsat-Swath Imaging Spectrometer Design

    NASA Technical Reports Server (NTRS)

    Mouroulis, Pantazis; Green, Robert O.; Van Gorp, Byron; Moore, Lori; Wilson, Daniel W.; Bender, Holly A.

    2015-01-01

    We describe the design of a high-throughput pushbroom imaging spectrometer and telescope system that is capable of Landsat swath and resolution while providing better than 10 nm per pixel spectral resolution. The design is based on a 3200 x 480 element x 18 µm pixel size focal plane array, two of which are utilized to cover the full swath. At an optical speed of F/1.8, the system is the fastest proposed to date to our knowledge. The utilization of only two spectrometer modules fed from the same telescope reduces system complexity while providing a solution within achievable detector technology. Predictions of complete system response are shown. Also, it is shown that detailed ghost analysis is a requirement for this type of spectrometer and forms an essential part of a complete design.

  11. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage has a multivariate pixel value associated with each pixel location, scaled and quantized into a gray-level vector; the covariance between components expresses the extent to which two component images are correlated. The PCT decorrelates the multiimage, reducing its dimensionality and revealing its intercomponent dependencies when some off-diagonal covariance elements are not small; for display purposes, the principal component images must be postprocessed back into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
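
    A compact sketch of the principal component transformation of a multiimage (bands x rows x columns) follows: the band covariance is eigendecomposed, components are ordered by variance, and each component image is rescaled to gray levels for display. Array conventions and the rescaling are assumptions for illustration.

```python
import numpy as np

def principal_component_transform(multiimage):
    """Decorrelate a multiimage with the PCT.

    multiimage: array of shape (bands, rows, cols). Returns the component
    images (same shape, ordered by decreasing variance) and the eigenvalues,
    whose relative sizes indicate the intrinsic component dimensionality.
    """
    b, r, c = multiimage.shape
    X = multiimage.reshape(b, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)             # remove band means
    cov = np.cov(X)                                # b x b band covariance
    eigval, eigvec = np.linalg.eigh(cov)           # symmetric matrix -> eigh
    order = np.argsort(eigval)[::-1]               # largest variance first
    eigval, eigvec = eigval[order], eigvec[:, order]
    components = (eigvec.T @ X).reshape(b, r, c)
    return components, eigval

def to_gray(component):
    """Post-process one component image into 0..255 gray levels for display."""
    lo, hi = component.min(), component.max()
    return np.uint8(255 * (component - lo) / (hi - lo + 1e-12))
```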

  12. Computer animation of modal and transient vibrations

    NASA Technical Reports Server (NTRS)

    Lipman, Robert R.

    1987-01-01

    An interactive computer graphics processor is described that is capable of generating input to animate modal and transient vibrations of finite element models on an interactive graphics system. The results from NASTRAN can be postprocessed such that a three dimensional wire-frame picture, in perspective, of the finite element mesh is drawn on the graphics display. Modal vibrations of any mode shape or transient motions over any range of steps can be animated. The finite element mesh can be color-coded by any component of displacement. Viewing parameters and the rate of vibration of the finite element model can be interactively updated while the structure is vibrating.

  13. NAVAIR Portable Source Initiative (NPSI) Standard for Reusable Source Dataset Metadata (RSDM) V2.4

    DTIC Science & Technology

    2012-09-26

    defining a raster file format: <RasterFileFormat> <FormatName>TIFF</FormatName> <Order>BIP</Order> <DataType>8-BIT_UNSIGNED</DataType> ... Band order options include band interleaved by line (BIL) and band interleaved by pixel (BIP). The DataType element of RasterFileFormatType is typed as a restriction of xsd:string.

  14. CryoPAF4: a cryogenic phased array feed design

    NASA Astrophysics Data System (ADS)

    Locke, Lisa; Garcia, Dominic; Halman, Mark; Henke, Doug; Hovey, Gary; Jiang, Nianhua; Knee, Lewis; Lacy, Gordon; Loop, David; Rupen, Michael; Veidt, Bruce; Wierzbicki, Ramunas

    2016-07-01

    Phased array feed (PAF) receivers used on radio astronomy telescopes offer the promise of increased fields of view while maintaining the superlative performance attained with traditional single pixel feeds (SPFs). However, the much higher noise temperatures of room temperature PAFs compared to cryogenically-cooled SPFs have prevented their general adoption. Here we describe a conceptual design for a cryogenically cooled 2.8 - 5.18 GHz dual linear polarization PAF with estimated receiver temperature of 11 K. The cryogenic PAF receiver will comprise a 140 element Vivaldi antenna array and low-noise amplifiers housed in a 480 mm diameter cylindrical dewar covered with a RF transparent radome. A broadband two-section coaxial feed is integrated within each metal antenna element to withstand the cryogenic environment and to provide a 50 ohm impedance for connection to the rest of the receiver. The planned digital beamformer performs digitization, frequency band selection, beam forming and array covariance matrix calibration. Coupling to a 15 m offset Gregorian dual-reflector telescope, cryoPAF4 can expect to form 18 overlapping beams increasing the field of view by a factor of 8x compared to a single pixel receiver of equal system temperature.

  15. Development of a mercuric iodide detector array for in-vivo x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.E.; Iwanczyk, J.S.; Tornai, M.P.

    A nineteen element mercuric iodide (HgI₂) detector array has been developed in order to investigate the potential of using this technology for in-vivo x-ray and gamma-ray imaging. A prototype cross-grid detector array was constructed with hexagonal pixels of 1.9 mm diameter (active area = 3.28 mm²) and 0.2 mm thick septa. The overall detector active area is roughly 65 mm². A detector thickness of 1.2 mm was used to achieve about 100% efficiency at 60 keV and 67% efficiency at 140 keV. The detector fabrication, geometry and structure were optimized for charge collection and to minimize crosstalk between elements. A section of a standard high resolution cast-lead gamma-camera collimator was incorporated into the detector to provide collimation matching the discrete pixel geometry. Measurements of spectral and spatial performance of the array were made using 241-Am and 99m-Tc sources. These measurements were compared with similar measurements made using an optimized single HgI₂ x-ray detector with active area of about 3 mm² and thickness of 500 μm.

  16. Phased-array ultrasonic surface contour mapping system and method for solids hoppers and the like

    DOEpatents

    Fasching, George E.; Smith, Jr., Nelson S.

    1994-01-01

    A real-time ultrasonic surface contour mapping system is provided including a digitally controlled phased array of transmitter/receiver (T/R) elements located in a fixed position above the surface to be mapped. The surface is divided into a predetermined number of pixels which are separately scanned by an arrangement of T/R elements by applying phase-delayed signals thereto that produce ultrasonic tone bursts from each T/R that arrive at a point X in phase and at the same time relative to the leading edge of the tone burst pulse, so that the acoustic energies from each T/R combine in a reinforcing manner at point X. The signals produced by the reception of the echo signals reflected from point X back to the T/Rs are also delayed appropriately so that they add in phase at the input of a signal combiner. This combined signal is then processed to determine the range to the point X using density-corrected sound velocity values. An autofocusing signal is developed from the computed average range for a complete scan of the surface pixels. A surface contour map is generated in real time from the range signals on a video monitor.
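
    The transmit-focusing idea can be sketched by computing, for each T/R element, the delay that makes its tone burst arrive at the focal point X at the same time as the burst from the farthest element. The geometry, sound speed and names below are illustrative; the patented system additionally applies density-corrected sound velocities and delays the received echoes before combining, which is not shown.

```python
import numpy as np

def focusing_delays(element_xyz, focal_point, sound_speed=343.0):
    """Per-element transmit delays (seconds) so all tone bursts arrive at X in phase.

    The farthest element fires first (zero delay); closer elements are delayed
    so every wavefront spends the same total time reaching the focal point.
    """
    elems = np.asarray(element_xyz, dtype=float)       # (n, 3) element positions
    fp = np.asarray(focal_point, dtype=float)
    dist = np.linalg.norm(elems - fp, axis=1)          # element-to-X ranges
    return (dist.max() - dist) / sound_speed

# usage: a 4 x 4 array 1 m above the surface, focusing on a surface pixel X
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 4), np.linspace(-0.1, 0.1, 4))
elements = np.column_stack([xs.ravel(), ys.ravel(), np.ones(16)])
delays = focusing_delays(elements, focal_point=[0.3, 0.2, 0.0])
```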

  17. Laser Fusion - A New Thermonuclear Concept

    ERIC Educational Resources Information Center

    Cooper, Ralph S.

    1975-01-01

    Describes thermonuclear processes induced by interaction of a laser beam with the surface of a fuel pellet. An expanding plasma is formed which results in compression of the element. Laser and reactor technology are discussed. Pictures and diagrams are included. (GH)

  18. Segmentation of suspicious objects in an x-ray image using automated region filling approach

    NASA Astrophysics Data System (ADS)

    Fu, Kenneth; Guest, Clark; Das, Pankaj

    2009-08-01

    To accommodate the flow of commerce, cargo inspection systems require a high probability of detection and low false alarm rate while still maintaining a minimum scan speed. Since objects of interest (high atomic-number metals) will often be heavily shielded to avoid detection, any detection algorithm must be able to identify such objects despite the shielding. Since pixels of a shielded object have a greater opacity than the shielding, we use a clustering method to classify objects in the image by pixel intensity levels. We then look within each intensity level region for sub-clusters of pixels with greater opacity than the surrounding region. A region containing an object has an enclosed-contour region (a hole) inside it. We apply a region filling technique to fill in the hole, which represents a shielded object of potential interest. One method for region filling is seed-growing, which puts a "seed" starting point in the hole area and uses a selected structural element to fill out that region. However, automatic seed point selection is a hard problem; it requires additional information to decide if a pixel is within an enclosed region. Here, we propose a simple, robust method for region filling that avoids the problem of seed point selection. In our approach, we calculate the gradients Gx and Gy at each pixel in a binary image, fill in 1s between each pair of points x1 and x2 where Gx(x1,y) = -1 and Gx(x2,y) = 1, and do the same in the y-direction. The intersection of the two results is the filled region. We give a detailed discussion of our algorithm, discuss the strengths this method has over other methods, and show results of using our method.
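
    A small sketch of the described gradient-based fill is given below: along each row, runs bounded by a -1 gradient followed by a +1 gradient are set to 1, the same is done along columns, and the two results are intersected. The array conventions and the toy example are assumptions for illustration.

```python
import numpy as np

def fill_rowwise(binary):
    """Along each row, set to 1 every run that starts after a -1 gradient
    (object -> hole) and ends at the next +1 gradient (hole -> object)."""
    out = binary.copy()
    h, _ = binary.shape
    for y in range(h):
        gx = np.diff(binary[y].astype(int))           # Gx: -1 falling, +1 rising
        start = None
        for x, g in enumerate(gx):
            if g == -1:
                start = x + 1                         # first pixel inside the gap
            elif g == 1 and start is not None:
                out[y, start:x + 1] = 1               # fill the enclosed run
                start = None
    return out

def fill_holes(binary):
    """Fill enclosed regions: intersect the row-wise and column-wise fills."""
    rows = fill_rowwise(binary)
    cols = fill_rowwise(binary.T).T
    return rows & cols

# usage: a blob with a rectangular hole standing in for a shielded object
img = np.zeros((12, 12), dtype=np.uint8)
img[2:10, 2:10] = 1
img[5:8, 5:8] = 0                                     # the "hole"
filled = fill_holes(img)
```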

  19. Combined statistical analysis of landslide release and propagation

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay

    2016-04-01

    Statistical methods - often coupled with stochastic concepts - are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative probability function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone - the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. We quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by the typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. Therefore we conclude that the method is valid in general, even though the concept of the zonal release probability bears some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when excluding the largest, deep-seated, landslides from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.
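
    Steps (5) and (6) can be sketched as follows; note that the zone-size scaling below uses a simple independence assumption purely for illustration, whereas the study derives this relationship from empirical curves, and all names are hypothetical.

```python
import numpy as np

def zonal_release_probability(pixel_release_probs):
    """Probability that at least one pixel in the possible release zone fails.
    (Illustrative independence assumption; the study uses empirical curves.)"""
    p = np.asarray(pixel_release_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)

def composite_impact_probability(impact_prob, zonal_release_prob):
    """Step (6): multiply the impact probability of a pixel by the release
    probability of the zone that can reach it."""
    return np.asarray(impact_prob) * np.asarray(zonal_release_prob)

# usage: a release zone of three pixels and one impacted pixel downslope
p_zone = [0.02, 0.05, 0.01]
p_release_zone = zonal_release_probability(p_zone)        # ~0.078
p_combined = composite_impact_probability(0.4, p_release_zone)
```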

  20. Martian 'Swiss Cheese'

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image is illuminated by sunlight from the upper left.

    Looking like pieces of sliced and broken swiss cheese, the upper layer of the martian south polar residual cap has been eroded, leaving flat-topped mesas into which are set circular depressions such as those shown here. The circular features are depressions, not hills. The largest mesas here stand about 4 meters (13 feet) high and may be composed of frozen carbon dioxide and/or water. Nothing like this has ever been seen anywhere on Mars except within the south polar cap, leading to some speculation that these landforms may have something to do with the carbon dioxide thought to be frozen in the south polar region. On Earth, we know frozen carbon dioxide as 'dry ice'. On Mars, as this picture might be suggesting, there may be entire landforms larger than a small town and taller than 2 to 3 men and women that consist, in part, of dry ice.

    No one knows for certain whether frozen carbon dioxide has played a role in the creation of the 'swiss cheese' and other bizarre landforms seen in this picture. The picture covers an area 3 x 9 kilometers (1.9 x 5.6 miles) near 85.6°S, 74.4°W at a resolution of 7.3 meters (24 feet) per pixel. This picture was taken by the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) during early southern spring on August 3, 1999.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  1. Young Tanzanians and the Cinema: A Study of the Effects of Selected Basic Motion Picture Elements and Population Characteristics on Filmic Comprehension of Tanzanian Adolescent Primary School Children.

    ERIC Educational Resources Information Center

    Giltrow, David Roger

    A study was conducted of Tanzanian adolescent school children's responses to filmic elements. The design included a very large sample in a complicated factorial design, varying such factors as color, type of action, background and sound of the film, and the demographic characteristics of the subjects. Results showed that of these variables,…

  2. Statistical analysis of low-voltage EDS spectrum images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, I.M.

    1998-03-01

    The benefits of using low (≤5 kV) operating voltages for energy-dispersive X-ray spectrometry (EDS) of bulk specimens have been explored only during the last few years. This paper couples low-voltage EDS with two other emerging areas of characterization: spectrum imaging and multivariate statistical analysis (MSA), applied here to a computer chip manufactured by a major semiconductor company. Data acquisition was performed with a Philips XL30-FEG SEM operated at 4 kV and equipped with an Oxford super-ATW detector and XP3 pulse processor. The specimen was normal to the electron beam and the take-off angle for acquisition was 35°. The microscope was operated with a 150 μm diameter final aperture at spot size 3, which yielded an X-ray count rate of ~2,000 s⁻¹. EDS spectrum images were acquired as Adobe Photoshop files with the 4pi plug-in module. (The spectrum images could also be stored as NIH Image files, but the raw data are automatically rescaled as maximum-contrast (0-255) 8-bit TIFF images -- even at 16-bit resolution -- which poses an inconvenience for quantitative analysis.) The 4pi plug-in module is designed for EDS X-ray mapping and allows simultaneous acquisition of maps from 48 elements plus an SEM image. The spectrum image was acquired by re-defining the energy intervals of the 48 elements to form a series of contiguous 20 eV windows from 1.25 keV to 2.19 keV. A spectrum image of 450 x 344 pixels was acquired from the specimen with a sampling density of 50 nm/pixel and a dwell time of 0.25 live seconds per pixel, for a total acquisition time of ~14 h. The binary data files were imported into Mathematica for analysis with software developed by the author at Oak Ridge National Laboratory. A 400 x 300 pixel section of the original image was analyzed. MSA required ~185 Mbytes of memory and ~18 h of CPU time on a 300 MHz Power Macintosh 9600.

  3. Image quality measures to assess hyperspectral compression techniques

    NASA Astrophysics Data System (ADS)

    Lurie, Joan B.; Evans, Bruce W.; Ringer, Brian; Yeates, Mathew

    1994-12-01

    The term 'multispectral' is used to describe imagery with anywhere from three to about 20 bands of data. The images acquired by Landsat and similar earth sensing satellites including the French Spot platform are typical examples of multispectral data sets. Applications range from crop observation and yield estimation, to forestry, to sensing of the environment. The wave bands typically range from the visible to thermal infrared and are fractions of a micron wide. They may or may not be contiguous. Thus each pixel will have several spectral intensities associated with it, but detailed spectra are not obtained. The term 'hyperspectral' is typically used for spectral data encompassing hundreds of samples of a spectrum. Hyperspectral, electro-optical sensors typically operate in the visible and near infrared bands. Their characteristic property is the ability to resolve a large number (typically hundreds) of contiguous spectral bands, thus producing a detailed profile of the electromagnetic spectrum. Like multispectral sensors, recently developed hyperspectral sensors are often also imaging sensors, measuring spectra over a two-dimensional spatial array of picture elements, or pixels. The resulting data is thus inherently three dimensional - an array of samples in which two dimensions correspond to spatial position and the third to wavelength. The data sets, commonly referred to as image cubes or datacubes (although technically they are often rectangular solids), are very rich in information but quickly become unwieldy in size, generating formidable torrents of data. Both spaceborne and airborne hyperspectral cameras exist and are in use today. The data is unique in its ability to provide high spatial and spectral resolution simultaneously, and shows great promise in both military and civilian applications. A data analysis system has been built at TRW under a series of Internal Research and Development projects. This development has been prompted by the business opportunities, by the series of instruments built here and by the availability of data from other instruments. The processing system has been used to process data produced by TRW sensors and other instruments. Figure 1 provides an overview of the TRW hyperspectral collection, data handling and exploitation capability. The Analysis and Exploitation functions deal with the digitized image cubes. The analysis system was designed to handle various types of data but the emphasis was on the data acquired by the TRW instruments.

  4. Lateral organization and aesthetic preference: the importance of peripheral visual asymmetries.

    PubMed

    Beaumont, J G

    1985-01-01

    The observation that right-handers prefer pictures with the important content to the right was examined. In the first experiment, subjects manipulated the two elements of the composition. They showed a bias to place the principal object to the right and, with a central principal object, the secondary object was placed to the left. In a further experiment, eye movements were recorded while subjects scanned the pictures used in the first experiment and a rightward lateral bias in gaze direction was observed. It is argued that lateral asymmetry in preferred picture arrangements is not the result of a counterbalancing of content against perceptual bias, but a consequence of gaze being directed to informative content on the right, leaving more of the secondary content within the left visual field and associated with attentional bias or processes of the right hemisphere.

  5. Dunes in Twilight

    NASA Technical Reports Server (NTRS)

    2004-01-01

    17 January 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows frost-covered north polar dunes in early January 2004. When this picture was taken, the dunes were in twilight, just before the late winter dawn that would come a few days later. These dunes spent many of the last several months in complete darkness. In this image, they are illuminated only by sunlight that has been scattered over the horizon by the martian atmosphere. These dunes are located near 77.0°N, 246.2°W. The image covers an area 3 km (1.9 mi) wide and has been expanded by 200% from its original 12 meters (39 ft.) per pixel scale. While the sun had not yet risen when the image was obtained, illumination is mostly from the lower left.

  6. A new MAP for Mars

    NASA Technical Reports Server (NTRS)

    Zubrin, Robert; Price, Steve; Clark, Ben; Cantrell, Jim; Bourke, Roger

    1993-01-01

    A Mars Aerial Platform (MAP) mission capable of generating thousands of very-high-resolution (20 cm/pixel) pictures of the Martian surface is considered. The MAP entry vehicle will map the global circulation of the planet's atmosphere and examine the surface and subsurface. Data acquisition will use instruments carried aboard balloons flying at nominal altitude of about 7 km over the Martian surface. The MAP balloons will take high- and medium-resolution photographs of Mars, sound its surface with radar, and provide tracking data to chart its winds. Mars vehicle design is based on the fourth-generation NTP, NEP, SEP vehicle set that provides a solid database for determining transportation system costs. Interference analysis and 3D image generation are performed using manual system sizing and sketching in conjunction with precise CAD modeling.

  7. DAVIS: A direct algorithm for velocity-map imaging system

    NASA Astrophysics Data System (ADS)

    Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.

    2018-05-01

    In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
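
    The extraction of the angle-correlated information can be sketched by expanding the intensity in a thin annulus of the image in Legendre polynomials of cos(theta) via least squares. The sketch below shows only that expansion step, with hypothetical geometry; it is not the full DAVIS fit of a model 3D distribution's 2D projection.

```python
import numpy as np
from numpy.polynomial import legendre

def angular_legendre_coeffs(image, center, radius, max_order=4, dr=1.0):
    """Expand I(theta) at a given radius in Legendre polynomials P_n(cos theta).

    Returns the coefficients c_n of I(theta) ~ sum_n c_n P_n(cos theta),
    estimated by least squares from the pixels in a thin annulus."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    dx, dy = x - center[0], y - center[1]
    r = np.hypot(dx, dy)
    ring = np.abs(r - radius) < dr                  # thin annulus of pixels
    cos_t = dy[ring] / np.maximum(r[ring], 1e-9)    # polar axis along +y (assumed)
    A = legendre.legvander(cos_t, max_order)        # design matrix P_0..P_n
    coeffs, *_ = np.linalg.lstsq(A, image[ring], rcond=None)
    return coeffs

# usage: synthetic ring with a cos^2-like angular distribution
yy, xx = np.mgrid[0:201, 0:201]
rr = np.hypot(xx - 100, yy - 100)
ct = (yy - 100) / np.maximum(rr, 1e-9)
img = np.exp(-(rr - 60) ** 2 / 8.0) * (1 + ct ** 2)
print(angular_legendre_coeffs(img, (100, 100), 60))
```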

  8. A conceptual model for quantifying connectivity using graph theory and cellular (per-pixel) approach

    NASA Astrophysics Data System (ADS)

    Singh, Manudeo; Sinha, Rajiv; Tandon, Sampat K.

    2016-04-01

    The concept of connectivity is being increasingly used for understanding hydro-geomorphic processes at all spatio-temporal scales. Connectivity is defined as the potential for energy and material flux (water, sediments, nutrients, heat, etc.) to navigate within or between the landscape systems and has two components, structural connectivity and dynamic connectivity. Structural connectivity is defined by the spatially connected features (physical linkages) through which energy and materials flow. Dynamic connectivity is a process defined connectivity component. These two connectivity components also interact with each other by forming a feedback system. This study attempts to explore a method to quantify structural and dynamic connectivity. In fluvial transport systems, sediment and water can flow in either a diffused manner or in a channelized way. At all the scales, hydrological and sediment fluxes can be tracked using a cellular (per-pixel) approach and can be quantified using graphical approach. The material flux, slope and LULC (Land Use Land Cover) weightage factors of a pixel together determine if it will contribute towards connectivity of the landscape/system. In a graphical approach, all the contributing pixels will form a node at their centroid and this node will be connected to the next 'down-node' via a directed edge with 'least cost path'. The length of the edge will depend on the desired spatial scale and its path direction will depend on the traversed pixel's slope and the LULC (weightage) factors. The weightage factors will lie in-between 0 to 1. This value approaches 1 for the LULC factors which promote connectivity. For example, in terms of sediment connectivity, the weightage could be RUSLE (Revised Universal Soil Loss Equation) C-factors with bare unconsolidated surfaces having values close to 1. This method is best suited for areas with low slopes, where LULC can be a deciding as well as dominating factor. The degree of connectivity and its pathways will show changes under different LULC conditions even if the slope remains the same. The graphical approach provides the statistics of connected and disconnected graph elements (edges, nodes) and graph components, thereby allowing the quantification of structural connectivity. This approach also quantifies the dynamic connectivity by allowing the measurement of the fluxes (e.g. via hydrographs or sedimentographs) at any node as well as at any system outlet. The contribution of any sub-system can be understood by removing the remaining sub-systems which can be conveniently achieved by masking associated graph elements.
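
    A toy sketch of the per-pixel graph construction is given below using networkx: each contributing pixel becomes a node at its centroid and sends a directed edge to its steepest-descent neighbour, carrying its LULC weight in [0, 1]. The steepest-descent rule and the synthetic inputs are simplifying assumptions standing in for the least-cost-path routing described above.

```python
import numpy as np
import networkx as nx

def build_connectivity_graph(dem, lulc_weight, weight_threshold=0.0):
    """Directed per-pixel graph: each contributing pixel (node at its centroid)
    sends an edge to its lowest 8-neighbour, carrying the pixel's LULC weight."""
    g = nx.DiGraph()
    h, w = dem.shape
    for i in range(h):
        for j in range(w):
            if lulc_weight[i, j] <= weight_threshold:
                continue                      # pixel does not promote connectivity
            best, target = 0.0, None
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di, dj) == (0, 0) or not (0 <= ni < h and 0 <= nj < w):
                        continue
                    drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                    if drop > best:
                        best, target = drop, (ni, nj)
            if target is not None:            # a downslope neighbour exists
                g.add_edge((i, j), target, weight=lulc_weight[i, j], slope=best)
    return g

# usage with a tiny synthetic DEM and uniform LULC weights
dem = np.add.outer(np.arange(5), np.arange(5)).astype(float)[::-1]
w = np.full((5, 5), 0.8)
g = build_connectivity_graph(dem, w)
print(g.number_of_nodes(), g.number_of_edges(),
      len(list(nx.weakly_connected_components(g))))
```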

  9. Advanced optical coatings for astronomical instrumentation

    NASA Astrophysics Data System (ADS)

    Pradal, Fabien; Leplan, Hervé; Vayssade, Hervé; Geyl, Roland

    2016-07-01

    Safran Reosc has recently worked on and advanced various thin-film technologies for: large mirrors with low-stress, stable coatings; large lens elements with strong curvature and precise layer specifications; large filters with highly uniform spectral response; IR coatings with low stress and excellent resistance to cryogenic environments, for the NIR to LWIR domains; and pixelated coatings. Results are presented and discussed on the basis of several examples.

  10. Mars Orbiter Camera Views the 'Face on Mars' - Calibrated, contrast enhanced, filtered,

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer database Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility by 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.

    The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making its resolution ten times higher than that of the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. Processing: Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps:

    The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated). This removes the vertical streaking.

    The contrast and brightness of the image was adjusted, and 'filters' were applied to enhance detail at several scales.

    The image was then geometrically warped to match the computed position information for a Mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image, for the same reason Greenland looks larger than Africa on a Mercator map of the Earth.

    A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately.

    See PIA01440-1442 for additional processing steps. Also see PIA01236 for the raw image.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  11. Mars Orbiter Camera Views the 'Face on Mars' - Calibrated, contrast enhanced, filtered

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer database Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility by 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.

    The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making its resolution ten times higher than that of the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. Processing: Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps:

    The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated). This removes the vertical streaking.

    The contrast and brightness of the image was adjusted, and 'filters' were applied to enhance detail at several scales.

    The image was then geometrically warped to match the computed position information for a Mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image, for the same reason Greenland looks larger than Africa on a Mercator map of the Earth.

    A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately.

    See PIA01441-1442 for additional processing steps. Also see PIA01236 for the raw image.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  12. Automated design of infrared digital metamaterials by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sugino, Yuya; Ishikawa, Atsushi; Hayashi, Yasuhiko; Tsuruta, Kenji

    2017-08-01

    We demonstrate automatic design of infrared (IR) metamaterials using a genetic algorithm (GA) and experimentally characterize their IR properties. To implement the automated design scheme of the metamaterial structures, we adopt a digital metamaterial consisting of 7 × 7 Au nano-pixels, each with an area of 200 nm × 200 nm, whose placements are coded as binary genes in the GA optimization process. The GA, combined with three-dimensional (3D) finite element method (FEM) simulation, is developed and applied to automatically construct a digital metamaterial exhibiting pronounced plasmonic resonances at the target IR frequencies. Based on the numerical results, the metamaterials are fabricated on a Si substrate over an area of 1 mm × 1 mm using EB lithography, Cr/Au (2/20 nm) deposition, and a liftoff process. In the FT-IR measurements, pronounced plasmonic responses of each metamaterial are clearly observed near the targeted frequencies, even though the synthesized pixel arrangements of the metamaterials appear random. The corresponding numerical simulations reveal the important resonant behavior of each pixel and their hybridized systems. Our approach is fully computer-aided, without manual intervention, thus paving the way toward novel device designs for next-generation plasmonic applications.
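    The binary-gene encoding can be sketched as follows. This is a toy illustration only: the fitness function stands in for the 3D FEM scoring of plasmonic resonance at the target frequency, and the GA operators (truncation selection, one-point crossover, bit-flip mutation) are generic choices, not necessarily those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PIX, POP = 7 * 7, 32          # 7x7 pixel gene, population size

def fitness(gene):
    # Placeholder for the FEM score; here we simply favour ~50% metal fill.
    return -abs(int(gene.sum()) - 24)

pop = rng.integers(0, 2, size=(POP, N_PIX))
for generation in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]          # keep the best half
    cuts = rng.integers(1, N_PIX, size=POP // 2)
    children = np.array([
        np.concatenate((parents[i][:c], parents[(i + 1) % len(parents)][c:]))
        for i, c in enumerate(cuts)                         # one-point crossover
    ])
    flip = rng.random(children.shape) < 0.02                # bit-flip mutation
    children[flip] ^= 1
    pop = np.vstack((parents, children))

best = pop[int(np.argmax([fitness(g) for g in pop]))].reshape(7, 7)
```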

  13. Ultrathin phase-change coatings on metals for electrothermally tunable colors

    NASA Astrophysics Data System (ADS)

    Bakan, Gokhan; Ayas, Sencer; Saidzoda, Tohir; Celebi, Kemal; Dana, Aykutlu

    2016-08-01

    Metal surfaces coated with ultrathin lossy dielectrics enable color generation through strong interference effects in the visible spectrum. Using a phase-change thin film as the coating layer enables tuning of the generated color by crystallization or re-amorphization. Here, we study the optical response of surfaces consisting of thin (5-40 nm) phase-changing Ge2Sb2Te5 (GST) films on metal, primarily Al, layers. A color scale ranging from yellow to red to blue, obtained using different thicknesses of as-deposited amorphous GST layers, turns dim gray upon annealing-induced crystallization of the GST. Moreover, when a relatively thick (>100 nm), lossless dielectric film is introduced between the GST and Al layers, optical cavity modes are observed, offering a rich color gamut at the expense of the angle-independent optical response. Finally, a color pixel structure is proposed for ultrahigh-resolution (pixel size: 5 × 5 μm2), non-volatile displays, in which the metal layer that acts as a mirror also serves as a heater element. Electrothermal simulations of such a pixel structure suggest that crystallization and re-amorphization of the GST layer using electrical pulses are possible for electrothermal color tuning.

  14. First Tests of Prototype SCUBA-2 Superconducting Bolometer Array

    NASA Astrophysics Data System (ADS)

    Woodcraft, Adam L.; Ade, Peter A. R.; Bintley, Dan; Hunt, Cynthia L.; Sudiwala, Rashmi V.; Hilton, Gene C.; Irwin, Kent D.; Reintsema, Carl D.; Audley, Michael D.; Holland, Wayne S.; MacIntosh, Mike

    2006-09-01

    We present results of the first tests on a 1280 pixel superconducting bolometer array, a prototype for SCUBA-2, a sub-mm camera being built for the James Clerk Maxwell Telescope in Hawaii. The bolometers are TES (transition edge sensor) detectors; these take advantage of the large variation of resistance with temperature through the superconducting transition. To keep the number of wires reasonable, a multiplexed read-out is used. Each pixel is read out through an individual DC SQUID; room temperature electronics switch between rows in the array by biasing the appropriate SQUIDs in turn. Arrays of 100 SQUIDs in series for each column then amplify the output. Unlike previous TES arrays, the multiplexing elements are located beneath each pixel, making large arrays possible, but construction more challenging. The detectors are constructed from Mo/Cu bi-layers; this technique enables the transition temperature to be tuned using the proximity effect by choosing the thickness of the normal and superconducting materials. To achieve the required performance, the detectors are operated at a temperature of approximately 120 mK. We describe the results of a basic characterisation of the array, demonstrating that it is fully operational, and give the results of signal to noise measurements.

  15. Contribution of non-negative matrix factorization to the classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an unavoidable tool for better managing our environment, generally by producing land-cover maps using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to perform the recognition of the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated images), using HRV SPOT images from the Oran area, Algeria. These images are then used as inputs of a supervised classifier integrating textural information. The results of classification of these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of the initial (i.e. non-separated) images. These results show the value of NMF as an attractive pre-processing step for classification of multispectral remote sensing imagery.
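    As an illustration of the pre-processing idea, the sketch below factorizes a (pixels × bands) matrix with scikit-learn's NMF. The paper additionally imposes a sparse-coding constraint, which is omitted here, and the array names and parameter values are hypothetical.

```python
import numpy as np
from sklearn.decomposition import NMF

def unmix(cube, n_endmembers=4):
    """cube: non-negative multispectral array of shape (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)                       # one row per pixel
    model = NMF(n_components=n_endmembers, init="nndsvd", max_iter=500)
    abundances = model.fit_transform(X)               # per-pixel mixing coefficients
    endmembers = model.components_                    # estimated pure spectra
    # The abundance maps can then be fed to a supervised classifier as new bands.
    return abundances.reshape(rows, cols, n_endmembers), endmembers
```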

  16. Chromatic Modulator for High Resolution CCD or APS Devices

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.

  17. Development of Position-Sensitive Magnetic Calorimeters for X-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Bandler, SImon; Stevenson, Thomas; Hsieh, Wen-Ting

    2011-01-01

    Metallic magnetic calorimeters (MMC) are one of the most promising devices to provide the very high energy resolution needed for future astronomical x-ray spectroscopy. MMC detectors can be built into large detector arrays having thousands of pixels. Position-sensitive magnetic (PoSM) microcalorimeters consist of multiple absorbers thermally coupled to one magnetic microcalorimeter. Each absorber element has a different thermal coupling to the MMC, resulting in a distribution of different pulse shapes and enabling position discrimination between the absorber elements. PoSMs therefore achieve a large focal-plane area with a smaller number of readout channels without compromising spatial sampling. Excellent performance of PoSMs was achieved by optimizing the designs of key parameters such as the thermal conductance among the absorbers, magnetic sensor, and heat sink, as well as the absorber heat capacities. Microfabrication techniques were developed to construct four-absorber PoSMs, in which each absorber consists of a two-layer composite of bismuth and gold. The energy resolution (FWHM, full width at half maximum) was measured to be better than 5 eV for 6 keV x-rays for all four absorbers. Position determination was demonstrated with pulse-shape discrimination, as well as with pulse rise time. X-ray microcalorimeters are usually designed to thermalize as quickly as possible to avoid degradation in energy resolution from position dependence of the pulse shapes. Each pixel consists of an absorber and a temperature sensor, both decoupled from the cold bath through a weak thermal link. Each pixel requires a separate readout channel; for instance, with a SQUID (superconducting quantum interference device). For future astronomy missions where thousands to millions of resolution elements are required, having an individual SQUID readout channel for each pixel becomes difficult. One route to attaining these goals is a position-sensitive detector in which a large continuous or pixelated array of x-ray absorbers shares a smaller number of temperature sensors. A means of discriminating the signals from different absorber positions, however, needs to be built into the device for each sensor. The design concept for the device is such that the shape of the temperature pulse with time depends on the location of the absorber. This inherent position sensitivity of the signal is then analyzed to determine the location of the event precisely, effectively yielding one device with many sub-pixels. With such devices, the total number of electronic channels required to read out a given number of pixels is significantly reduced. PoSMs were developed that consist of four discrete absorbers connected to a single magnetic sensor. The design concept can be extended to more than four absorbers per sensor. The thermal conductance between the sensor and each absorber is different by design and, consequently, the pulse shapes differ depending upon which absorber receives the x-rays, allowing position discrimination. A magnetic sensor was used in which a paramagnetic Au:Er temperature-sensitive material is located in a weak magnetic field. Deposition of energy from an x-ray photon causes an increase in temperature, which leads to a change of magnetization of the paramagnetic sensor, which is subsequently read out using a low noise dc-SQUID. The PoSM microcalorimeters are fully microfabricated: the Au:Er sensor is located above the meander, with a thin insulation gap in between.
For this position-sensitive device, four electroplated absorbers are thermally linked to the sensor via heat links of different thermal conductance. One pixel is identical to that of a single-pixel design, consisting of an overhanging absorber fabricated directly on top of the sensor; it is therefore very strongly thermally coupled to it. The three other absorbers are supported directly on a silicon-nitride membrane and are thermally coupled to the sensor via Ti (5 nm)/Au (250 nm) metal links. The strength of each link is parameterized by the number of gold squares making up the link. In performance tests, distinctly different pulse shapes were demonstrated experimentally with 6 keV x-rays, clearly showing different rise times for different absorber positions. For the energy resolution measurement, the PoSM was operated at 32 mK with an applied field generated using a persistent current of 50 mA. Over the four pixels, energy resolutions ranging from 4.4 to 4.7 eV were demonstrated.
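    The rise-time discrimination can be illustrated with a simple classifier that assigns each pulse to the absorber whose nominal rise time is closest. The sketch below is ours; the nominal values, the 10%-90% definition, and the assumption of a monotonic rising edge are placeholders, not the instrument's calibration.

```python
import numpy as np

# Hypothetical nominal 10%-90% rise times (microseconds) for the four absorbers.
NOMINAL_RISE_US = {"A1": 5.0, "A2": 20.0, "A3": 60.0, "A4": 150.0}

def rise_time(pulse, dt):
    """10%-90% rise time of a baseline-subtracted pulse sampled every dt seconds,
    assuming a monotonic rising edge."""
    peak = pulse.max()
    t10 = np.argmax(pulse >= 0.1 * peak)
    t90 = np.argmax(pulse >= 0.9 * peak)
    return (t90 - t10) * dt

def which_absorber(pulse, dt):
    rt_us = rise_time(pulse, dt) * 1e6
    return min(NOMINAL_RISE_US, key=lambda k: abs(NOMINAL_RISE_US[k] - rt_us))
```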

  18. Physical characterization and performance comparison of active- and passive-pixel CMOS detectors for mammography.

    PubMed

    Elbakri, I A; McIntosh, B J; Rickey, D W

    2009-03-21

    We investigated the physical characteristics of two complementary metal oxide semiconductor (CMOS) mammography detectors. The detectors featured 14-bit image acquisition, 50 μm detector element (del) size and an active area of 5 cm × 5 cm. One detector was a passive-pixel sensor (PPS) with signal amplification performed by an array of amplifiers connected to dels via data lines. The other detector was an active-pixel sensor (APS) with signal amplification performed at each del. Passive-pixel designs have higher read noise due to data line capacitance, and the APS represents an attempt to improve the noise performance of this technology. We evaluated the detectors' resolution by measuring the modulation transfer function (MTF) using a tilted edge. We measured the noise power spectra (NPS) and detective quantum efficiencies (DQE) using mammographic beam conditions specified by the IEC 62220-1-2 standard. Our measurements showed the APS to have much higher gain, slightly higher MTF, and higher NPS. The MTF of both sensors approached 10% near the Nyquist limit. DQE values near dc frequency were in the range of 55-67%, with the APS sensor DQE lower than the PPS DQE for all frequencies. Our results show that lower read noise specifications in this case do not translate into gains in the imaging performance of the sensor. We postulate that the lower fill factor of the APS is a possible cause for this result.

  19. Advanced processing of CdTe pixel radiation detectors

    NASA Astrophysics Data System (ADS)

    Gädda, A.; Winkler, A.; Ott, J.; Härkönen, J.; Karadzhinova-Ferrer, A.; Koponen, P.; Luukka, P.; Tikkanen, J.; Vähänen, S.

    2017-12-01

    We report a fabrication process for pixel detectors made of bulk cadmium telluride (CdTe) crystals. Prior to processing, the quality and defect density of the CdTe material were characterized by infrared (IR) spectroscopy. The semiconductor detector and Flip-Chip (FC) interconnection processing was carried out in the clean room premises of Micronova Nanofabrication Centre in Espoo, Finland. The chip-scale processes consist of aluminum oxide (Al2O3) low-temperature thermal Atomic Layer Deposition (ALD), titanium tungsten (TiW) metal sputtering depositions, and electroless nickel growth. CdTe crystals of size 10×10×0.5 mm3 were patterned with several photolithography techniques. In this study, gold (Au) was chosen as the material for the wettable Under Bump Metalization (UBM) pads. Indium (In) based solder bumps were grown on PSI46dig readout chips (ROCs) having 4160 pixels within an area of 1 cm2. The CdTe sensor and ROC were hybridized using a low-temperature flip-chip interconnection technique. The In-Au cold-weld bonds successfully connected the two elements. After processing, the detector packages were wire-bonded to the associated readout electronics. The pixel detectors were tested at the premises of the Finnish Radiation Safety Authority (STUK). During the measurement campaign, the modules were tested by exposure to a 1.5 TBq 137Cs source for 8 minutes. At room temperature, we detected a photopeak at 662 keV with an energy resolution of about 2%.

  20. The resolved star formation history of M51a through successive Bayesian marginalization

    NASA Astrophysics Data System (ADS)

    Martínez-García, Eric E.; Bruzual, Gustavo; Magris C., Gladis; González-Lópezlira, Rosa A.

    2018-02-01

    We have obtained the time and space-resolved star formation history (SFH) of M51a (NGC 5194) by fitting Galaxy Evolution Explorer (GALEX), Sloan Digital Sky Survey and near-infrared pixel-by-pixel photometry to a comprehensive library of stellar population synthesis models drawn from the Synthetic Spectral Atlas of Galaxies (SSAG). We fit for each space-resolved element (pixel) an independent model where the SFH is averaged in 137 age bins, each one 100 Myr wide. We used the Bayesian Successive Priors (BSP) algorithm to mitigate the bias in the present-day spatial mass distribution. We test BSP with different prior probability distribution functions (PDFs); this exercise suggests that the best prior PDF is the one concordant with the spatial distribution of the stellar mass as inferred from the near-infrared images. We also demonstrate that varying the implicit prior PDF of the SFH in SSAG does not affect the results. By summing the contributions to the global star formation rate of each pixel, at each age bin, we have assembled the resolved SFH of the whole galaxy. According to these results, the star formation rate of M51a was exponentially increasing for the first 10 Gyr after the big bang, and then turned into an exponentially decreasing function until the present day. Superimposed, we find a main burst of star formation at t ≈ 11.9 Gyr after the big bang.

  1. Modelling and testing the x-ray performance of CCD and CMOS APS detectors using numerical finite element simulations

    NASA Astrophysics Data System (ADS)

    Weatherill, Daniel P.; Stefanov, Konstantin D.; Greig, Thomas A.; Holland, Andrew D.

    2014-07-01

    Pixellated monolithic silicon detectors operated in a photon-counting regime are useful in spectroscopic imaging applications. Since a high energy incident photon may produce many excess free carriers upon absorption, both energy and spatial information can be recovered by resolving each interaction event. The performance of these devices in terms of both the energy and spatial resolution is in large part determined by the amount of diffusion which occurs during the collection of the charge cloud by the pixels. Past efforts to predict the X-ray performance of imaging sensors have used either analytical solutions to the diffusion equation or simplified Monte Carlo electron transport models. These methods are computationally attractive and highly useful but may be complemented using more physically detailed models based on TCAD simulations of the devices. Here we present initial results from a model which employs a full transient numerical solution of the classical semiconductor equations to model charge collection in device pixels under stimulation from initially Gaussian photogenerated charge clouds, using commercial TCAD software. Realistic device geometries and doping are included. By mapping the pixel response to different initial interaction positions and charge cloud sizes, the charge splitting behaviour of the model sensor under various illuminations and operating conditions is investigated. Experimental validation of the model is presented from an e2v CCD30-11 device under varying substrate bias, illuminated using an Fe-55 source.
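    A useful analytical cross-check of such charge-splitting behaviour (not the TCAD model itself) is the fraction of a Gaussian charge cloud collected by a square pixel, which follows from the error function:

```python
import numpy as np
from scipy.special import erf

def pixel_fraction(x0, y0, sigma, p):
    """Fraction of a 2D Gaussian charge cloud of width sigma, centred at
    (x0, y0), that falls inside the square pixel [0, p] x [0, p]."""
    fx = 0.5 * (erf((p - x0) / (np.sqrt(2) * sigma)) - erf((0 - x0) / (np.sqrt(2) * sigma)))
    fy = 0.5 * (erf((p - y0) / (np.sqrt(2) * sigma)) - erf((0 - y0) / (np.sqrt(2) * sigma)))
    return fx * fy

# e.g. an event centred exactly on a pixel corner deposits ~25% of its charge
# in each of the four adjoining pixels when sigma is small compared with p.
```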

  2. Optimization of the Performance of Segmented Scintillators for Radiotherapy Imaging through Novel Binning Techniques

    PubMed Central

    El-Mohri, Youcef; Antonuk, Larry E.; Choroszucha, Richard B.; Zhao, Qihua; Jiang, Hao; Liu, Langechuan

    2014-01-01

    Thick, segmented crystalline scintillators have shown increasing promise as replacement x-ray converters for the phosphor screens currently used in active matrix flat-panel imagers (AMFPIs) in radiotherapy, by virtue of providing over an order of magnitude improvement in the DQE. However, element-to-element misalignment in current segmented scintillator prototypes creates a challenge for optimal registration with underlying AMFPI arrays, resulting in degradation of spatial resolution. To overcome this challenge, a methodology involving the use of a relatively high resolution AMFPI array in combination with novel binning techniques is presented. The array, which has a pixel pitch of 0.127 mm, was coupled to prototype segmented scintillators based on BGO, LYSO and CsI:Tl materials, each having a nominal element-to-element pitch of 1.016 mm and thickness of ~1 cm. The AMFPI systems incorporating these prototypes were characterized at a radiotherapy energy of 6 MV in terms of MTF, NPS, DQE, and reconstructed images of a resolution phantom acquired using a cone-beam CT geometry. For each prototype, the application of 8×8 pixel binning to achieve a sampling pitch of 1.016 mm was optimized through use of an alignment metric which minimized misregistration and thereby improved spatial resolution. In addition, the application of alternative binning techniques that exclude the collection of signal near septal walls resulted in further significant improvement in spatial resolution for the BGO and LYSO prototypes, though not for the CsI:Tl prototype due to the large amount of optical cross-talk resulting from significant light spread between scintillator elements in that device. The efficacy of these techniques for improving spatial resolution appears to be enhanced for scintillator materials that exhibit mechanical hardness, high density and high refractive index, such as BGO. Moreover, materials that exhibit these properties as well as offer significantly higher light output than BGO, such as CdWO4, should provide the additional benefit of preserving DQE performance. PMID:24487347
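    The mechanics of offset binning can be sketched as follows; the alignment metric used in the paper is more elaborate, and the offset scan suggested in the closing comment is only a stand-in for it.

```python
import numpy as np

def bin_with_offset(img, factor=8, dx=0, dy=0):
    """Bin a 2D image in factor x factor blocks after shifting the binning
    grid by (dx, dy) pixels, e.g. 8 x 0.127 mm pixels -> 1.016 mm pitch."""
    img = img[dy:, dx:]                           # shift the binning grid
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

# One could scan dx, dy over 0..7 and keep the offset that best aligns block
# boundaries with the scintillator septa (e.g. maximising the contrast of a
# bar-pattern image), as a stand-in for the paper's alignment metric.
```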

  3. SHD digital cinema distribution over a long distance network of Internet2

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takahiro; Shirai, Daisuke; Fujii, Tatsuya; Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    2003-06-01

    We have developed a prototype SHD (Super High Definition) digital cinema distribution system that can store, transmit and display eight-million-pixel motion pictures that have the image quality of a 35-mm film movie. The system contains a video server, a real-time decoder, and a D-ILA projector. Using a gigabit Ethernet link and TCP/IP, the server transmits JPEG2000 compressed motion picture data streams to the decoder at transmission speeds as high as 300 Mbps. The received data streams are decompressed by the decoder, and then projected onto a screen via the projector. With this system, digital cinema contents can be distributed over a wide-area optical gigabit IP network. However, when digital cinema contents are delivered over long distances using a gigabit IP network and TCP, the round-trip time increases and network throughput either stops rising or diminishes. In a long-distance SHD digital cinema transmission experiment performed on the Internet2 network in October 2002, we adopted an enlarged TCP window, multiple TCP connections, and a shaping function to control the volume of transmitted data. As a result, we succeeded in transmitting the SHD digital cinema content data at about 300 Mbps between Chicago and Los Angeles, a distance of more than 3000 km.
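    The motivation for enlarging the TCP window on the long Chicago-Los Angeles path is the bandwidth-delay product: steady-state TCP throughput is bounded by the window size divided by the round-trip time. A back-of-the-envelope illustration follows; the RTT and window values are assumed, not measured figures from the experiment.

```python
# Assumed values for illustration only.
rtt = 0.060                      # ~60 ms coast-to-coast round-trip time
default_window = 64 * 1024       # bytes, a typical default TCP window
target_rate = 300e6 / 8          # 300 Mbps expressed in bytes per second

max_rate_default = default_window / rtt      # ~8.7 Mbps: far below 300 Mbps
required_window = target_rate * rtt          # ~2.25 MB window needed per connection
print(max_rate_default * 8 / 1e6, "Mbps with the default window")
print(required_window / 1e6, "MB window needed for 300 Mbps on one connection")
```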

  4. Mars South Polar Cap "Fingerprint" Terrain

    NASA Image and Video Library

    2000-04-24

    This picture is illuminated by sunlight from the upper left. Some portions of the martian south polar residual cap have long, somewhat curved troughs instead of circular pits. These appear to form in a layer of material that may be different than that in which "swiss cheese" circles and pits form, and none of these features has any analog in the north polar cap or elsewhere on Mars. This picture shows the "fingerprint" terrain as a series of long, narrow depressions considered to have formed by collapse and widening by sublimation of ice. Unlike the north polar cap, the south polar region stays cold enough in summer to retain frozen carbon dioxide. Viking Orbiter observations during the late 1970s showed that very little water vapor comes off the south polar cap during summer, indicating that any frozen water that might be there remains solid throughout the year. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image was obtained in early southern spring on August 4, 1999. It shows an area 3 x 5 kilometers (1.9 x 3.1 miles) at a resolution of about 7.3 meters (24 ft) per pixel. Located near 86.0°S, 53.9°W. http://photojournal.jpl.nasa.gov/catalog/PIA02373

  5. Viewing-zone control of integral imaging display using a directional projection and elemental image resizing method.

    PubMed

    Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam

    2013-10-01

    Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size of microlens pitch causes an EI mismatch at the EI plane. In this method, EIs are generated computationally using a newly introduced algorithm: the directional elemental image generation and resizing algorithm considering the directional projection geometry of each pixel as well as an EI resizing method to prevent the EI mismatch. Generated EIs are projected as a collimated projection beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.

  6. Laser fabrication of diffractive optical elements based on detour-phase computer-generated holograms for two-dimensional Airy beams.

    PubMed

    Călin, Bogdan-Ştefăniţă; Preda, Liliana; Jipa, Florin; Zamfirescu, Marian

    2018-02-20

    We have designed, fabricated, and tested an amplitude diffractive optical element for the generation of two-dimensional (2D) Airy beams. The design is based on a detour-phase computer-generated hologram. Using laser ablation of metallic films, we obtained a 2 mm × 2 mm diffractive optical element with a pixel of 5 μm × 5 μm and demonstrated a fast, cheap, and reliable fabrication process. This device can modulate 2D Airy beams, or it can be used as a UV lithography mask to fabricate a series of phase holograms for higher energy efficiency. Tests against the design premise, together with an analysis of the transverse profile and propagation, are presented.

  7. Design, Fabrication, and Testing of Lumped Element Kinetic inductance Detectors for 3 mm CMB Observations

    NASA Technical Reports Server (NTRS)

    Lowitz, Amy E.; Brown, Ari David; Stevenson, Thomas R.; Timbie, Peter T.; Wollack, Edward J.

    2014-01-01

    Kinetic inductance detectors (KIDs) are a promising technology for low-noise, highly multiplexable mm- and submm-wave detection. KIDs have a number of advantages over other detector technologies, including passive frequency-domain multiplexing and relatively simple fabrication, which make them an appealing option in the search for cosmic microwave background B-mode anisotropy, but they have suffered from challenges associated with noise control. Here we describe the design and fabrication of a 20-pixel prototype array of lumped element molybdenum KIDs. We show Q, frequency and temperature measurements from the array under dark conditions. We also present evidence for a double superconducting gap in molybdenum.

  8. Would you hire me? Selfie portrait images perception in a recruitment context

    NASA Astrophysics Data System (ADS)

    Mazza, F.; Da Silva, M. P.; Le Callet, P.

    2014-02-01

    Human content perception has been highlighted as important in multimedia quality evaluation. Recently, aesthetic considerations have become a subject of research in this field. First attempts at aesthetics took into account perceived low-level features, especially those taken from photography theory. However, these proved insufficient to characterize human content perception. More recently, image psychology has started to be considered as a higher cognitive feature impacting user perception. In this paper we follow this idea by introducing social-cognitive elements. Our experiments focus on the influence of different versions of portrait pictures in contexts where they are shown alongside completely unrelated information; this can happen, for example, in social-network interactions between users, where profile pictures appear alongside almost every user action. In particular, we tested this impact on resumes, comparing professional portraits with self-shot pictures. Moreover, as we ran the tests via crowdsourcing, we discuss the use of this methodology for such tests. Our final aim is to analyse the impact of social biases on multimedia aesthetics evaluation and how this bias influences the messages that accompany pictures, as on public online platforms and social networks.

  9. SHERPA: Towards better accessibility of earthquake rupture archives

    NASA Astrophysics Data System (ADS)

    Théo, Yann; Sémo, Emmanuel; Mazet Roux, Gilles; Bossu, Rémy; Kamb, Linus; Frobert, Laurent

    2010-05-01

    Large crustal earthquakes are the subject of extensive field surveys aimed at better understanding the rupture process and its tectonic consequences. After an earthquake, pictures of the rupture can be viewed quite easily on the web. However, once the event gets old, the pictures disappear and can no longer be viewed, a heavy loss for researchers looking for information. Even when available, they are linked to a given survey, and comparisons of the same phenomenon between different earthquakes cannot easily be performed. SHERPA, the Sharing of Earthquake Rupture Pictures Archive, a web application developed at EMSC, aims to fill this void. It aims at making available pictures of past earthquakes and sharing resources while strictly protecting the authors' copyright and keeping the authors in charge of diffusion, to avoid unfair or inappropriate use of the photos. Our application is targeted at scientists and scientists only. Pictures uploaded to SHERPA are marked by a watermark "NOT FOR PUBLICATION" spread across the image, and state the author's name. Authors, and authors only, have the possibility to remove this mark should they want their work to enter the public domain. If a user sees a picture he/she would like to use, he/she can put this picture in his/her cart. After validation of this cart, a request (stating the name and purpose of the requestor) will be sent to the author(s), asking them to share the picture(s). If an author accepts this request, the requestor will be given authorization to access a protected folder and download the unmarked picture. Without the author's explicit consent, no picture will ever be accessible to anyone. We want to state this point very clearly because ownership and copyright protection are essential to the SHERPA project. Uploading pictures is quick and easy: once registered, you can very simply upload pictures that can then be geolocated using a Google map plugged into the web site. If the camera is equipped with a GPS, the software will automatically retrieve the location from the EXIF data. Pictures can be linked to an earthquake and be described through a system of tags, making them searchable in the database. Once uploaded, pictures become available for browsing by any visitor. Using the tags, visitors can search the database for pictures of the same phenomenon in several events, or extract the ones from a given region, or a certain type of faulting. The selected pictures can be viewed on a map and in a carousel. By providing such a service we hope to contribute to better accessibility of the pictures taken during field surveys and thereby to improved earthquake documentation, which remains a key element for our field of research. http://sherpa.emsc-csem.org/

  10. [Evaluation standards and application for photography of schistosomiasis control theme].

    PubMed

    Chun-Li, Cao; Qing-Biao, Hong; Jing-Ping, Guo; Fang, Liu; Tian-Ping, Wang; Jian-Bin, Liu; Lin, Chen; Hao, Wang; You-Sheng, Liang; Jia-Gang, Guo

    2018-02-26

    To establish and apply evaluation standards for photography on the schistosomiasis control theme, so as to offer scientific advice for enriching the health-information carriers of schistosomiasis control. Through literature review and expert consultation, the evaluation standard for photography of the schistosomiasis control theme was formulated. The themes were divided into 4 categories: new construction, natural scenery, working scenes, and control achievements. The evaluation criteria for the theme photography were divided into theme (60%), photographic composition (15%), focus and exposure (15%), and color saturation (10%). A total of 495 pictures (sets) from 59 units and 77 authors were collected from schistosomiasis-endemic areas nationwide. After a first-step screening and second-step evaluation, prizes were awarded in 3 theme groups (control achievements and new construction, working scenes, and natural scenery): 6 first-prize, 12 second-prize, 18 third-prize, and 20 honorable-mention pictures. The evaluation standards for theme photography should take into consideration both the technical elements of photography and the work specifications of schistosomiasis prevention and control. To improve the quality of records used to publicize schistosomiasis control and to better guide correct publicity, photography training and guidance for professionals should be carried out.

  11. GRAMPS: a graphics language interpreter for real-time, interactive, three-dimensional picture editing and animation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Donnell, T.J.; Olson, A.J.

    1981-08-01

    GRAMPS, a graphics language interpreter, has been developed in FORTRAN 77 to be used in conjunction with an interactive vector display list processor (Evans and Sutherland Multi-Picture-System). Several features of the language make it very useful and convenient for real-time scene construction, manipulation and animation. The GRAMPS language syntax allows natural interaction with scene elements as well as easy, interactive assignment of graphics input devices. GRAMPS facilitates the creation, manipulation and copying of complex nested picture structures. The language has a powerful macro feature that enables new graphics commands to be developed and incorporated interactively. Animation may be achieved in GRAMPS by two different, yet mutually compatible means. Picture structures may contain framed data, which consist of a sequence of fixed objects. These structures may be displayed sequentially to give a traditional frame-animation effect. In addition, transformation information on picture structures may be saved at any time in the form of new macro commands that will transform these structures from one saved state to another in a specified number of steps, yielding an interpolated transformation-animation effect. An overview of the GRAMPS command structure is given and several examples of application of the language to molecular modeling and animation are presented.

  12. Comparison of Objective Measures for Predicting Perceptual Balance and Visual Aesthetic Preference

    PubMed Central

    Hübner, Ronald; Fillinger, Martin G.

    2016-01-01

    The aesthetic appreciation of a picture largely depends on the perceptual balance of its elements. The underlying mental mechanisms of this relation, however, are still poorly understood. For investigating these mechanisms, objective measures of balance have been constructed, such as the Assessment of Preference for Balance (APB) score of Wilson and Chatterjee (2005). In the present study we examined the APB measure and compared it to an alternative measure (DCM; Deviation of the Center of “Mass”) that represents the center of perceptual “mass” in a picture and its deviation from the geometric center. Additionally, we applied measures of homogeneity and of mirror symmetry. In a first experiment participants had to rate the balance and symmetry of simple pictures, whereas in a second experiment different participants rated their preference (liking) for these pictures. In a third experiment participants rated the balance as well as the preference of new pictures. Altogether, the results show that DCM scores accounted better for balance ratings than APB scores, whereas the opposite held with respect to preference. Detailed analyses revealed that these results were due to the fact that aesthetic preference does not only depend on balance but also on homogeneity, and that the APB measure takes this feature into account. PMID:27014143
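    A DCM-style score can be sketched as below; this is our reading of the measure (an intensity-weighted centre of "mass" compared against the geometric centre and normalised by the half-diagonal), not the authors' exact formula.

```python
import numpy as np

def dcm_score(gray):
    """gray: 2D array of pixel intensities used as a stand-in for perceptual
    'mass'.  Returns 0 for a perfectly centred mass distribution and values
    approaching 1 as the weighted centre drifts towards a corner."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = gray.sum()
    cy, cx = (ys * gray).sum() / total, (xs * gray).sum() / total
    gy, gx = (h - 1) / 2, (w - 1) / 2          # geometric centre
    return np.hypot(cy - gy, cx - gx) / np.hypot(gy, gx)
```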

  13. Multiple Representations and Connections with the Sierpinski Triangle

    ERIC Educational Resources Information Center

    Kirwan, J. Vince; Tobias, Jennifer M.

    2014-01-01

    To understand multiple representations in algebra, students must be able to describe relationships through a variety of formats, such as graphs, tables, pictures, and equations. NCTM indicates that varied representations are "essential elements in supporting students' understanding of mathematical concepts and relationships" (NCTM…

  14. Can You Picture That?

    ERIC Educational Resources Information Center

    Damico, Julie

    2014-01-01

    This article describes the Exploring Experimental Design lesson, which uses a Pictionary-style approach to introduce the elements of the third science and engineering practice: Planning and Carrying Out Investigations, found in "A Framework for K-12 Science Education" (NRC 2012) and the "Next Generation Science Standards"…

  15. Eliminating "Hotspots" in Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.
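    One common form of such compensation (a sketch under our own assumptions; the NASA algorithm is not described here in enough detail to reproduce) replaces each known-defective element with the median of its valid neighbours:

```python
import numpy as np

def suppress_hotspots(img, bad_mask):
    """Replace each defective pixel (bad_mask == True) with the median of
    its valid 3x3 neighbours, leaving all other pixels unchanged."""
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r, c in zip(*np.nonzero(bad_mask)):
        neigh = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols and not bad_mask[rr, cc]:
                    neigh.append(img[rr, cc])
        if neigh:
            out[r, c] = np.median(neigh)
    return out
```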

  16. Site investigation of bridges along the I-24 in Western Kentucky.

    DOT National Transportation Integrated Search

    2006-09-01

    Determination of the seismic risk of the I-24 bridges requires evaluating the current condition of all individual elements of the bridges. All bridges along the I-24 were visually inspected, pictured, and the records were stored in a database. Data o...

  17. Panoramic Scanning: Essential Element of Higher-Order Thought.

    ERIC Educational Resources Information Center

    Ambrose, Don

    1996-01-01

    Panoramic scanning is the capacity to perceive, interpret, and appreciate complex problems from a big-picture vantage point. Barriers to panoramic scanning (sensory bombardment, superficial polarized thought, and tunnel vision) and facilitators (broad interests and knowledge, pattern finding, and connection-making skills) are identified. Educators…

  18. Telidon Videotex presentation level protocol: Augmented picture description instructions

    NASA Astrophysics Data System (ADS)

    Obrien, C. D.; Brown, H. G.; Smirle, J. C.; Lum, Y. F.; Kukulka, J. Z.; Kwan, A.

    1982-02-01

    The Telidon Videotex system is a method by which graphic and textual information and transactional services can be accessed from information sources by the general public. In order to transmit information to a Telidon terminal at minimum bandwidth, and in a manner independent of the type of communications channel, a coding scheme was devised that permits the encoding of a picture into the geometric drawing elements which compose it. These picture description instructions form an alphageometric coding model and are based on the primitives POINT, LINE, ARC, RECTANGLE, POLYGON, and INCREMENT. Text is encoded as ASCII characters along with a supplementary table of accents and special characters. A mosaic shape table is included for compatibility. A detailed specification of the coding scheme and a description of the principles which make it independent of the communications channel and display hardware are provided.

  19. Demonstration of a real-time implementation of the ICVision holographic stereogram display

    NASA Astrophysics Data System (ADS)

    Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel

    1995-07-01

    There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax, without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits is created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interoccular distance are filled with stereoscopic pair images, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves 1 eye pupil diameter (approximately mm), thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with a diffraction grating that is exactly the one required to fill an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. This paper discusses the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings, and the AMLCD modulates the light diffracted by the gratings.

  20. Jovian Tempest

    NASA Image and Video Library

    2017-11-16

    This color-enhanced image of a massive, raging storm in Jupiter's northern hemisphere was captured by NASA's Juno spacecraft during its ninth close flyby of the gas giant planet. The image was taken on Oct. 24, 2017 at 10:32 a.m. PDT (1:32 p.m. EDT). At the time the image was taken, the spacecraft was about 6,281 miles (10,108 kilometers) from the tops of the clouds of Jupiter at a latitude of 41.84 degrees. The spatial scale in this image is 4.2 miles/pixel (6.7 kilometers/pixel). The storm is rotating counter-clockwise with a wide range of cloud altitudes. The darker clouds are expected to be deeper in the atmosphere than the brightest clouds. Within some of the bright "arms" of this storm, smaller clouds and banks of clouds can be seen, some of which are casting shadows to the right side of this picture (sunlight is coming from the left). The bright clouds and their shadows range from approximately 4 to 8 miles (7 to 12 kilometers) in both widths and lengths. These appear similar to the small clouds in other bright regions Juno has detected and are expected to be updrafts of ammonia ice crystals possibly mixed with water ice. Citizen scientists Gerald Eichstädt and Seán Doran processed this image using data from the JunoCam imager. https://photojournal.jpl.nasa.gov/catalog/PIA21971

  1. Maia Mapper: high definition XRF imaging in the lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Chris G.; Kirkham, R.; Moorhead, G. F.

    Here, Maia Mapper is a laboratory μXRF mapping system for efficient elemental imaging of drill core sections serving minerals research and industrial applications. It targets intermediate spatial scales, with imaging of up to ~80 M pixels over a 500×150 mm2 sample area. It brings together (i) the Maia detector and imaging system, with its large solid-angle, event-mode operation, millisecond pixel transit times in fly-scan mode and real-time spectral deconvolution and imaging, (ii) the high brightness MetalJet D2 liquid metal micro-focus X-ray source from Excillum, (iii) an efficient XOS polycapillary lens with a flux gain ~15,900 at 21 keV into a ~32 μm focus, and (iv) a sample scanning stage engineered for standard drill-core sections. Count-rates up to ~3 M/s are observed on drill core samples with low dead-time up to ~1.5%. Automated scans are executed in sequence with display of deconvoluted element component images accumulated in real-time in the Maia detector. Application images on drill core and polished rock slabs illustrate Maia Mapper capabilities as part of the analytical workflow of the Advanced Resource Characterisation Facility, which spans spatial dimensions from ore deposit to atomic scales.

  2. Tumor segmentation of multi-echo MR T2-weighted images with morphological operators

    NASA Astrophysics Data System (ADS)

    Torres, W.; Martín-Landrove, M.; Paluszny, M.; Figueroa, G.; Padilla, G.

    2009-02-01

    In the present work an automatic brain tumor segmentation procedure based on mathematical morphology is proposed. The approach considers sequences of eight multi-echo MR T2-weighted images. The relaxation time T2 characterizes the relaxation of water protons in the brain tissue: white matter, gray matter, cerebrospinal fluid (CSF) or pathological tissue. The image data are initially regularized by the application of a log-convex filter in order to adjust their geometrical properties to those of noiseless data, which exhibit monotonically decreasing convex behavior. The regularized data are then analyzed by means of an 8-dimensional morphological eccentricity filter. In a first stage, the filter was used for the spatial homogenization of the tissues in the image, replacing each pixel by the most representative pixel within its structuring element, i.e. the one which exhibits the minimum total distance to all members in the structuring element. On the filtered images, the relaxation time T2 is estimated by means of a least-squares regression algorithm and the histogram of T2 is determined. The T2 histogram was partitioned using the watershed morphological operator; relaxation time classes were established and used for tissue classification and segmentation of the image. The method was validated on 15 sets of MRI data with excellent results.
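    The eccentricity-filter step can be sketched as follows: each pixel is an 8-component vector (one value per echo), and within a 3×3 structuring element it is replaced by the member whose total distance to all other members is smallest. This brute-force illustration is ours and ignores the log-convex regularization stage.

```python
import numpy as np

def eccentricity_filter(stack):
    """stack: array of shape (rows, cols, n_echoes).  Each interior pixel is
    replaced by the most 'central' member of its 3x3 neighbourhood, i.e. the
    one with the minimum summed Euclidean distance to all other members."""
    rows, cols, _ = stack.shape
    out = stack.copy()
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = stack[r - 1:r + 2, c - 1:c + 2].reshape(9, -1)
            d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
            out[r, c] = window[d.sum(axis=1).argmin()]
    return out
```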

  3. Maia Mapper: high definition XRF imaging in the lab

    DOE PAGES

    Ryan, Chris G.; Kirkham, R.; Moorhead, G. F.; ...

    2018-03-13

    Here, Maia Mapper is a laboratory μXRF mapping system for efficient elemental imaging of drill core sections serving minerals research and industrial applications. It targets intermediate spatial scales, with imaging of up to ~80 M pixels over a 500×150 mm2 sample area. It brings together (i) the Maia detector and imaging system, with its large solid-angle, event-mode operation, millisecond pixel transit times in fly-scan mode and real-time spectral deconvolution and imaging, (ii) the high brightness MetalJet D2 liquid metal micro-focus X-ray source from Excillum, (iii) an efficient XOS polycapillary lens with a flux gain ~15,900 at 21 keV into a ~32 μm focus, and (iv) a sample scanning stage engineered for standard drill-core sections. Count-rates up to ~3 M/s are observed on drill core samples with low dead-time up to ~1.5%. Automated scans are executed in sequence with display of deconvoluted element component images accumulated in real-time in the Maia detector. Application images on drill core and polished rock slabs illustrate Maia Mapper capabilities as part of the analytical workflow of the Advanced Resource Characterisation Facility, which spans spatial dimensions from ore deposit to atomic scales.

  4. Maia Mapper: high definition XRF imaging in the lab

    NASA Astrophysics Data System (ADS)

    Ryan, C. G.; Kirkham, R.; Moorhead, G. F.; Parry, D.; Jensen, M.; Faulks, A.; Hogan, S.; Dunn, P. A.; Dodanwela, R.; Fisher, L. A.; Pearce, M.; Siddons, D. P.; Kuczewski, A.; Lundström, U.; Trolliet, A.; Gao, N.

    2018-03-01

    Maia Mapper is a laboratory μXRF mapping system for efficient elemental imaging of drill core sections serving minerals research and industrial applications. It targets intermediate spatial scales, with imaging of up to ~80 M pixels over a 500×150 mm2 sample area. It brings together (i) the Maia detector and imaging system, with its large solid-angle, event-mode operation, millisecond pixel transit times in fly-scan mode and real-time spectral deconvolution and imaging, (ii) the high brightness MetalJet D2 liquid metal micro-focus X-ray source from Excillum, and (iii) an efficient XOS polycapillary lens with a flux gain ~15,900 at 21 keV into a ~32 μm focus, and (iv) a sample scanning stage engineered for standard drill-core sections. Count-rates up to ~3 M/s are observed on drill core samples with low dead-time up to ~1.5%. Automated scans are executed in sequence with display of deconvoluted element component images accumulated in real-time in the Maia detector. Application images on drill core and polished rock slabs illustrate Maia Mapper capabilities as part of the analytical workflow of the Advanced Resource Characterisation Facility, which spans spatial dimensions from ore deposit to atomic scales.

  5. Active pixel as dosimetric device for interventional radiology

    NASA Astrophysics Data System (ADS)

    Servoli, L.; Baldaccini, F.; Biasini, M.; Checcucci, B.; Chiocchini, S.; Cicioni, R.; Conti, E.; Di Lorenzo, R.; Dipilato, A. C.; Esposito, A.; Fanó, L.; Paolucci, M.; Passeri, D.; Pentiricci, A.; Placidi, P.

    2013-08-01

    Interventional Radiology (IR) is a subspecialty of radiology comprising all minimally invasive diagnostic and therapeutic procedures performed using radiological devices to obtain image guidance. The interventional procedures are potentially harmful for interventional radiologists and medical staff due to X-ray scattering from the patient's body. The characteristic energy range of the scattered photons spans a few tens of keV. In this work we present a proposal for a new X-ray sensing element in the energy range of interest for IR procedures. The sensing element will then be assembled into a dosimeter prototype capable of real-time measurement, packaged in a small form factor, with wireless communication and no external power supply, to be used for individual operator dosimetry in IR procedures. For the sensor, which is the heart of the system, we considered three different Active Pixel Sensors (APS). They have shown good capability as single X-ray photon detectors up to photon energies of several tens of keV. Two dosimetric quantities have been considered: the number of detected photons and the measured energy deposition. Both observables have a linear dependence on the dose, as measured by commercial dosimeters. The uncertainties in the measurement are dominated by statistics and can be pushed to ~5% for all the sensors under test.
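    The two dosimetric observables can be sketched as below; the noise threshold and the linear calibration constants are invented for illustration and would in practice be fitted against a reference dosimeter.

```python
import numpy as np

def frame_observables(frame, threshold=30):
    """Count pixels above a noise threshold (detected photons) and sum their
    signal (a proxy for deposited energy) in one APS frame."""
    hits = frame > threshold
    return int(hits.sum()), float(frame[hits].sum())

def dose_from_counts(n_photons, slope=2.1e-3, offset=0.0):
    """Linear calibration (hypothetical uGy per detected photon)."""
    return slope * n_photons + offset
```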

  6. MOC's Highest Resolution View of Mars Pathfinder Landing Site

    NASA Technical Reports Server (NTRS)

    2000-01-01

    [figure removed for brevity, see original site] (A) Mars Pathfinder site, left: April 1998; right: January 2000.

    [figure removed for brevity, see original site] (B) top: April 1998; bottom: January 2000.

    Can Mars Global Surveyor's 1.5 meter (5 ft) per pixel camera be used to find any evidence as to the fate of the Mars Polar Lander that was lost on December 3, 1999? One way to find out is to look for one of the other Mars landers and determine what, if anything, can be seen. There have been three successful Mars lander missions: Viking 1 (July 1976), Viking 2 (September 1976), and Mars Pathfinder (July 1997). Of these, the location of Mars Pathfinder is known the best because there are several distinct landmarks visible in the lander's images that help in locating the spacecraft. The MGS MOC Operations Team at Malin Space Science Systems has been tasked since mid-December 1999 with looking for the lost Polar Lander. Part of this effort has been to test the capabilities of MOC by taking a picture of the landing site of Mars Pathfinder.

    An attempt to photograph the Pathfinder site was made once before, in April 1998, by turning the entire MGS spacecraft so that the camera could point at the known location of the Mars Pathfinder lander. Turning the MGS spacecraft like this is not a normal operation--it takes considerable planning, and disrupts the on-going, normal acquisition of science data. It took 3 attempts to succeed, but on April 22, 1998, MOC acquired the picture seen on the left side of Figure A, above. The three near-by major landmarks that were visible to the Pathfinder's cameras are labeled here (North Peak, Big Crater, Twin Peaks). It was known at the time that this image was not adequate to see the Pathfinder lander because the camera was not in focus and had a resolution of only 3.3 meters (11 ft) per pixel. In this and all other images shown here, north is up. All views of the 1998 MOC image are illuminated from the lower right, all views of the 2000 MOC image are illuminated from the lower left.

    As part of the Polar Lander search effort, the Mars Pathfinder site was targeted again in December 1999 and January 2000. Like the 1998 attempt, the spacecraft had to be pointed off of its normal, nadir (straight-down) view. Like history repeating itself, it once again took 3 tries before the Pathfinder landing site was hit. The picture on the right side of Figure A, above, shows the new image that was acquired on January 16, 2000. The white box indicates the location shown in Figure B (above, right). The 1000 m scale bar equals 0.62 miles.

    Figure B (above) shows a subsection of both the 1998 image (top, labeled SPO-1-25603) and the 2000 image (bottom, labeled m11-2414) projected at a scale of 3 meters (10 ft) per pixel. At this scale, the differences in camera focus and sunlight illumination angle are apparent, with the January 2000 image being both in focus and having better lighting conditions. In addition, the MGS spacecraft took the 2000 image from a lower altitude than in 1998, thus the image has better spatial resolution overall. The 500 m scale bar is equal to about 547 yards. The white box shows the location of images in Figure C, below.

    [figure removed for brevity, see original site] (C) higher-resolution view; left: April 1998; right: January 2000.

    [figure removed for brevity, see original site] (D) Erroneous, preliminary identification of Mars Pathfinder location in January 2000 image. Subsequent analysis (Figures E & F, below) identified the correct spot.

    The third figure (C, above) again shows portions of the April 1998 image (C, left) and January 2000 image (C, right), only this time they have been enlarged to a resolution of 0.75 meters (2.5 ft) per pixel. The intrinsic resolution of the January 2000 image is 1.5 meters (5 ft), so this is a 200% expanded view of the actual M11-02414 image. The circular features in this and the previous images are impact craters in various states of erosion. Some boulders (dark dots) can be seen near the crater in the lower left corner. The texture that runs diagonally across the scene from upper left toward lower right consists of ridges created by the giant floods that washed through the Pathfinder site from Ares and/or Tiu Vallis many hundreds of millions of years ago. These ridges and the troughs between them were also seen by the Pathfinder lander; their crests often covered with boulders and cobbles (which cannot be seen at the resolution of the MOC image). The 100 m scale bar is equal to 109 yards (which can be compared with a 100 yard U.S. football field). The Mars Pathfinder landing site is located near the center of this view.

    The fourth picture, Figure D (above), shows a feature that was initially thought to be the Mars Pathfinder lander by MOC investigators. This and the following figures point out just how difficult it is to find a lander on the martian surface using the MGS MOC. Figure D was prepared early in the week following receipt of the new MOC image on January 17, 2000, and for several days it was believed that the lander had been found. As the subsequent two figures will show (E, and F, below), this location appears to be in error. How the features were misidentified is discussed below. Both Figure D and Figure F, showing possible locations of the Pathfinder lander in the MOC image, are enlarged by a factor of three over the intrinsic resolution of that image (that is, to a scale of 0.5 meters or about 1 ft, 7 inch per pixel). The right picture in Figure D shows sight-lines to the large horizon features--Big Crater, Twin Peaks, and North Peak--that were derived by the MOC team by looking at the images taken by the lander in 1997. After placing these lines on the overall image, there appeared to be two features close to the intersection of the sight-lines. Based upon the consistency of the size and shape of the lander as illuminated by sunlight in this image, the northern of the two candidate features (the small 'hump' at the center of both left and right pictures) was considered, at the time, to be the most likely. HOWEVER...

    [figure removed for brevity, see original site] (E) Photoclinometry, Topography, and Revised Landing Site Location.

    [figure removed for brevity, see original site] (F) Mars Pathfinder Landing Site; lander not resolved by MOC.

    Later in the week following acquisition of the January 16, 2000, image (and over the following weekend), there was time for additional analysis to determine whether the rounded hump identified earlier in the week (Figure D, above) was, in fact, the Mars Pathfinder lander. A computer program that estimates relative topography in a MOC image from knowledge of the illumination (called 'shape-from-shading' or photoclinometry) was run to determine which parts of the landing site image are depressions, which are hills, and which are flat surfaces. The picture at the left in Figure E (above) shows the photoclinometry results for the area around the Pathfinder lander. The picture at the center of Figure E shows the same photoclinometry results overlain by an inset of a topographic map of the Pathfinder landing site derived by the U.S. Geological Survey Astrogeology Branch (Flagstaff, Arizona) from photogrammetry (parallax measurements) using images from Pathfinder's own stereo camera. By matching the features seen by MOC with those seen by the Pathfinder (the large arrows are examples of the matching), the location of the lander was refined and is now indicated in the picture on the right side of Figure E. The large, rounded hump previously identified as Pathfinder in Figure D (above), is more likely a large boulder that was seen in Pathfinder's images and named 'Couch' by the Pathfinder science team in 1997.

    Figure F is a summary of the results of this effort to find Mars Pathfinder: it shows that while the landing site of Mars Pathfinder can be identified, the lander itself cannot be seen. It is too small to be resolved in an image where each pixel acquired by the MOC covers a square of 1.5 meters (5 feet) to a side, given the contrast conditions on Mars and the MOC's ability to discriminate contrast. At this scale, Pathfinder is not much larger than two pixels, and the same is true of the lost Polar Lander.

    No evidence has been found in the January 2000 MOC image of the aft portion of Mars Pathfinder's aeroshell or its parachute, either. If the aeroshell is lying on its side, as interpreted from Mars Pathfinder's images, then it would be very difficult to see this from orbit. Because Pathfinder did not image the parachute, it is not known how it may be configured on the surface--it could be wrapped around the aeroshell or a boulder, for example.

    This effort to photograph the Mars Pathfinder lander demonstrates that it is extremely difficult to find a lander on the surface of Mars using the Mars Orbiter Camera aboard the MGS spacecraft. This analysis suggests that it is not very likely that the December 1999 Polar Lander will be found by MOC.

  7. TU-FG-209-03: Exploring the Maximum Count Rate Capabilities of Photon Counting Arrays Based On Polycrystalline Silicon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, A K; Koniczek, M; Antonuk, L E

    Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation tolerant, monolithic large area (e.g., ∼43×43 cm2) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to those of crystalline silicon detectors. Further improvement through detailed circuit simulations and prototyping is expected. This work was partially supported by NIH grant no. R01-EB000558.
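
    The abstract derives the percentage of lost counts from the ratio of output to input count rates; the underlying loss model is not given there. A common back-of-the-envelope stand-in is the non-paralyzable dead-time relation, sketched below with hypothetical dead time and flux values:

      # Illustration only: non-paralyzable dead-time model m = n / (1 + n*tau),
      # relating true (input) and observed (output) count rates per pixel.
      def observed_rate(true_rate_cps: float, dead_time_s: float) -> float:
          return true_rate_cps / (1.0 + true_rate_cps * dead_time_s)

      TAU = 200e-9                                  # hypothetical per-event dead time
      for true_rate in (1e5, 5e5, 1e6, 5e6):        # input photons/s/pixel
          m = observed_rate(true_rate, TAU)
          lost_pct = 100.0 * (1.0 - m / true_rate)
          print(f"input {true_rate:9.0f} cps -> output {m:9.0f} cps ({lost_pct:4.1f}% lost)")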

  8. Movies in Chemistry Education

    ERIC Educational Resources Information Center

    Pekdag, Bulent; Le Marechal, Jean-Francois

    2010-01-01

    This article reviews numerous studies on chemistry movies. Movies, or moving pictures, are important elements of multimedia and signify a privileged or motivating means of presenting knowledge. Studies on chemistry movies show that the first movie productions in this field were devoted to university lectures or documentaries. Shorter movies were…

  9. PCSYS: The optimal design integration system picture drawing system with hidden line algorithm capability for aerospace vehicle configurations

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Vanderburg, J. D.

    1977-01-01

    A vehicle geometric definition based upon quadrilateral surface elements is used to produce realistic pictures of an aerospace vehicle. The PCSYS programs can be used to visually check geometric data input, monitor geometric perturbations, and visualize the complex spatial inter-relationships between the internal and external vehicle components. PCSYS has two major component programs. The first program, IMAGE, draws a complex aerospace vehicle pictorial representation using either an approximate but rapid hidden line algorithm or no hidden line algorithm at all. The second program, HIDDEN, draws a vehicle representation using an accurate but time consuming hidden line algorithm.

  10. The picture superiority effect in categorization: visual or semantic?

    PubMed

    Job, R; Rumiati, R; Lotto, L

    1992-09-01

    Two experiments are reported whose aim was to replicate and generalize the results presented by Snodgrass and McCullough (1986) on the effect of visual similarity in the categorization process. For pictures, Snodgrass and McCullough's results were replicated because Ss took longer to discriminate elements from 2 categories when they were visually similar than when they were visually dissimilar. However, unlike Snodgrass and McCullough, an analogous increase was also observed for word stimuli. The pattern of results obtained here can be explained most parsimoniously with reference to the effect of semantic similarity, or semantic and visual relatedness, rather than to visual similarity alone.

  11. Minimally invasive surgery: only as good as the picture.

    PubMed Central

    Drury, Nigel E.; Pollard, Rebecca; Dyer, Jonathan P.

    2004-01-01

    BACKGROUND: In minimally invasive surgery, there is increased reliance on real-time 2-dimensional images. The fibre-optic light lead is one of the most frequently damaged elements of the 'imaging chain', leading to a poor quality picture. METHODS: Light leads with a honeycomb projection were connected to a light source and the resulting beam directed at a sheet of paper. Darkened sectors with diminished or absent light transmission were recorded. RESULTS: All suitable light leads in routine use were examined. A mean of 22.2% (SD 7.8%) of the projection had diminished or absent light transmission. CONCLUSION: Sub-optimal endoscopic equipment was in routine use. PMID:15005945

  12. Automatic weld torch guidance control system

    NASA Technical Reports Server (NTRS)

    Smaith, H. E.; Wall, W. A.; Burns, M. R., Jr.

    1982-01-01

    A highly reliable, fully digital, closed-circuit television optical type automatic weld seam tracking control system was developed. This automatic tracking equipment is used to reduce weld tooling costs and increase overall automatic welding reliability. The system utilizes a charge injection device digital camera which has 60,512 individual pixels as the light sensing elements. Through conventional scanning means, each pixel in the focal plane is sequentially scanned, the light level signal digitized, and an 8-bit word transmitted to scratch pad memory. From memory, the microprocessor performs an analysis of the digital signal and computes the tracking error. Lastly, the corrective signal is transmitted to a cross seam actuator digital drive motor controller to complete the closed loop, feedback, tracking system. This weld seam tracking control system is capable of a tracking accuracy of ±0.2 mm, or better. As configured, the system is applicable to square butt, V-groove, and lap joint weldments.
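
    The abstract states that the microprocessor analyzes each digitized scan line and computes a tracking error, but does not spell out the algorithm. A minimal sketch of one plausible approach, locating a dark seam with an intensity-weighted centroid (synthetic data, hypothetical pixel pitch, not the flight code):

      # Sketch: estimate cross-seam tracking error from one digitized scan line
      # by finding the centroid of the dark seam; values are synthetic.
      import numpy as np

      def seam_error_mm(scan_line: np.ndarray, mm_per_pixel: float) -> float:
          darkness = scan_line.max() - scan_line.astype(float)   # seam shows up dark
          centroid = np.sum(np.arange(scan_line.size) * darkness) / np.sum(darkness)
          centre = (scan_line.size - 1) / 2.0
          return (centroid - centre) * mm_per_pixel

      line = np.full(256, 200, dtype=np.uint8)   # bright weld area
      line[140:146] = 40                         # dark seam right of centre
      print(f"tracking error: {seam_error_mm(line, mm_per_pixel=0.05):+.2f} mm")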

  13. Compressed single pixel imaging in the spatial frequency domain

    PubMed Central

    Torabzadeh, Mohammad; Park, Il-Yong; Bartels, Randy A.; Durkin, Anthony J.; Tromberg, Bruce J.

    2017-01-01

    We have developed compressed sensing single pixel spatial frequency domain imaging (cs-SFDI) to characterize tissue optical properties over a wide field of view (35 mm × 35 mm) using multiple near-infrared (NIR) wavelengths simultaneously. Our approach takes advantage of the relatively sparse spatial content required for mapping tissue optical properties at length scales comparable to the transport scattering length in tissue (l_tr ∼ 1 mm) and the high bandwidth available for spectral encoding using a single-element detector. cs-SFDI recovered absorption (μa) and reduced scattering (μs′) coefficients of a tissue phantom at three NIR wavelengths (660, 850, and 940 nm) within 7.6% and 4.3% of absolute values determined using camera-based SFDI, respectively. These results suggest that cs-SFDI can be developed as a multi- and hyperspectral imaging modality for quantitative, dynamic imaging of tissue optical and physiological properties. PMID:28300272
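
    For readers unfamiliar with single-pixel imaging, the measurement model behind cs-SFDI is that each reading from the single-element detector is an inner product of the scene with one projected pattern. The toy sketch below shows that model and a plain least-squares recovery with a full pattern set; the actual cs-SFDI reconstruction (fewer patterns plus a sparsity prior) is not reproduced here, and all data are synthetic:

      # Toy single-pixel imaging model: y_i = <pattern_i, scene>; with a full set
      # of patterns the scene is recoverable by least squares. Compressed sensing
      # would use fewer patterns plus a sparsity prior (not shown).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 16 * 16                                   # tiny 16x16 "scene"
      scene = rng.random(n)
      patterns = rng.integers(0, 2, size=(n, n)).astype(float)   # binary patterns

      measurements = patterns @ scene               # one scalar per pattern
      recovered, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
      print("max reconstruction error:", np.max(np.abs(recovered - scene)))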

  14. Demonstration of a linear optical true-time delay device by use of a microelectromechanical mirror array.

    PubMed

    Rader, Amber; Anderson, Betty Lise

    2003-03-10

    We present the design and proof-of-concept demonstration of an optical device capable of producing true-time delay(s) (TTD)(s) for phased array antennas. This TTD device uses a free-space approach consisting of a single microelectromechanical systems (MEMS) mirror array in a multiple reflection spherical mirror configuration based on the White cell. Divergence is avoided by periodic refocusing by the mirrors. By using the MEMS mirror to switch between paths of different lengths, time delays are generated. Six different delays in 1-ns increments were demonstrated by using the Texas Instruments Digital Micromirror Device as the switching element. Losses of 1.6 to 5.2 dB per bounce and crosstalk of -27 dB were also measured, both resulting primarily from diffraction from holes in each pixel and the inter-pixel gaps of the MEMS.

  15. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE is responsible for transferring, storing and processing image raw data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can finish a great amount of computation in a few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and performs some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect our major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  16. Characterization of Custom-Designed Charge-Coupled Devices for Applications to Gas and Aerosol Monitoring Sensorcraft Instrument

    NASA Technical Reports Server (NTRS)

    Refaat, Tamer F.; Abedin, M. Nurul; Farnsworth, Glenn R.; Garcia, Christopher S.; Zawodny, Joseph M.

    2005-01-01

    Custom-designed charge-coupled devices (CCD) for the Gas and Aerosols Monitoring Sensorcraft instrument were developed. These custom-designed CCD devices are linear arrays with a pixel format of 512x1 elements and a pixel size of 10x200 sq µm. These devices were characterized at NASA Langley Research Center and achieve a full well capacity as high as 6,000,000 e-. This met the aircraft flight mission requirements in terms of signal-to-noise performance and maximum dynamic range. Characterization and analysis of the electrical and optical properties of the CCDs were carried out at room temperature. This includes measurements of photon transfer curves, gain coefficient histograms, read noise, and spectral response. Test results obtained on these devices successfully demonstrated the objectives of the aircraft flight mission. In this paper, we describe the characterization results and also discuss their applications to future missions.

  17. Phase Adaptation and Correction by Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Tiziani, Hans J.

    2010-04-01

    Adaptive optical elements and systems for imaging or laser beam propagation have been used for some time, in particular in astronomy, where the image quality is degraded by atmospheric turbulence. In astronomical telescopes a deformable mirror is frequently used to compensate wavefront errors due to deformations of the large mirror, vibrations, and turbulence, and hence to increase the image quality. In the last few years interesting elements such as Spatial Light Modulators (SLMs), including photorefractive crystals, liquid crystals, micro mirrors and membrane mirrors, were introduced. The development of liquid crystals and micro mirrors was driven by data projectors as consumer products. They typically contain a matrix of individually addressable liquid-crystal pixels or flip mirrors, respectively, or more recently piston mirrors for special applications. Pixel sizes are on the order of a few microns, making these devices also appropriate as active diffractive elements in digital holography or miniature masks. Although liquid crystals are mainly optimized for intensity modulation, they can be used for phase modulation. Adaptive optics is a technology for beam shaping and wavefront adaptation. The application of spatial light modulators for wavefront adaptation and correction, defect analysis, and sensing will be discussed. Dynamic digital holograms are generated with liquid crystal devices (LCD) and used for wavefront correction as well as for beam shaping and phase manipulation, for instance. Furthermore, adaptive optics is very useful to extend the measuring range of wavefront sensors and for wavefront adaptation in order to measure and compare the shape of high precision aspherical surfaces.

  18. [Music in the picture -- musical scores and other music-related pictorial elements in the visual artworks of schizophrenic patients].

    PubMed

    Simon, Mária

    2015-01-01

    Since the beginning of the 20th century, music scores and other music-related pictorial elements have repeatedly appeared in psychotic patients' visual artworks. Interestingly, little attention was paid to these enigmatic forms of psychopathological art expression until the 1970s. This essay investigates the underlying psychopathology and the psychodynamic basis of musical elements applied in psychotic patients' visual art expression within a phenomenological-intersubjective framework, integrating the art-historical context of the 20th century. As an illustration, artworks from the psychopathological art collection of the Department of Psychiatry and Psychotherapy, Medical Faculty, University of Pecs, Hungary are presented.

  19. Group-III elements under high pressure.

    NASA Astrophysics Data System (ADS)

    Simak, S. I.; Haussermann, U.; Ahuja, R.; Johansson, B.

    2000-03-01

    At ambient conditions the Group-III elements Ga and In attain unusual open ground-state crystal structures. Recent experiments have discovered that Ga under high pressure transforms into the face-centered (fcc) cubic close-packed structure, while such a transition for In has so far not been observed. We offer a simple explanation for such different behavior based on results from first principles calculations. We predict a so far undiscovered transition of In to the fcc structure at extreme pressures and show that the structure determining mechanism originates from the degree of s-p mixing of the valence orbitals. A unified bonding picture for the Group-III elements is discussed.

  20. Imaging spectroscopy using embedded diffractive optical arrays

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford

    2017-09-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera based on diffractive optic arrays. This approach to hyperspectral imaging has been demonstrated in all three infrared bands: SWIR, MWIR and LWIR. The hyperspectral optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of this infrared hyperspectral sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics that are made up of an area array of diffractive optical elements, where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system small enough to serve as a payload on a small satellite, mini-UAV, commercial quadcopter, or man-portable system. It also presents an application in which this spectral imaging technology is used to quantify the mass and volume flow rates of hydrocarbon gases. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. The detector array is divided into sub-images covered by each lenslet. We have developed various systems using a different number of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the number of different spectral images collected simultaneously in each frame of the camera. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel element focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame. This system spans the SWIR and MWIR bands with a single optical array and focal plane array.
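
    The splitting of the common focal plane into per-lenslet spectral sub-images described above is straightforward to express in code, assuming the lenslets tile the detector in a regular grid as the pixel counts quoted in the abstract imply:

      # Split one raw frame into per-lenslet spectral sub-images: a 4 x 4 lenslet
      # array over a 1024 x 1024 detector yields sixteen 256 x 256 sub-images.
      import numpy as np

      frame = np.zeros((1024, 1024), dtype=np.uint16)    # one raw camera frame
      tiles_per_side = 4
      tile = frame.shape[0] // tiles_per_side            # 256 pixels per sub-image

      sub_images = (frame
                    .reshape(tiles_per_side, tile, tiles_per_side, tile)
                    .transpose(0, 2, 1, 3))
      print(sub_images.shape)                            # (4, 4, 256, 256)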

  1. Cloud Base Height Measurements at Manila Observatory: Initial Results from Constructed Paired Sky Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Tan, F.; Antioquia, C. T.

    2014-12-01

    Fabricated all-sky imagers are efficient and cost-effective instruments for cloud detection and classification. Continuous operation of this instrument can result in the determination of cloud occurrence and cloud base heights for the paired system. In this study, a fabricated paired sky imaging system - consisting of two commercial digital cameras (Canon Powershot A2300) enclosed in weatherproof containers - was developed at Manila Observatory for the purpose of determining cloud base heights over the Manila Observatory area. One of the cameras is placed on the rooftop of Manila Observatory and the other is placed on the rooftop of the university dormitory, 489 m from the first camera. The cameras are programmed to simultaneously gather pictures every 5 min. Continuous operation of these cameras has been implemented since the end of May 2014, although data collection started at the end of October 2013. The data were processed following the algorithm proposed by Kassianov et al. (2005). The processing involves the calculation of a merit function that determines the area of overlap of the two pictures. When two pictures are overlapped, the minimum of the merit function corresponds to the pixel column positions where the pictures have the best overlap. In this study, pictures of overcast sky proved difficult to process for cloud base height and were excluded from processing. The figure below shows the initial results of the hourly average of cloud base heights from data collected from November 2013 to July 2014. Measured cloud base heights ranged from 250 m to 1.5 km. These are the heights of the cumulus and nimbus clouds that are dominant in this part of the world. Cloud base heights are low in the early hours of the day, indicating weak convection during these times. However, the increase in convection in the atmosphere can be deduced from higher cloud base heights in the afternoon. The decrease of cloud base heights after 15:00 follows the trend of decreasing solar energy in the atmosphere after this time. The results show the potential of these instruments to determine cloud base heights over prolonged time intervals. The continuous operation of these instruments is implemented to capture the seasonal variation of cloud base heights in this part of the world and to add to the much-needed dataset for future climate studies at Manila Observatory.
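
    As a rough sketch of the processing idea (not the authors' implementation, which follows Kassianov et al., 2005): one picture is slid over the other, the column shift minimizing a sum-of-squared-differences merit function gives the parallax, and with the 489 m baseline that parallax maps to a cloud base height. The angular pixel scale below is hypothetical and the cameras are assumed to point at zenith:

      # Simplified paired-camera sketch: find the column shift with the minimum
      # SSD merit function, then convert the parallax shift to a height.
      import numpy as np

      BASELINE_M = 489.0          # camera separation quoted in the abstract
      PIXEL_SCALE_RAD = 2.0e-3    # hypothetical radians per pixel column

      def best_shift(img_a, img_b, max_shift):
          merits = [np.mean((img_a[:, :-s] - img_b[:, s:]) ** 2)
                    for s in range(1, max_shift)]
          return 1 + int(np.argmin(merits))

      def cloud_base_height_m(shift_pixels):
          # small-angle parallax: height ~ baseline / (shift * pixel scale)
          return BASELINE_M / (shift_pixels * PIXEL_SCALE_RAD)

      # Synthetic test: the same random "cloud" pattern seen shifted by 40 columns.
      sky = np.random.default_rng(1).random((200, 400))
      s = best_shift(sky[:, 40:], sky[:, :-40], max_shift=80)
      print(s, f"{cloud_base_height_m(s):.0f} m")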

  2. South Polar Polygons

    NASA Technical Reports Server (NTRS)

    2004-01-01

    4 March 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a spectacular suite of large and small polygons in the south polar region. On Earth, polygons such as these would be indicators of the presence of ground ice. Whether this is true for Mars remains to be determined, but it is interesting to note that these polygons do occur in a region identified by the Mars Odyssey Gamma Ray Spectrometer (GRS) team as a place with possible ground ice. The polygons are in an old impact crater located near 62.9°S, 281.4°W. This 1.5 meter (5 ft.) per pixel view covers an area 3 km (1.9 mi) wide and is illuminated by sunlight from the upper left. To see the smaller set of polygons, the reader must view the full-resolution image (click on picture, above).

  3. New far infrared images of bright, nearby, star-forming regions

    NASA Technical Reports Server (NTRS)

    Harper, D. AL, Jr.; Cole, David M.; Dowell, C. Darren; Lees, Joanna F.; Lowenstein, Robert F.

    1995-01-01

    Broadband imaging in the far infrared is a vital tool for understanding how young stars form, evolve, and interact with their environment. As the sensitivity and size of detector arrays have increased, a richer and more detailed picture has emerged of the nearest and brightest regions of active star formation. We present data on M 17, M 42, and S 106 taken recently on the Kuiper Airborne Observatory with the Yerkes Observatory 60-channel far infrared camera, which has pixel sizes of 17 arcsec at 60 microns, 27 arcsec at 100 microns, and 45 arcsec at 160 and 200 microns. In addition to providing a clearer view of the complex central cores of the regions, the images reveal new details of the structure and heating of ionization fronts and photodissociation zones where radiation from luminous stars interacts with adjacent molecular clouds.

  4. Photogrammetric analysis of horizon panoramas: The Pathfinder landing site in Viking orbiter images

    USGS Publications Warehouse

    Oberst, J.; Jaumann, R.; Zeitler, W.; Hauber, E.; Kuschel, M.; Parker, T.; Golombek, M.; Malin, M.; Soderblom, L.

    1999-01-01

    Tiepoint measurements, block adjustment techniques, and sunrise/sunset pictures were used to obtain precise pointing data with respect to north for a set of 33 IMP horizon images. Azimuth angles for five prominent topographic features seen at the horizon were measured and correlated with locations of these features in Viking orbiter images. Based on this analysis, the Pathfinder line/sample coordinates in two raw Viking images were determined with approximate errors of 1 pixel, or 40 m. Identification of the Pathfinder location in orbit imagery yields geological context for surface studies of the landing site. Furthermore, the precise determination of coordinates in images together with the known planet-fixed coordinates of the lander make the Pathfinder landing site the most important anchor point in current control point networks of Mars. Copyright 1999 by the American Geophysical Union.
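
    The core geometric step, intersecting the sight lines defined by azimuths measured from the lander toward landmarks whose map positions are known, can be written compactly. The sketch below uses made-up coordinates and azimuths in a flat local frame (x = east, y = north, azimuth clockwise from north); it is an illustration of the idea, not the photogrammetric block adjustment used in the paper:

      # Locate a lander from azimuths to two known landmarks by intersecting the
      # two sight lines. Landmark positions and azimuths below are fabricated so
      # that the true answer is approximately (0, 0).
      import numpy as np

      def locate(landmark_a, az_a_deg, landmark_b, az_b_deg):
          d = lambda az: np.array([np.sin(np.radians(az)), np.cos(np.radians(az))])
          da, db = d(az_a_deg), d(az_b_deg)
          La, Lb = np.asarray(landmark_a, float), np.asarray(landmark_b, float)
          # Lander P satisfies La = P + ta*da and Lb = P + tb*db.
          ta, tb = np.linalg.solve(np.column_stack([da, -db]), La - Lb)
          return La - ta * da

      print(locate(landmark_a=(2500.0, 4330.1), az_a_deg=30.0,
                   landmark_b=(-1690.5, 3625.2), az_b_deg=335.0))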

  5. Experimental Evaluation of a SiPM-Based Scintillation Detector for MR-Compatible SPECT Systems

    NASA Astrophysics Data System (ADS)

    Busca, Paolo; Occhipinti, Michele; Trigilio, Paolo; Cozzi, Giulia; Fiorini, Carlo; Piemonte, Claudio; Ferri, Alessandro; Gola, Alberto; Nagy, Kálmán; Bükki, Tamás; Rieger, Jan

    2015-10-01

    In the present work we briefly describe the architecture of a photo-detection module designed in the framework of the INSERT (INtegrated SPECT/MRI for Enhanced Stratification in Radio-chemoTherapy) project, supported by the European Community. We focus on two main elements of the module: the SiPM photo-detector unit and the multi-channel ASIC. These two components have been investigated with dedicated and independent setups to assess the preliminary performance of the INSERT architecture. In detail, we designed a 25.30 mm × 25.85 mm tile comprising 9 pixels, each one with an 8 mm × 8 mm active area. We developed an Anger camera to characterize the tile coupled to a CsI:Tl scintillator (6 mm thick). We measured an average spatial resolution (FWHM) of 2 mm in the central region of the field of view and a 15.3% energy resolution using a 57Co source (122 keV), when the tile is cooled down to 0 °C to reduce the impact of the dark count rate. Furthermore, we developed ANGUS, a 36-channel 0.35 μm CMOS technology ASIC designed to cope with input capacitances up to 5 nF, typical of large-area SiPM pixels. The spectroscopic capability of single readout channels was evaluated by coupling an 8 mm × 8 mm pixel with a cylindrical CsI:Tl scintillator (8 mm diameter, 10 mm thickness). Energy resolution at room temperature provided values between 13% and 13.5% (FWHM) at the 122 keV line for the nine pixels.

  6. Panoramic thermal imaging: challenges and tradeoffs

    NASA Astrophysics Data System (ADS)

    Aburmad, Shimon

    2014-06-01

    Over the past decade, we have witnessed a growing demand for electro-optical systems that can provide continuous 360° coverage. Applications such as perimeter security, autonomous vehicles, and military warning systems are a few of the most common applications for panoramic imaging. There are several different technological approaches for achieving panoramic imaging. Solutions based on rotating elements do not provide continuous coverage as there is a time lag between updates. Continuous panoramic solutions either use "stitched" images from multiple adjacent sensors, or sophisticated optical designs which warp a panoramic view onto a single sensor. When dealing with panoramic imaging in the visible spectrum, high volume production and advancement of semiconductor technology has enabled the use of CMOS/CCD image sensors with a huge number of pixels, small pixel dimensions, and low cost devices. However, in the infrared spectrum, the growth of detector pixel counts, pixel size reduction, and cost reduction is taking place at a slower rate due to the complexity of the technology and limitations caused by the laws of physics. In this work, we will explore the challenges involved in achieving 360° panoramic thermal imaging, and will analyze aspects such as spatial resolution, FOV, data complexity, FPA utilization, system complexity, coverage and cost of the different solutions. We will provide illustrations, calculations, and tradeoffs between three solutions evaluated by Opgal: A unique 360° lens design using an LWIR XGA detector, stitching of three adjacent LWIR sensors equipped with a low distortion 120° lens, and a fisheye lens with a HFOV of 180° and an XGA sensor.

  7. Essential English for Micronesian Adults.

    ERIC Educational Resources Information Center

    Conrad, Jo Ann; Reinecke, Hank

    This student workbook is designed to help Micronesian adults learn everyday English. Its ten chapters move from simple one-word picture labeling to more abstract ideas in a spiraled fashion, reiterating the essential elements of the English language in different, more complicated ways. Subjects covered include names for everyday objects and…

  8. The Superintendent's Fieldbook. A Guide for Leaders of Learning

    ERIC Educational Resources Information Center

    Cambron-McCabe, Nelda; Cunningham, Luvern L.; Harvey, James J.; Koff, Robert H.

    2004-01-01

    This book provides goals and challenges for district leaders who are constantly changing. Leadership and governance are only parts of the puzzle when other elements such as the NCLB legislation, budgets, standards and assessment, changing demographics, and public engagement are brought into the picture. Today's superintendents offer an effective…

  9. Combined Spelling--It Works.

    ERIC Educational Resources Information Center

    Zylstra, Barbara Jean

    1989-01-01

    A spelling program was devised for learning-disabled students, using elements from "Signs for Sounds," the Cloze method, and "Auditory Discrimination In-Depth." Day-by-day use of the program involves drawing word pictures, spelling the words with tiles and blocks, writing on the board, using the words in written sentences, spelling bees, etc. (JDD)

  10. The ISI Classroom Observation System: Examining the Literacy Instruction Provided to Individual Students

    ERIC Educational Resources Information Center

    Connor, Carol McDonald; Morrison, Frederick J.; Fishman, Barry J.; Ponitz, Claire Cameron; Glasney, Stephanie; Underwood, Phyllis S.; Piasta, Shayne B.; Crowe, Elizabeth Coyne; Schatschneider, Christopher

    2009-01-01

    The Individualizing Student Instruction (ISI) classroom observation and coding system is designed to provide a detailed picture of the classroom environment at the level of the individual student. Using a multidimensional conceptualization of the classroom environment, foundational elements (teacher warmth and responsiveness to students, classroom…

  11. The Recruitment and Selection of Principals Who Increase Student Learning

    ERIC Educational Resources Information Center

    Ash, Ruth C.; Hodge, Patricia H.; Connell, Peggy H.

    2013-01-01

    The overall picture that emerges from a review of literature pinpoints two elements that inhibit recruiting and hiring effective principals. Primary hindrances include the growing shortage of qualified applicants in conjunction with a growing student population and the reality of the challenging demands, responsibilities, and complexities of the…

  12. The Movies As Medium.

    ERIC Educational Resources Information Center

    Jacobs, Lewis

    This collection of essays by filmmakers, theorists, and scholars studies the fundamental resources and processes of film expression. Essays on the image deal with elements that make an image meaningful, as well as with the subjectivity of the motion picture camera. Other key resources of film that are discussed here include the movement of the…

  13. 21 CFR 1020.33 - Computed tomography (CT) equipment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...-ray system. The phantom shall be a right circular cylinder of polymethyl methacrylate of density 1.19±0... the cross-sectional volume over which x-ray transmission data are collected. (12) Picture element..., respectively. (14) Scan increment means the amount of relative displacement of the patient with respect to the...

  14. 21 CFR 1020.33 - Computed tomography (CT) equipment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...-ray system. The phantom shall be a right circular cylinder of polymethyl methacrylate of density 1.19±0... the cross-sectional volume over which x-ray transmission data are collected. (12) Picture element..., respectively. (14) Scan increment means the amount of relative displacement of the patient with respect to the...

  15. 21 CFR 1020.33 - Computed tomography (CT) equipment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...-ray system. The phantom shall be a right circular cylinder of polymethyl methacrylate of density 1.19±0... the cross-sectional volume over which x-ray transmission data are collected. (12) Picture element..., respectively. (14) Scan increment means the amount of relative displacement of the patient with respect to the...

  16. Coalition Network Defence Common Operational Picture

    DTIC Science & Technology

    2010-11-01

    [26] ISO 8601:2004, Data elements and interchange formats - Information interchange - Representation of dates and times, http://en.wikipedia.org/wiki/ISO_8601. [25] ISO/IEC 27005:2008, Information technology -- Security techniques -- Information security risk management.

  17. Who, When, and Where? Age-Related Differences on a New Memory Test

    ERIC Educational Resources Information Center

    Sumida, Catherine A.; Holden, Heather M.; Van Etten, Emily J.; Wagner, Gabrielle M.; Hileman, Jacob D.; Gilbert, Paul E.

    2016-01-01

    Our study examined age-related differences on a new memory test assessing memory for "who," "when," and "where," and associations among these elements. Participants were required to remember a sequence of pictures of different faces paired with different places. Older adults remembered significantly fewer correct…

  18. A Brief History of Leonard Peltier vs. US: Is there Recourse for Justice?

    ERIC Educational Resources Information Center

    Payne, Diane

    1979-01-01

    Asserting the fact that Leonard Peltier is a contemporary element in a stream of Native American genocide, this article outlines the events and presents a picture of the abuses which precipitated a continuous 24 hour vigil at the U.S. Supreme Court. (Author/RTS)

  19. SU-E-T-33: A Feasibility-Seeking Algorithm Applied to Planning of Intensity Modulated Proton Therapy: A Proof of Principle Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penfold, S; Casiraghi, M; Dou, T

    2015-06-15

    Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to the field of intensity modulated proton therapy (IMPT) inverse planning. Feasibility of constraints only, as opposed to optimization of a merit function, is less demanding algorithmically and holds a promise of parallel computations capability with non-cyclic orthogonal projections algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix A whose elements a_ij correspond to the dose delivered to pixel i by a unit intensity pencil beam j. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution that would meet the following dose requirements: D_OAR < 54 Gy and 57 Gy < D_PTV < 64.2 Gy. The cyclic algorithm was based on the concept of orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as 'ART for inequalities'. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of OAR pixels violating their dose constraints, respectively. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for a meningioma abutting the brainstem, for example. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to be capable of incorporating dose-volume constraints into the feasibility-seeking algorithm.
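
    The Agmon-Motzkin-Schoenberg iteration named above is easy to state: cycle through the dose constraints and, whenever one is violated, move the intensity vector by an orthogonal projection onto that half-space. The sketch below is a generic toy version with a made-up 3-pixel, 4-pencil-beam dose matrix and nonnegative intensities; it is not the authors' implementation:

      # Cyclic orthogonal projections ("ART for inequalities") toy sketch for
      # constraints lower <= A @ x <= upper with x >= 0. Matrix and bounds made up.
      import numpy as np

      def project_halfspace(x, a, b):
          """Project x onto {x : a.x <= b}; no-op if already satisfied."""
          r = a @ x - b
          return x if r <= 0 else x - (r / (a @ a)) * a

      def cyclic_feasibility(A, lower, upper, n_sweeps=200):
          x = np.zeros(A.shape[1])
          for _ in range(n_sweeps):
              for a, lo, up in zip(A, lower, upper):
                  x = project_halfspace(x, a, up)      #  a.x <= upper
                  x = project_halfspace(x, -a, -lo)    #  a.x >= lower
              x = np.maximum(x, 0.0)                   # nonnegative intensities
          return x

      A = np.array([[1.0, 0.2, 0.0, 0.1],     # PTV pixel
                    [0.3, 1.0, 0.3, 0.0],     # OAR pixel
                    [0.0, 0.2, 1.0, 0.4]])    # PTV pixel
      lower = np.array([57.0, 0.0, 57.0])
      upper = np.array([64.2, 54.0, 64.2])
      x = cyclic_feasibility(A, lower, upper)
      print(np.round(x, 2), np.round(A @ x, 2))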

  20. The 150 ns detector project: Prototype preamplifier results

    NASA Astrophysics Data System (ADS)

    Warburton, W. K.; Russell, S. R.; Kleinfelder, Stuart A.

    1994-08-01

    The long-term goal of the 150 ns detector project is to develop a pixel area detector capable of 6 MHz frame rates (150 ns/frame). Our milestones toward this goal are: a single pixel, 1×256 1D and 8×8 2D detectors, 256×256 2D detectors and, finally, 1024 × 1024 2D detectors. The design strategy is to supply a complete electronics chain (resetting preamp, selectable gain amplifier, analog-to-digital converter (ADC), and memory) for each pixel. In the final detectors these will all be custom integrated circuits. The front-end preamplifiers are integrated first, since their design and performance are the most unusual and also critical to the project's success. Similarly, our early work is concentrated on devising and perfecting detector structures. In this paper we demonstrate the performance of prototypes of our integrated preamplifiers. While the final design will have 64 preamps to a chip, including a switchable gain stage, the prototypes were integrated 8 channels to a "Tiny Chip" and tested in 4 configurations (feedback capacitor Cf equal to 2.5 or 4.0 pF, output taken directly or through a source follower). These devices have been tested thoroughly for reset settling times, gain, linearity, and electronic noise. They generally work as designed, being fast enough to easily integrate detector charge, settle, and reset in 150 ns. Gain and linearity appear to be acceptable. Current values of electronic noise, in double-sampling mode, are about twice the design goal of 2/3 of a single photon at 6 keV. We expect this figure to improve with the addition of the onboard amplifier stage and improved packaging. Our next test chip will include these improvements and allow testing with our first detector samples, which will be 1×256 (50 μm wide pixels) and 8×8 (1 mm2 pixels) element detectors on 1 mm thick silicon.

  1. Crosstalk-free operation of multielement superconducting nanowire single-photon detector array integrated with single-flux-quantum circuit in a 0.1 W Gifford-McMahon cryocooler.

    PubMed

    Yamashita, Taro; Miki, Shigehito; Terai, Hirotaka; Makise, Kazumasa; Wang, Zhen

    2012-07-15

    We demonstrate the successful operation of a multielement superconducting nanowire single-photon detector (SSPD) array integrated with a single-flux-quantum (SFQ) readout circuit in a compact 0.1 W Gifford-McMahon cryocooler. A time-resolved readout technique, where output signals from each element enter the SFQ readout circuit with finite time intervals, revealed crosstalk-free operation of the four-element SSPD array connected with the SFQ readout circuit. The timing jitter and the system detection efficiency were measured to be 50 ps and 11.4%, respectively, which were comparable to the performance of practical single-pixel SSPD systems.

  2. A Survey of Plasmas and Their Applications

    NASA Technical Reports Server (NTRS)

    Eastman, Timothy E.; Grabbe, C. (Editor)

    2006-01-01

    Plasmas are everywhere and relevant to everyone. We bathe in a sea of photons, quanta of electromagnetic radiation, whose sources (natural and artificial) are dominantly plasma-based (stars, fluorescent lights, arc lamps...). Plasma surface modification and materials processing contribute increasingly to a wide array of modern artifacts; e.g., tiny plasma discharge elements constitute the pixel arrays of plasma televisions, and plasma processing provides roughly one-third of the steps to produce semiconductors, essential elements of our networking and computing infrastructure. Finally, plasmas are central to many cutting edge technologies with high potential (compact high-energy particle accelerators; plasma-enhanced waste processors; high tolerance surface preparation and multifuel preprocessors for transportation systems; fusion for energy production).

  3. Determination of the conversion gain and the accuracy of its measurement for detector elements and arrays

    NASA Astrophysics Data System (ADS)

    Beecken, B. P.; Fossum, E. R.

    1996-07-01

    Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine what dependence the accuracy of measured noise has on the size of the sample. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge coupled device, active pixel sensor.
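
    The usual laboratory route to a conversion gain is the mean-variance (photon transfer) method: for a shot-noise-limited pixel the signal variance in digital numbers equals the mean divided by the gain, so the gain falls out of a straight-line fit. The sketch below generates synthetic Poisson data to show the estimate; it does not reproduce the paper's accuracy-versus-sample-size analysis:

      # Mean-variance (photon transfer) estimate of conversion gain on synthetic
      # shot-noise-limited data: var(DN) = mean(DN) / gain.
      import numpy as np

      rng = np.random.default_rng(2)
      TRUE_GAIN_E_PER_DN = 4.0                      # hypothetical conversion gain
      mean_signals_e = [2e3, 5e3, 1e4, 2e4, 4e4]    # electrons per pixel

      means, variances = [], []
      for mu_e in mean_signals_e:
          samples_dn = rng.poisson(mu_e, size=10_000) / TRUE_GAIN_E_PER_DN
          means.append(samples_dn.mean())
          variances.append(samples_dn.var(ddof=1))

      slope, _ = np.polyfit(means, variances, deg=1)   # slope = 1 / gain
      print(f"estimated conversion gain: {1.0 / slope:.2f} e-/DN")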

  4. Distributed transition-edge sensors for linearized position response in a phonon-mediated X-ray imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Cabrera, Blas; Brink, Paul L.; Leman, Steven W.; Castle, Joseph P.; Tomada, Astrid; Young, Betty A.; Martínez-Galarce, Dennis S.; Stern, Robert A.; Deiker, Steve; Irwin, Kent D.

    2004-03-01

    For future solar X-ray satellite missions, we are developing a phonon-mediated macro-pixel composed of a Ge crystal absorber with four superconducting transition-edge sensors (TES) distributed on the backside. The X-rays are absorbed on the opposite side and the energy is converted into phonons, which are absorbed into the four TES sensors. By connecting together parallel elements into four channels, fractional total energy absorbed between two of the sensors provides x-position information and the other two provide y-position information. We determine the optimal distribution for the TES sub-elements to obtain linear position information while minimizing the degradation of energy resolution.
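
    The abstract says that the fraction of the total energy shared between opposing sensors encodes the x and y coordinates, without giving the estimator. One common choice is a normalized difference of the opposing channel energies, sketched below with made-up signals:

      # Illustrative position estimate from energy sharing among four channels
      # (left/right -> x, bottom/top -> y); the exact mapping is an assumption.
      def macro_pixel_position(e_left, e_right, e_bottom, e_top):
          """Return (x, y) in [-1, 1] plus the summed energy."""
          x = (e_right - e_left) / (e_left + e_right)
          y = (e_top - e_bottom) / (e_bottom + e_top)
          return x, y, e_left + e_right + e_bottom + e_top

      # Hypothetical channel energies (keV) for one absorbed X-ray.
      print(macro_pixel_position(e_left=1.2, e_right=2.8, e_bottom=1.9, e_top=2.1))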

  5. From Pixel on a Screen to Real Person in Your Students' Lives: Establishing Social Presence Using Digital Storytelling

    ERIC Educational Resources Information Center

    Lowenthal, Patrick R.; Dunlap, Joanna C.

    2010-01-01

    The Community of Inquiry (CoI) framework is a comprehensive guide to the research "and" practice of online learning. One of the most challenging aspects of establishing a CoI in online courses is finding the best way to attend to each element of the CoI framework in a primarily text-based environment. In our online courses, we have examined the…

  6. Application of Statistical Learning Theory to Plankton Image Analysis

    DTIC Science & Technology

    2006-06-01

    linear distance interval from 1 to 40 pixels and two directions formula (horizontal & vertical, and diagonals), EF2 is EF with 7 exponential distance...and four directions formula (horizontal, vertical and two diagonals). It is clear that exponential distance interval works better than the linear ...PSI - PS by Vincent, linear and pseudo opening and closing spectra, each has 40 elements, total feature length of 160. PS2 - PS modified from Meijster

  7. Enhancing Ground Based Telescope Performance with Image Processing

    DTIC Science & Technology

    2013-11-13

    driven by the need to detect small faint objects with relatively short integration times to avoid streaking of the satellite image across multiple...the time right before the eclipse. The orbital elements of the satellite were entered into the SST's tracking system, so that the SST could be...short integration times, thereby avoiding streaking of the satellite image across multiple CCD pixels so that the objects are suitably modeled as point

  8. Dosimetric characterization with 62 MeV protons of a silicon-segmented detector for 2D dose verifications in radiotherapy

    NASA Astrophysics Data System (ADS)

    Talamonti, C.; Bucciolini, M.; Marrazzo, L.; Menichelli, D.; Bruzzi, M.; Cirrone, G. A. P.; Cuttone, G.; LoJacono, P.

    2008-10-01

    Due to the features of modern radiotherapy techniques, namely intensity modulated radiation therapy and proton therapy, where high spatial dose gradients are often present, detectors to be employed for 2D dose verifications have to satisfy very stringent requirements. In particular, they have to show high spatial resolution. In the framework of the European Integrated Project—Methods and Advanced Equipment for Simulation and Treatment in Radio-Oncology (MAESTRO, no. LSHC-CT-2004-503564), a dosimetric detector adequate for 2D pre-treatment dose verifications was developed. It is a modular detector, based on a monolithic silicon-segmented sensor, with an n-type implantation on an epitaxial p-type layer. Each pixel element is 2×2 mm2 and the distance center-to-center is 3 mm. The sensor is composed of 21×21 pixels. In this paper, we report the dosimetric characterization of the system with a proton beam. The sensor was irradiated with the 62 MeV protons used for clinical treatments at INFN-Laboratori Nazionali del Sud (LNS) Catania. The studied parameters were the repeatability of a single pixel, response linearity versus absorbed dose and dose rate, and dependence on field size. The obtained results are promising since the performances are within the project specifications.

  9. Characterisation of GaAs:Cr pixel sensors coupled to Timepix chips in view of synchrotron applications

    NASA Astrophysics Data System (ADS)

    Ponchut, C.; Cotte, M.; Lozinskaya, A.; Zarubin, A.; Tolbanov, O.; Tyazhev, A.

    2017-12-01

    In order to meet the needs of some ESRF beamlines for highly efficient 2D X-ray detectors in the 20-50 keV range, GaAs:Cr pixel sensors coupled to TIMEPIX readout chips were implemented into a MAXIPIX detector. Use of GaAs:Cr sensor material is intended to overcome the limitations of Si (low absorption) and of CdTe (fluorescence) in this energy range. The GaAs:Cr sensor assemblies were characterised with both laboratory X-ray sources and monochromatic synchrotron X-ray beams. The sensor response as a function of bias voltage was compared to a theoretical model, leading to an estimation of the μτ product of electrons in the GaAs:Cr sensor material of 1.6×10⁻⁴ cm2/V. The spatial homogeneity of X-ray images obtained with the sensors was measured in different irradiation conditions, showing a particular sensitivity to small variations in the incident beam spectrum. 2D-resolved elemental mapping of the sensor surface was carried out to investigate a possible relation between the noise pattern observed in X-ray images and local fluctuations in chemical composition. A scanning of the sensor response at subpixel scale revealed that these irregularities can be correlated with a distortion of the effective pixel shapes.
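
    The abstract compares the bias-voltage response to "a theoretical model" without naming it; the single-carrier Hecht relation is a common choice for this kind of μτ extraction, so it is used in the sketch below as an assumption. The sensor thickness is made up, and only the μτ value comes from the abstract:

      # Hecht-relation sketch (assumed model): charge collection efficiency vs.
      # bias for a planar sensor with a uniform field E = V / L.
      import numpy as np

      MU_TAU_CM2_PER_V = 1.6e-4    # electron mu-tau product quoted in the abstract
      THICKNESS_CM = 0.05          # hypothetical 500 um sensor thickness

      def hecht_cce(bias_v: float) -> float:
          drift_len = MU_TAU_CM2_PER_V * bias_v / THICKNESS_CM   # lambda = mu*tau*E
          ratio = drift_len / THICKNESS_CM
          return ratio * (1.0 - np.exp(-1.0 / ratio))

      for v in (50, 100, 200, 400):
          print(f"{v:4d} V -> CCE = {hecht_cce(v):.3f}")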

  10. High Resolution Airborne Digital Imagery for Precision Agriculture

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley R.

    1998-01-01

    The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely-piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000 acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management. Few studies have examined the usefulness of high resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 × 10⁶ pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, the pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).
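
    The 10 cm pixels quoted above follow from the usual ground-sample-distance relation GSD = altitude × pixel pitch / focal length. The camera parameters in this sketch are hypothetical, chosen only to show how a roughly 10 cm footprint arises at 1,000 feet; they are not the DCS450's actual specifications:

      # Ground sample distance sketch: GSD = altitude * pixel_pitch / focal_length.
      ALTITUDE_M = 1000 * 0.3048      # 1,000 ft in metres
      PIXEL_PITCH_M = 9.0e-6          # hypothetical detector pixel pitch
      FOCAL_LENGTH_M = 0.028          # hypothetical lens focal length

      gsd_m = ALTITUDE_M * PIXEL_PITCH_M / FOCAL_LENGTH_M
      print(f"ground sample distance: {gsd_m * 100:.1f} cm per pixel")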

  11. Real-time comprehension of wh- movement in aphasia: Evidence from eyetracking while listening

    PubMed Central

    Dickey, Michael Walsh; Choy, JungWon Janet; Thompson, Cynthia K.

    2007-01-01

    Sentences with non-canonical wh- movement are often difficult for individuals with agrammatic Broca's aphasia to understand (Caramazza & Zurif, 1976, inter alia). However, the explanation of this difficulty remains controversial, and little is known about how individuals with aphasia try to understand such sentences in real time. This study uses an eyetracking while listening paradigm (Tanenhaus et al., 1995) to examine agrammatic aphasic individuals' on-line comprehension of movement sentences. Participants' eye-movements were monitored while they listened to brief stories and looked at visual displays depicting elements mentioned in the story. These stories were followed by comprehension probes involving wh- movement. In line with previous results for young normal listeners (Sussman & Sedivy, 2003), the study finds that both older unimpaired control participants (n=8) and aphasic individuals (n=12) showed visual evidence of successful automatic comprehension of wh- questions (like “Who did the boy kiss that day at school?”). Specifically, both groups fixated on a picture corresponding to the moved element (“who,” the person kissed in the story) at the position of the verb. Interestingly, aphasic participants showed qualitatively different fixation patterns for trials eliciting correct and incorrect responses. Aphasic individuals looked first to the moved-element picture and then to a competitor following the verb in the incorrect trials, indicating initially correct automatic processing. However, they only showed looks to the moved-element picture for the correct trials, parallel to control participants. Furthermore, aphasic individuals' fixations during movement sentences were just as fast as control participants' fixations. These results are unexpected under slowed-processing accounts of aphasic comprehension deficits, in which the source of failed comprehension should be delayed application of the same processing routines used in successful comprehension. This pattern is also unexpected if aphasic individuals are using qualitatively different strategies to comprehend such sentences, as under impaired-representation accounts of agrammatism (Grodzinsky, 1990, 2000; Mauner, Fromkin & Cornell, 1993). Instead, it suggests that agrammatic aphasic individuals may process wh- questions similarly to unimpaired individuals, but that this process often fails. However, even in cases of failed comprehension, aphasic individuals showed visual evidence of successful automatic processing. PMID:16844211

  12. Real-time comprehension of wh- movement in aphasia: evidence from eyetracking while listening.

    PubMed

    Dickey, Michael Walsh; Choy, JungWon Janet; Thompson, Cynthia K

    2007-01-01

    Sentences with non-canonical wh- movement are often difficult for individuals with agrammatic Broca's aphasia to understand (Caramazza & Zurif, 1976, inter alia). However, the explanation of this difficulty remains controversial, and little is known about how individuals with aphasia try to understand such sentences in real time. This study uses an eyetracking while listening paradigm to examine agrammatic aphasic individuals' on-line comprehension of movement sentences. Participants' eye-movements were monitored while they listened to brief stories and looked at visual displays depicting elements mentioned in the stories. The stories were followed by comprehension probes involving wh- movement. In line with previous results for young normal listeners [Sussman, R. S., & Sedivy, J. C. (2003). The time-course of processing syntactic dependencies: evidence from eye movements. Language and Cognitive Processes, 18, 143-161], the study finds that both older unimpaired control participants (n=8) and aphasic individuals (n=12) showed visual evidence of successful automatic comprehension of wh- questions (like "Who did the boy kiss that day at school?"). Specifically, both groups fixated on a picture corresponding to the moved element ("who," the person kissed in the story) at the position of the verb. Interestingly, aphasic participants showed qualitatively different fixation patterns for trials eliciting correct and incorrect responses. Aphasic individuals looked first to the moved-element picture and then to a competitor following the verb in the incorrect trials. However, they only showed looks to the moved-element picture for the correct trials, parallel to control participants. Furthermore, aphasic individuals' fixations during movement sentences were just as fast as control participants' fixations. These results are unexpected under slowed-processing accounts of aphasic comprehension deficits, in which the source of failed comprehension should be delayed application of the same processing routines used in successful comprehension. This pattern is also unexpected if aphasic individuals are using qualitatively different strategies than normals to comprehend such sentences, as under impaired-representation accounts of agrammatism. Instead, it suggests that agrammatic aphasic individuals may process wh- questions similarly to unimpaired individuals, but that this process often fails to facilitate off-line comprehension of sentences with wh- movement.

  13. Submillimeter Bolometer Array for the CSO

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Hunter, T. R.; Benford, D. J.; Phillips, T. G.

    We are building a bolometer array for use as a submillimeter continuum camera for the Caltech Submillimeter Observatory (CSO) located on Mauna Kea. This effort is a collaboration with Moseley et al. at Goddard Space Flight Center, who have developed the technique for fabricating monolithic bolometer arrays on Si wafers, as well as a sophisticated data taking system to use with these arrays (Moseley et al. 1984). Our primary goal is to construct a camera with 1x24 bolometer pixels operating at 350 and 450 microns using a 3He refrigerator. The monolithic bolometer arrays are fabricated using the techniques of photolithography and micromachining. Each pixel of the array is suspended by four thin Si legs 2 mm long and 12x14 square microns in cross section. These thin legs, obtained by wet Si etching, provide the weak thermal link between the bolometer pixel and the heat sink. A thermistor is formed on each bolometer pixel by P implantation compensated with 50% B. The bolometer array to be used for the camera will have a pixel size of 1x2 square millimeters, which is about half of the CSO beam size at a wavelength of 400 microns. We plan to use mirrors to focus the beam onto the pixels instead of Winston cones. In order to eliminate background radiation from warm surroundings reaching the bolometers, cold baffles will be inserted along the beam passages. To increase the bolometer's absorption of radiation, a thin metal film will be deposited on the back of each bolometer pixel. It has been demonstrated that a proper impedance match of the bolometer element can increase the bolometer absorption efficiency to about 50% (Clarke et al. 1977). The use of the baffle approach to illumination will make it easier for us to expand to more pixels in the future. The first-stage amplification will be performed with cold FETs connected to each bolometer pixel. Signals from each bolometer will be digitized using a 16 bit A/D with differential inputs. The digitizing frequency will be up to 40 kHz, though 1 kHz should be sufficient for our application. The output from the A/D will be fed to a digital signal processing (DSP) board via fiber optic cables, which will minimize the RF interference to the bolometers. To date, we have assembled a 1x24 bolometer array, and we are in the process of testing it. We are also designing and building cryogenic optics. The data acquisition hardware, as well as the electronics, is nearly complete. Our goal is to get the instrument working after a new chopping secondary mirror is installed at the CSO in the summer of 1994. References: Moseley, S.H. et al. 1984, J. Appl. Phys., 56, 1257; Clarke et al. 1977, J. Appl. Phys., 48, 4865.

  14. X-Ray Fluorescence Imaging of Ancient Artifacts

    NASA Astrophysics Data System (ADS)

    Thorne, Robert; Geil, Ethan; Hudson, Kathryn; Crowther, Charles

    2011-03-01

    Many archaeological artifacts feature inscribed and/or painted text or figures which, through erosion and aging, have become difficult or impossible to read with conventional methods. Often, however, the pigments in paints contain metallic elements, and traces may remain even after visible markings are gone. A promising non-destructive technique for revealing these remnants is X-ray fluorescence (XRF) imaging, in which a tightly focused beam of monochromatic synchrotron radiation is raster scanned across a sample. At each pixel, an energy-dispersive detector records a fluorescence spectrum, which is then analyzed to determine element concentrations. In this way, a map of various elements is made across a region of interest. We have successfully XRF imaged ancient Greek, Roman, and Mayan artifacts, and in many cases, the element maps have revealed significant new information, including previously invisible painted lines and traces of iron from tools used to carve stone tablets. X-ray imaging can be used to determine an object's provenance, including the region where it was produced and whether it is authentic or a copy.
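    As a rough illustration of the mapping step described above (per-pixel spectra turned into element maps), the Python sketch below integrates counts in element-specific energy windows; the window limits and array shapes are illustrative assumptions, and real XRF analyses typically fit peaks rather than summing raw windows.

```python
# Sketch: building element maps from a per-pixel XRF spectrum cube by
# integrating counts in element-specific energy windows (illustrative only).
import numpy as np

def element_maps(spectra, energy_axis, windows):
    """spectra: (rows, cols, channels) array of per-pixel counts.
    windows: dict mapping element name -> (e_min, e_max) in keV."""
    maps = {}
    for element, (e_min, e_max) in windows.items():
        sel = (energy_axis >= e_min) & (energy_axis <= e_max)
        maps[element] = spectra[:, :, sel].sum(axis=2)
    return maps

# Example with synthetic data and K-alpha windows for Fe and Cu.
energy = np.linspace(0.0, 20.0, 2048)                        # keV
scan = np.random.poisson(1.0, size=(64, 64, energy.size))
maps = element_maps(scan, energy, {"Fe": (6.2, 6.6), "Cu": (7.9, 8.3)})
print({name: m.shape for name, m in maps.items()})
```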

  15. Quantification of 2D elemental distribution maps of intermediate-thick biological sections by low energy synchrotron μ-X-ray fluorescence spectrometry

    NASA Astrophysics Data System (ADS)

    Kump, P.; Vogel-Mikuš, K.

    2018-05-01

    Two fundamental-parameter (FP) based models for quantification of 2D elemental distribution maps of intermediate-thick biological samples by synchrotron low energy μ-X-ray fluorescence spectrometry (SR-μ-XRF) are presented and applied to elemental analysis in experiments with monochromatic focused photon beam excitation at two low energy X-ray fluorescence beamlines—TwinMic, Elettra Sincrotrone Trieste, Italy, and ID21, ESRF, Grenoble, France. The models treat intermediate-thick biological samples as composed of the measured elements, which are the sources of the measurable spectral lines, and of a residual matrix, which affects the measured intensities through absorption. In the first model a fixed residual matrix of the sample is assumed, while in the second model the residual matrix is obtained by iterative refinement of the elemental concentrations and of the adjusted residual matrix. The absorption of the incident focused beam in the biological sample at each scanned pixel position, determined from the output of a photodiode or a CCD camera, is applied as a control in the iterative quantification procedure.

  16. Analysis of painted arts by energy sensitive radiographic techniques with the Pixel Detector Timepix

    NASA Astrophysics Data System (ADS)

    Zemlicka, J.; Jakubek, J.; Kroupa, M.; Hradil, D.; Hradilova, J.; Mislerova, H.

    2011-01-01

    Non-invasive techniques utilizing X-ray radiation offer a significant advantage in scientific investigations of painted arts and other cultural artefacts such as painted artworks or statues. In addition, there is also great demand for a mobile analytical and real-time imaging device given the fact that many fine arts cannot be transported. The highly sensitive hybrid semiconductor pixel detector, Timepix, is capable of detecting and resolving subtle and low-contrast differences in the inner composition of a wide variety of objects. Moreover, it is able to map the surface distribution of the contained elements. Several transmission and emission techniques are presented which have been proposed and tested for the analysis of painted artworks. This study focuses on the novel techniques of X-ray transmission radiography (conventional and energy sensitive) and X-ray induced fluorescence imaging (XRF) which can be realised at the table-top scale with the state-of-the-art pixel detector Timepix. Transmission radiography analyses the changes in the X-ray beam intensity caused by specific attenuation of different components in the sample. The conventional approach uses all energies from the source spectrum for the creation of the image while the energy sensitive alternative creates images in given energy intervals which enable identification and separation of materials. The XRF setup is based on the detection of characteristic radiation induced by X-ray photons through a pinhole geometry collimator. The XRF method is extremely sensitive to the material composition but it creates only surface maps of the elemental distribution. For the purpose of the analysis several sets of painted layers have been prepared in a restoration laboratory. The composition of these layers corresponds to those of real historical paintings from the 19th century. An overview of the current status of our methods will be given with respect to the instrumentation and the application in the field of cultural heritage.

  17. Parallel Information Processing (Image Transmission Via Fiber Bundle and Multimode Fiber)

    NASA Technical Reports Server (NTRS)

    Kukhtarev, Nicholai

    2003-01-01

    Growing demand for visual, user-friendly representation of information inspires the search for new methods of image transmission. Currently used in-series (sequential) methods of information processing are inherently slow and are designed mainly for transmission of one- or two-dimensional arrays of data. Conventional transmission of data by fibers requires many fibers with an array of laser diodes and photodetectors. In practice, fiber bundles are also used for transmission of images. An image is formed on the entrance surface of the fiber-optic bundle, and each fiber transmits the incident image to the exit surface. Since the fibers do not preserve phase, only a 2D intensity distribution can be transmitted in this way. Each single-mode fiber transmits only one pixel of an image. Multimode fibers may also be used, so that each mode represents a different pixel element. Direct transmission of an image through a multimode fiber is hindered by mode scrambling and phase randomization. To overcome these obstacles, wavelength- and time-division multiplexing have been used, with each pixel transmitted on a separate wavelength or time interval. Phase-conjugate techniques have also been tested, but only in an impractical scheme in which the reconstructed image returns to the fiber input end. Another method of three-dimensional imaging over single-mode fibers has been demonstrated using laser light of reduced spatial coherence. Coherence encoding, needed for transmission of images by this method, was realized with a grating interferometer or with the help of an acousto-optic deflector. We suggest a simple, practical holographic method of image transmission over a single multimode fiber or over a fiber bundle with coherent light, using filtering by holographic optical elements. Originally this method was successfully tested for a single multimode fiber. In this research we have modified the holographic method for transmission of laser-illuminated images over a commercially available fiber bundle (fiber endoscope, or fiberscope).

  18. Sub-pixel analysis to support graphic security after scanning at low resolution

    NASA Astrophysics Data System (ADS)

    Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve

    2006-02-01

    Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital is often much extended in time, and proceeds by long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes. But this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check-21 act (Check Clearing for the 21st Century act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use here this situation of check scanning as our primary benchmark for graphic security features after scanning. We will first present a quick review of the most common graphic security features currently found on checks, with their specific purpose, qualities and disadvantages, and we demonstrate their poor survivability after scanning in the average scanning conditions expected from the Check-21 Act. We will then present a novel method of measurement of distances between and rotations of line elements in a scanned image: Based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can then determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally we will apply our method to fraud detection of documents after gray-scale scanning at 300dpi resolution. We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced by the illegitimate process.
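    The print-model-based refinement used by the authors is not detailed in the abstract. As a generic illustration of how a measurement can be pushed below one scanning pixel, the sketch below locates an edge in a 1-D scan profile by fitting a parabola to the gradient around its discrete maximum; it is a stand-in technique, not the paper's method.

```python
# Sketch: sub-pixel localization of an edge in a 1-D scan profile via a
# parabolic fit to the gradient magnitude around its discrete maximum.
import numpy as np

def subpixel_edge(profile):
    """Return the fractional pixel position of the strongest edge."""
    grad = np.abs(np.diff(profile.astype(float)))
    k = int(np.argmax(grad))
    if k == 0 or k == len(grad) - 1:
        return float(k)                       # no neighbours to interpolate with
    y0, y1, y2 = grad[k - 1], grad[k], grad[k + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return k + 0.5 + offset                   # +0.5: diff samples sit between pixels

# Example: a blurred step edge located between integer sample positions.
x = np.arange(20)
profile = 1.0 / (1.0 + np.exp(-(x - 9.3) * 2.0))
print(subpixel_edge(profile))                 # close to 9.3
```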

  19. Deficiencies and toxicities of trace elements and micronutrients in tropical soils: Limitations of knowledge and future research needs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, B.E.

    1997-01-01

    This article reviews present knowledge concerning deficiencies and toxicities of trace elements and micronutrients in tropical soils. The myth that all tropical soils are highly leached and nutrient-poor is challenged. Continuing use of the term laterite by ecologists and geologists is criticized and adoption of plinthite is urged. The trace element content of plinthite and its possible influence on micronutrient availability are described. Micronutrient limitations of tropical agriculture are related to soil type and formation, and the special problem of aluminum toxicity in acid soils is discussed in both agricultural and ecological contexts. Studies of micronutrient cycling in tropical forests or savannas are needed to supplement the emerging picture of the complexities of major element cycles in these ecosystems.

  20. Reducing Speckle In One-Look SAR Images

    NASA Technical Reports Server (NTRS)

    Nathan, K. S.; Curlander, J. C.

    1990-01-01

    Local-adaptive-filter algorithm incorporated into digital processing of synthetic-aperture-radar (SAR) echo data to reduce speckle in resulting imagery. Involves use of image statistics in vicinity of each picture element, in conjunction with original intensity of element, to estimate brightness more nearly proportional to true radar reflectance of corresponding target. Increases ratio of signal to speckle noise without substantial degradation of resolution common to multilook SAR images. Adapts to local variations of statistics within scene, preserving subtle details. Computationally simple. Lends itself to parallel processing of different segments of image, making possible increased throughput.
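    The NTRS summary describes a local adaptive filter built from neighbourhood statistics and the original pixel intensity. A widely used filter of this family is the Lee filter; the sketch below is a simplified Lee-type filter for intensity imagery and is offered only as an illustration, not as the exact JPL algorithm.

```python
# Sketch: a simplified Lee-type adaptive speckle filter.  Local mean and
# variance in a sliding window are blended with the original pixel value.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, window=7, looks=1.0):
    img = img.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    noise_var = (mean * mean) / looks               # speckle variance for L looks
    gain = var / np.maximum(var + noise_var, 1e-12)
    return mean + gain * (img - mean)               # flat areas -> mean, edges -> original
```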

  1. A localization algorithm of adaptively determining the ROI of the reference circle in image

    NASA Astrophysics Data System (ADS)

    Xu, Zeen; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    Aiming to accurately position detection probes underwater, this paper proposes a method based on computer vision. The idea is as follows: first, because the heat tubes appear approximately circular in the image, we find a circle whose physical location is well known and set it as the reference circle; second, we calculate the pixel offset between the reference circle and the probes in the picture and adjust the steering gear according to this offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes under test, and thus determine the precise location of the probes underwater. However, choosing the reference circle in the image is a difficult problem. In this paper, we propose an algorithm that adaptively determines the area containing the reference circle; within this area there is only one circle, and that circle is the reference circle. The test results show that the accuracy of extracting the reference circle from the whole picture, without using an ROI (region of interest) for the reference circle, is only 58.76%, whereas the proposed algorithm reaches 95.88%. The experimental results indicate that the proposed algorithm can effectively improve the efficiency of tube detection.
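    As an illustration of the two steps described above (detect the reference circle inside a restricted ROI, then compute the pixel offset to the probes), the sketch below uses OpenCV's Hough circle transform. The ROI bounds, probe coordinates, detector parameters and file name are hypothetical placeholders; the paper's adaptive ROI rule is not reproduced here.

```python
# Sketch: find the reference circle inside a restricted ROI and compute the
# pixel offset to the probe position.  ROI, probe coordinates and Hough
# parameters are illustrative placeholders, not the paper's values.
import cv2
import numpy as np

def reference_circle_offset(gray, roi, probe_xy):
    x0, y0, x1, y1 = roi                       # ROI assumed to contain one circle
    patch = gray[y0:y1, x0:x1]
    circles = cv2.HoughCircles(patch, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=120, param2=40, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    cx, cy, _r = circles[0][0]                 # take the single detected circle
    cx, cy = cx + x0, cy + y0                  # back to full-image coordinates
    dx = probe_xy[0] - cx
    dy = probe_xy[1] - cy
    return dx, dy                              # pixel offset used to drive the steering gear

img = cv2.imread("heat_tubes.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
blurred = cv2.GaussianBlur(img, (5, 5), 0)
print(reference_circle_offset(blurred, roi=(100, 100, 400, 400), probe_xy=(320, 240)))
```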

  2. Putting You in the Director's Chair or Move Aside, Cecil B. DeMille; A Guide to Guiding Student Production.

    ERIC Educational Resources Information Center

    Heller, Dawn H.; Palermo, Lucas M.

    The elements of planning, producing, and evaluating high school student media production are described. Thirteen different media formats, audiotape, posters, displays, transparencies, mobiles, models, games, slide/cassettes, motion pictures, multiimage, videotapes, kits, and miscellaneous, are discussed. For each medium, there is a statement of…

  3. Stimulus Competition in Pre/Post and Online Ratings in an Evaluative Learning Design

    ERIC Educational Resources Information Center

    Purkis, Helena M.; Lipp, Ottmar V.

    2010-01-01

    Evaluative learning is said to differ from Pavlovian associative learning in that it reflects stimulus contiguity, not contingency. Thus, evaluative learning should not be subject to stimulus competition, a proposal tested in the current experiments. Participants were presented in elemental and compound training phases with pictures of shapes as…

  4. System support documentation: IDIMS FUNCTION AMOEBA

    NASA Technical Reports Server (NTRS)

    Bryant, J.

    1982-01-01

    A listing is provided for AMOEBA, a clustering program based on a spatial-spectral model for image data. The program is fast and automatic (in the sense that no parameters are required), and classifies each picture element into classes which are determined internally. As an IDIMS function, no limit on the size of the image is imposed.

  5. Plural Formation by Heritage Bilinguals of Spanish: A Phonological Analysis of a Morphological Variable

    ERIC Educational Resources Information Center

    Campbell, Tasha M.

    2017-01-01

    This dissertation explores Spanish nominal plural formation from a morphophonological perspective. The primary objective is to better understand heritage bilinguals' (HBs') phonological categorization of the morphological element of number in their heritage language. This is done by way of picture-naming elicitation tasks of consonant-final nouns…

  6. Mir survey just before docking

    NASA Image and Video Library

    1997-05-16

    STS084-730-002 (15-24 May 1997) --- A Space Shuttle Atlantis point-of-view frame showing the docking port and target during separation from Russia's Mir Space Station. The picture should be held with the retracted Kristall solar array at right. Other elements partially visible are Kvant-2 (top), Spektr (bottom) and the Core Module (left).

  7. Effects of Picture Activity Schedules on Tasks Completed

    ERIC Educational Resources Information Center

    Morrisett, Michael Eric

    2015-01-01

    Self-determination is the freedom to make choices that impact an individual's life. Many people would agree that self-determination leads to an enhanced quality of life, and choice making is considered a central element in self-determination. Most learn choice making through a gradual release of responsibility by caregivers throughout their…

  8. Graphic Design for Researchers

    ERIC Educational Resources Information Center

    Regional Educational Laboratory, 2014

    2014-01-01

    Technology continues to radically change how we create and consume information. Today, news, reports, and other material are often delivered quickly through pictures, colors, or other eye-catching visual elements. Words still matter, but they may be tweeted, viewed on a smartphone, or placed in a call-out box in a report. The design of these items…

  9. The Implicit Role of First-Years' Higher Education Faculties

    ERIC Educational Resources Information Center

    Abi-Raad, Maurice

    2018-01-01

    The higher education experience is a challenge for first-year students. One of the challenges facing a generation of youth is attaining professional skills, academic experience and occupational training. In order to have a clear picture of the challenges involved in first-year experiences it is important to examine elements impacting first-year…

  10. Numerical stability of the error diffusion concept

    NASA Astrophysics Data System (ADS)

    Weissbach, Severin; Wyrowski, Frank

    1992-10-01

    The error diffusion algorithm is an easily implementable means of handling nonlinearities in signal processing, e.g. in picture binarization and in the coding of diffractive elements. The numerical stability of the algorithm depends on the choice of the diffusion weights. A criterion for the stability of the algorithm is presented and evaluated for some examples.
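    For readers unfamiliar with the algorithm, the sketch below shows 1-bit error diffusion with the Floyd-Steinberg weight set (7/16, 3/16, 5/16, 1/16), one common and stable choice; the paper's stability criterion concerns how such diffusion weights may be chosen more generally.

```python
# Sketch: binarization by error diffusion with Floyd-Steinberg weights.
# The quantization error at each pixel is pushed onto unvisited neighbours.
import numpy as np

def error_diffuse(img):
    """img: 2-D float array in [0, 1]; returns a 0/1 array of the same shape."""
    work = img.astype(float).copy()
    out = np.zeros_like(work)
    h, w = work.shape
    for y in range(h):
        for x in range(w):
            new = 1.0 if work[y, x] >= 0.5 else 0.0
            err = work[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out
```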

  11. An alternative approach to depth of field which avoids the blur circle and uses the pixel pitch

    NASA Astrophysics Data System (ADS)

    Schuster, Norbert

    2015-09-01

    Modern thermal imaging systems increasingly use uncooled detectors. High-volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480), which limits the applicability of modern image-treatment procedures such as wavefront coding. On the other hand, uncooled detectors demand lenses with fast F-numbers near 1.0. What are the limits on resolution if the target to be analyzed changes its distance to the camera system? The goal of implementing lens arrangements without any focusing mechanism demands a deeper quantification of the Depth of Field problem. The proposed Depth of Field approach avoids the classic "accepted image blur circle". It is based on a camera-specific depth of focus which is transformed into object space by paraxial relations. The traditional Rayleigh criterion is based on the unaberrated Point Spread Function and delivers a first-order relation for the depth of focus; hence, neither the actual lens resolution nor the detector impact is considered. The camera-specific depth of focus respects several camera properties: lens aberrations at the actual F-number, detector size and pixel pitch. The through-focus MTF is the basis of the camera-specific depth of focus. It has a nearly symmetric course around the position of sharpest imaging and is evaluated at the detector's Nyquist frequency. The camera-specific depth of focus is thus the axial distance in front of and behind the sharp image plane over which the through-focus MTF remains above 0.25. This camera-specific depth of focus is transferred into object space by paraxial relations. This yields a generally applicable Depth of Field diagram which can be applied to lenses realizing a lateral magnification range of -0.05…0. Easy-to-handle formulas relate the hyperfocal distance to the borders of the Depth of Field as a function of the sharp distance; these relations are in line with classical Depth of Field theory. Thermal pictures, taken by different IR-camera cores, illustrate the new approach. The frequently requested graph "MTF versus distance" chooses half the Nyquist frequency as reference. The paraxial transfer of the through-focus MTF into object space distorts the MTF curve: a hard drop at distances closer than the sharp distance, a smooth drop at further distances. The formula of a general Diffraction-Limited Through-Focus MTF (DLTF) is derived, so that arbitrary detector-lens combinations can be discussed. Free variables in this analysis are the waveband, the aperture-based F-number (lens) and the pixel pitch (detector). The DLTF discussion provides physical limits and technical requirements. The development of detectors with pixel pitches smaller than the captured wavelength in the LWIR region generates a special challenge for optical design.
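    For comparison with the classical theory the paper refers to, the sketch below evaluates the textbook hyperfocal distance and Depth of Field borders, with the pixel pitch standing in for the traditional accepted blur circle. These are not the paper's camera-specific, through-focus-MTF-based formulas; the lens and detector values in the example are assumed.

```python
# Sketch: classical hyperfocal distance and depth-of-field borders, with the
# pixel pitch used in place of the traditional "accepted blur circle".
def hyperfocal(f_mm, f_number, pitch_mm):
    return f_mm * f_mm / (f_number * pitch_mm) + f_mm

def dof_borders(s_mm, f_mm, f_number, pitch_mm):
    """Near and far limits of acceptable sharpness for focus distance s_mm."""
    h = hyperfocal(f_mm, f_number, pitch_mm)
    near = h * s_mm / (h + (s_mm - f_mm))
    far = float("inf") if s_mm - f_mm >= h else h * s_mm / (h - (s_mm - f_mm))
    return near, far

# Example (assumed values): 14 mm f/1.0 LWIR lens, 17 um pitch, focused at 5 m.
print(hyperfocal(14.0, 1.0, 0.017))             # ~11544 mm (about 11.5 m)
print(dof_borders(5000.0, 14.0, 1.0, 0.017))    # ~3.5 m to ~8.8 m
```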

  12. Ubiquitous picture-rich content representation

    NASA Astrophysics Data System (ADS)

    Wang, Wiley; Dean, Jennifer; Muzzolini, Russ

    2010-02-01

    The amount of digital images taken by the average consumer is consistently increasing. People enjoy the convenience of storing and sharing their pictures through online (digital) and offline (traditional) media. A set of pictures can be uploaded to online photo services, web blogs and social network websites. Alternatively, these images can be used to generate prints, cards, photo books or other photo products. Through uploading and sharing, images are easily transferred from one format to another, and often a different set of associated content (text, tags) is created across formats. For example, on his web blog, a user may journal his experiences of his recent travel; on his social network website, his friends tag and comment on the pictures; in his online photo album, some pictures are titled and keyword-tagged. When the user wants to tell a complete story, perhaps in a photo book, he must collect, across all formats, the pictures, writings, comments, etc. and organize them in a book format. The user has to arrange the content of his trip in each format; the arrangement, the associations between the images, tags, keywords and text, cannot be shared with other formats. In this paper, we propose a system that allows the content to be easily created and shared across various digital media formats. We define a uniform data association structure to connect images, documents, comments, tags, keywords and other data. This content structure allows the user to switch representation formats without reediting. The framework for each format can emphasize (display or hide) content elements based on preference. For example, a slide show view will emphasize the display of pictures with limited text; a blog view will display highlighted images and journal text; and the photo book will try to fit in all images and text content. In this paper, we discuss the strategy for associating pictures with text content so that it can naturally tell a story. We also list sample solutions for different formats such as picture view, blog view and photo book view.
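    The paper does not publish a concrete schema for its data association structure; the toy sketch below shows one possible shape for such a structure, with three views rendering the same underlying items. All field names and the view logic are illustrative assumptions.

```python
# Sketch: a toy content-association structure shared by several views.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentItem:
    image_url: str = ""
    text: str = ""
    tags: List[str] = field(default_factory=list)
    comments: List[str] = field(default_factory=list)

@dataclass
class Story:
    title: str
    items: List[ContentItem]

    def slideshow_view(self):
        # Emphasize pictures, keep text minimal.
        return [it.image_url for it in self.items if it.image_url]

    def blog_view(self):
        # Highlighted images plus journal text.
        return [(it.image_url, it.text) for it in self.items]

    def photobook_view(self):
        # Try to fit all images, text and comments.
        return [(it.image_url, it.text, it.comments) for it in self.items]
```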

  13. Functional Fixedness in Creative Thinking Tasks Depends on Stimulus Modality.

    PubMed

    Chrysikou, Evangelia G; Motyka, Katharine; Nigro, Cristina; Yang, Song-I; Thompson-Schill, Sharon L

    2016-11-01

    Pictorial examples during creative thinking tasks can lead participants to fixate on these examples and reproduce their elements even when yielding suboptimal creative products. Semantic memory research may illuminate the cognitive processes underlying this effect. Here, we examined whether pictures and words differentially influence access to semantic knowledge for object concepts depending on whether the task is close- or open-ended. Participants viewed either names or pictures of everyday objects, or a combination of the two, and generated common, secondary, or ad hoc uses for them. Stimulus modality effects were assessed quantitatively through reaction times and qualitatively through a novel coding system, which classifies creative output on a continuum from top-down-driven to bottom-up-driven responses. Both analyses revealed differences across tasks. Importantly, for ad hoc uses, participants exposed to pictures generated more top-down-driven responses than those exposed to object names. These findings have implications for accounts of functional fixedness in creative thinking, as well as theories of semantic memory for object concepts.

  14. Functional Fixedness in Creative Thinking Tasks Depends on Stimulus Modality

    PubMed Central

    Chrysikou, Evangelia G.; Motyka, Katharine; Nigro, Cristina; Yang, Song-I; Thompson-Schill, Sharon L.

    2015-01-01

    Pictorial examples during creative thinking tasks can lead participants to fixate on these examples and reproduce their elements even when yielding suboptimal creative products. Semantic memory research may illuminate the cognitive processes underlying this effect. Here, we examined whether pictures and words differentially influence access to semantic knowledge for object concepts depending on whether the task is close- or open-ended. Participants viewed either names or pictures of everyday objects, or a combination of the two, and generated common, secondary, or ad hoc uses for them. Stimulus modality effects were assessed quantitatively through reaction times and qualitatively through a novel coding system, which classifies creative output on a continuum from top-down-driven to bottom-up-driven responses. Both analyses revealed differences across tasks. Importantly, for ad hoc uses, participants exposed to pictures generated more top-down-driven responses than those exposed to object names. These findings have implications for accounts of functional fixedness in creative thinking, as well as theories of semantic memory for object concepts. PMID:28344724

  15. Infrared retina

    DOEpatents

    Krishna, Sanjay [Albuquerque, NM; Hayat, Majeed M [Albuquerque, NM; Tyo, J Scott [Tucson, AZ; Jang, Woo-Yong [Albuquerque, NM

    2011-12-06

    Exemplary embodiments provide an infrared (IR) retinal system and method for making and using the IR retinal system. The IR retinal system can include adaptive sensor elements, whose properties including, e.g., spectral response, signal-to-noise ratio, polarization, or amplitude can be tailored at pixel level by changing the applied bias voltage across the detector. "Color" imagery can be obtained from the IR retinal system by using a single focal plane array. The IR sensor elements can be spectrally, spatially and temporally adaptive using quantum-confined transitions in nanoscale quantum dots. The IR sensor elements can be used as building blocks of an infrared retina, similar to cones of human retina, and can be designed to work in the long-wave infrared portion of the electromagnetic spectrum ranging from about 8 .mu.m to about 12 .mu.m as well as the mid-wave portion ranging from about 3 .mu.m to about 5 .mu.m.

  16. Status of the isophot detector development

    NASA Technical Reports Server (NTRS)

    Wolf, J.; Lemke, D.; Burgdorf, M.; Groezinger, U.; Hajduk, CH.

    1989-01-01

    ISOPHOT is one of the four focal plane experiments of the European Space Agency's Infrared Space Observatory (ISO). Scheduled for a 1993 launch, it will operate extrinsic silicon and germanium photoconductors at low temperature and low background during the longer than 18 month mission. These detectors cover the wavelength range from 2.5 to 200 microns and are used as single elements and in arrays. A cryogenic preamplifier was developed to read out a total number of 223 detector pixels.

  17. Optics for MUSIC: a new (sub)millimeter camera for the Caltech Submillimeter Observatory

    NASA Astrophysics Data System (ADS)

    Sayers, Jack; Czakon, Nicole G.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Glenn, Jason; Golwala, Sunil R.; Hollister, Matt I.; LeDuc, Henry G.; Mazin, Benjamin A.; Maloney, Philip R.; Noroozian, Omid; Nguyen, Hien T.; Schlaerth, James A.; Siegel, Seth; Vaillancourt, John E.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas

    2010-07-01

    We will present the design and implementation, along with calculations and some measurements of the performance, of the room-temperature and cryogenic optics for MUSIC, a new (sub)millimeter camera we are developing for the Caltech Submm Observatory (CSO). The design consists of two focusing elements in addition to the CSO primary and secondary mirrors: a warm off-axis elliptical mirror and a cryogenic (4 K) lens. These optics will provide a 14 arcmin field of view that is diffraction limited in all four of the MUSIC observing bands (2.00, 1.33, 1.02, and 0.86 mm). A cold (4 K) Lyot stop will be used to define the primary mirror illumination, which will be maximized while keeping spillover at the sub-1% level. The MUSIC focal plane will be populated with broadband phased antenna arrays that efficiently couple over roughly a factor of 3 in bandwidth, and each pixel on the focal plane will be read out via a set of four lumped-element filters that define the MUSIC observing bands (i.e., each pixel on the focal plane simultaneously observes in all four bands). Finally, a series of dielectric and metal-mesh low-pass filters have been implemented to reduce the optical power load on the MUSIC cryogenic stages to a quasi-negligible level while maintaining good in-band transmission.

  18. Improved adjoin-list for quality-guided phase unwrapping based on red-black trees

    NASA Astrophysics Data System (ADS)

    Cruz-Santos, William; López-García, Lourdes; Rueda-Paz, Juvenal; Redondo-Galvan, Arturo

    2016-08-01

    Quality-guided phase unwrapping is an important technique that is based on quality maps which guide the unwrapping process. The efficiency of this technique depends on the implementation of the adjoin-list data structure. Several proposals improve the adjoin list: Ming Zhao et al. proposed an Indexed Interwoven Linked List (I2L2) that is based on dividing the quality values into intervals of equal size and inserting into a linked list those pixels with quality values within a certain interval. Ming Zhao and Qian Kemao proposed an improved I2L2, replacing the linked list in each interval by a heap data structure, which allows efficient procedures for insertion and deletion. In this paper, we propose an improved I2L2 which uses red-black tree (RBT) data structures for each interval. The main goal of our proposal is to avoid the unbalanced behaviour of the heap and thus reduce the time complexity of insertion. In order to maintain the same efficiency as the heap when deleting an element, we provide an efficient way to remove the pixel with the highest quality value from the RBT, using a pointer to the rightmost element in the tree. We also provide a new partition strategy for the phase values that is based on a density criterion. Experimental results applied to phase-shifting profilometry are shown for large images.
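    To make the adjoin-list idea concrete, the sketch below partitions quality values into intervals and keeps each interval ordered, popping the highest-quality pixel first. Python has no built-in red-black tree, so a bisect-maintained sorted list stands in for the per-interval RBT, and the equal-width intervals used here are not the density-based partition proposed in the paper.

```python
# Sketch of the adjoin-list interface: quality values are bucketed into
# intervals, each interval kept ordered by quality.  A sorted list maintained
# with bisect stands in for the per-interval red-black tree.
import bisect

class AdjoinList:
    def __init__(self, q_min, q_max, n_intervals=64):
        self.q_min, self.q_max = q_min, q_max
        self.n = n_intervals
        self.buckets = [[] for _ in range(n_intervals)]   # each sorted by quality

    def _bucket(self, quality):
        t = (quality - self.q_min) / (self.q_max - self.q_min + 1e-12)
        return min(self.n - 1, max(0, int(t * self.n)))

    def push(self, quality, pixel):
        bisect.insort(self.buckets[self._bucket(quality)], (quality, pixel))

    def pop_best(self):
        """Remove and return the (quality, pixel) pair with the highest quality."""
        for bucket in reversed(self.buckets):
            if bucket:
                return bucket.pop()            # rightmost element = highest quality
        raise IndexError("adjoin list is empty")
```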

  19. 640 X 480 PtSi MOS infrared imager

    NASA Astrophysics Data System (ADS)

    Sauer, Donald J.; Shallcross, Frank V.; Hseuh, Fu-Lung; Meray, Grazyna M.; Levine, Peter A.; Gilmartin, Harvey R.; Villani, Thomas S.; Esposito, Benjamin J.; Tower, John R.

    1992-09-01

    The design and performance of a 640 (H) X 480 (V) element PtSi Schottky-barrier infrared image sensor employing a low-noise MOS X-Y addressable readout multiplexer and on-chip low-noise output amplifier is described. The imager achieves an NEDT of 0.10 K at 30 Hz frame rates with f/1.5 optics (300 K background). The MOS design provides a measured saturation level of 1.5 × 10⁶ electrons (5 V bias) and a noise floor of 300 rms electrons per pixel. A multiplexed horizontal/vertical input address port and on-chip decoding is used to load scan data into CMOS horizontal and vertical scanning registers. This allows random access to any sub-frame in the 640 X 480 element focal plane array. By changing the digital pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or non-interlaced format, and the integration time may be varied over a wide range (60 microseconds to > 30 ms for RS-170 operation), resulting in 'electronic shutter' variable exposure control. The pixel size of 24 micrometers X 24 micrometers results in a fill factor of 38% for 1.5 micrometer process design rules. The overall die size for the IR imager is 13.7 mm X 17.2 mm. All digital inputs to the chip are TTL compatible and include ESD protection.

  20. Image analysis of representative food structures: application of the bootstrap method.

    PubMed

    Ramírez, Cristian; Germain, Juan C; Aguilera, José M

    2009-08-01

    Images (for example, photomicrographs) are routinely used as qualitative evidence of the microstructure of foods. In quantitative image analysis it is important to estimate the area (or volume) to be sampled, the field of view, and the resolution. The bootstrap method is proposed to estimate the size of the sampling area as a function of the coefficient of variation (CV(Bn)) and standard error (SE(Bn)) of the bootstrap, taking sub-areas of different sizes. The bootstrap method was applied to simulated and real structures (apple tissue). For simulated structures, 10 computer-generated images were constructed containing 225 black circles (elements) and different coefficients of variation (CV(image)). For apple tissue, 8 images of apple tissue containing cellular cavities with different CV(image) were analyzed. Results confirmed that for simulated and real structures, increasing the size of the sampling area decreased the CV(Bn) and SE(Bn). Furthermore, there was a linear relationship between CV(image) and CV(Bn). For example, to obtain a CV(Bn) = 0.10 in an image with CV(image) = 0.60, a sampling area of 400 x 400 pixels (11% of the whole image) was required, whereas if CV(image) = 1.46, a sampling area of 1000 x 1000 pixels (69% of the whole image) became necessary. This suggests that a large dispersion of element sizes in an image requires increasingly larger sampling areas or a larger number of images.
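    A minimal sketch of the resampling idea is given below: random sub-areas of a fixed size are drawn from a binary structure image, a feature (here the area fraction of non-zero pixels) is measured on each, and the coefficient of variation and bootstrap standard error are computed as a function of sub-area size. The feature, sub-area sizes and resampling details are assumptions for illustration.

```python
# Sketch: bootstrap estimate of how sampling-area size affects measurement
# spread.  The feature measured here is the area fraction of non-zero pixels.
import numpy as np

def bootstrap_cv(binary_img, box, n_boot=500, seed=None):
    """CV and bootstrap SE of the feature over n_boot random box-sized sub-areas."""
    rng = np.random.default_rng(seed)
    h, w = binary_img.shape
    stats = np.empty(n_boot)
    for i in range(n_boot):
        y = rng.integers(0, h - box + 1)
        x = rng.integers(0, w - box + 1)
        stats[i] = binary_img[y:y + box, x:x + box].mean()   # area fraction
    cv = stats.std(ddof=1) / stats.mean()
    se = stats.std(ddof=1)                                   # std of bootstrap replicates
    return cv, se

# Example: the CV shrinks as the sampling area grows.
img = (np.random.default_rng(0).random((1000, 1000)) > 0.8).astype(float)
for box in (100, 200, 400, 800):
    print(box, bootstrap_cv(img, box, seed=1))
```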
